Sample records for normal probability model

  1. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations do not provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probability of exceeding threshold values are required. Our contribution to filling this knowledge gap is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito stochastic differential equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model, with the spatial moments replacing the statistical moments, can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and, for the first time, shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
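
    A minimal numerical sketch of the idea summarized above (not the authors' code): a Beta pdf parameterized by the first two moments of the normalized concentration, compared with a Normal pdf having the same moments for an exceedance probability. The moment and threshold values are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    def beta_from_moments(mean, var):
        """Beta(a, b) on [0, 1] matching a given mean and variance.

        Assumes concentrations are normalized by the source concentration C0,
        so they lie in [0, 1]; requires var < mean * (1 - mean).
        """
        k = mean * (1.0 - mean) / var - 1.0
        return mean * k, (1.0 - mean) * k

    # Illustrative first two moments of the (normalized) local concentration
    c_mean, c_var = 0.2, 0.02
    a, b = beta_from_moments(c_mean, c_var)

    threshold = 0.5  # illustrative regulatory threshold (normalized units)
    p_beta = stats.beta.sf(threshold, a, b)
    p_norm = stats.norm.sf(threshold, loc=c_mean, scale=np.sqrt(c_var))

    print(f"Beta(a={a:.2f}, b={b:.2f})")
    print(f"P(C > {threshold}): Beta model {p_beta:.4f} vs Normal model {p_norm:.4f}")
    ```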

  2. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    PubMed

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate its application to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, and was then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction analyzed with three data analytical methods, and using chromatographic fingerprints of extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents of different solubility parameters. The established model consists of the following main parameters: (1) total quantum statistical moment similarity ST, the overlapped area of the two normal distribution probability density curves obtained by converting the two TQSM parameter sets; (2) total variability DT, a confidence limit of the standard normal cumulative probability, equal to the absolute difference between the two normal cumulative probabilities evaluated at the intersection of their curves; (3) total variable probability 1-Ss, the standard normal distribution probability within the interval DT; (4) total variable probability (1-beta)alpha; and (5) stable confidence probability beta(1-alpha), the correct probabilities for making positive and negative conclusions under confidence coefficient alpha. With the model, the TQSMS similarities of the pharmacokinetics of the three ingredients in Buyanghuanwu decoction and of the three data analytical methods used for them were in the range 0.3852-0.9875, illuminating their different pharmacokinetic behaviors; and the TQSMS similarities (ST) of the chromatographic fingerprints of extracts obtained with solvents of different solubility parameters were in the range 0.6842-0.9992, showing that the various solvent extracts contain different constituents. The TQSMSS can characterize sample similarity and, with a test of power, quantify the correct probability of making positive and negative conclusions whether or not the samples come from the same population under confidence coefficient alpha; it thereby supports analysis at both macroscopic and microscopic levels and serves as an important similarity-analysis method for medical theoretical research.
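
    A rough sketch of the ST quantity described above, assuming it can be read as the overlap area of two normal probability density curves; the means and standard deviations below are invented and the integration is a simple Riemann sum.

    ```python
    import numpy as np
    from scipy import stats

    def normal_overlap_area(mu1, sd1, mu2, sd2, n=200_001):
        """Overlap area of two normal pdfs: a similarity measure in [0, 1]."""
        lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
        hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
        x = np.linspace(lo, hi, n)
        f1 = stats.norm.pdf(x, mu1, sd1)
        f2 = stats.norm.pdf(x, mu2, sd2)
        return np.minimum(f1, f2).sum() * (x[1] - x[0])

    # Two hypothetical parameter sets converted to normal density curves
    print(f"ST = {normal_overlap_area(1.0, 0.4, 1.3, 0.5):.4f}")
    ```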

  3. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretations of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
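
    A compact sketch, not the authors' code, of one of the simpler strategies compared above: stabilized inverse probability weights built from normal densities, with the conditional mean of the exposure taken from a linear regression on a covariate. All variable names and values are hypothetical.

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Hypothetical data: one confounder, a continuous exposure, a binary outcome
    n = 5_000
    z = rng.normal(size=n)                               # confounder
    x = 0.8 * z + rng.normal(size=n)                     # continuous exposure
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x - 0.5 * z))))

    # Denominator: normal density of the exposure given the covariate
    fit = LinearRegression().fit(z.reshape(-1, 1), x)
    mu_xz = fit.predict(z.reshape(-1, 1))
    sd_xz = (x - mu_xz).std(ddof=2)
    f_x_given_z = stats.norm.pdf(x, loc=mu_xz, scale=sd_xz)

    # Numerator: marginal normal density of the exposure (stabilization)
    f_x = stats.norm.pdf(x, loc=x.mean(), scale=x.std(ddof=1))

    sw = f_x / f_x_given_z                               # stabilized weights
    print(f"mean weight {sw.mean():.3f}, max weight {sw.max():.1f}")
    # sw would then enter a weighted outcome regression of y on x (e.g. a GLM
    # with frequency weights) to estimate the marginal odds ratio per unit.
    ```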

  5. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    Techniques are presented for deriving several statistical wind models. The techniques are based on the properties of the multivariate normal probability distribution function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of wind for the bivariate normal distribution. By further assuming that the winds at two altitudes are quadravariate normally distributed, then the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional distribution of the wind component shears given a wind component is normal. Examples of these and other properties of the multivariate normal probability distribution function as applied to Cape Kennedy, Florida, and Vandenberg AFB, California, wind data samples are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
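
    An illustrative Monte Carlo check (not from the report) of property (2) above under the textbook conditions for the Rayleigh result, i.e. zero-mean, uncorrelated, equal-variance wind components; the component standard deviation below is made up.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sigma = 8.0                                  # illustrative component std dev (m/s)

    # Zero-mean, equal-variance, uncorrelated bivariate normal wind components
    u = rng.normal(0.0, sigma, size=200_000)     # zonal component
    v = rng.normal(0.0, sigma, size=200_000)     # meridional component
    speed = np.hypot(u, v)

    # Compare the empirical speed distribution with Rayleigh(scale=sigma)
    res = stats.kstest(speed, "rayleigh", args=(0.0, sigma))
    print(f"KS distance {res.statistic:.4f} vs Rayleigh(scale={sigma}), p = {res.pvalue:.2f}")
    ```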

  6. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
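
    A small numerical sketch of the mappings just described, assuming a fitted group coefficient β from a cumulative link model; the value of β below is made up.

    ```python
    import numpy as np
    from scipy.stats import norm

    beta = 0.9  # hypothetical group-effect coefficient from a cumulative link model

    # Ordinal superiority measure under the three link functions discussed above
    gamma_probit = norm.cdf(beta / np.sqrt(2))                                  # exact
    gamma_loglog = np.exp(beta) / (1 + np.exp(beta))                            # exact
    gamma_logit = np.exp(beta / np.sqrt(2)) / (1 + np.exp(beta / np.sqrt(2)))   # approximate

    print(f"probit {gamma_probit:.3f}, log-log {gamma_loglog:.3f}, logit (approx.) {gamma_logit:.3f}")
    ```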

  7. Derivation of the expressions for γ50 and D50 for different individual TCP and NTCP models

    NASA Astrophysics Data System (ADS)

    Stavreva, N.; Stavrev, P.; Warkentin, B.; Fallone, B. G.

    2002-10-01

    This paper presents a complete set of formulae for the position (D50) and the normalized slope (γ50) of the dose-response relationship based on the most commonly used radiobiological models for tumours as well as for normal tissues. The functional subunit response models (critical element and critical volume) are used in the derivation of the formulae for the normal tissue. Binomial statistics are used to describe the tumour control probability, the functional subunit response as well as the normal tissue complication probability. The formulae are derived for the single hit and linear quadratic models of cell kill in terms of the number of fractions and dose per fraction. It is shown that the functional subunit models predict very steep, almost step-like, normal tissue individual dose-response relationships. Furthermore, the formulae for the normalized gradient depend on the cellular parameters α and β when written in terms of number of fractions, but not when written in terms of dose per fraction.

  8. Deep brain stimulation abolishes slowing of reactions to unlikely stimuli.

    PubMed

    Antoniades, Chrystalina A; Bogacz, Rafal; Kennard, Christopher; FitzGerald, James J; Aziz, Tipu; Green, Alexander L

    2014-08-13

    The cortico-basal-ganglia circuit plays a critical role in decision making on the basis of probabilistic information. Computational models have suggested how this circuit could compute the probabilities of actions being appropriate according to Bayes' theorem. These models predict that the subthalamic nucleus (STN) provides feedback that normalizes the neural representation of probabilities, such that if the probability of one action increases, the probabilities of all other available actions decrease. Here we report the results of an experiment testing a prediction of this theory that disrupting information processing in the STN with deep brain stimulation should abolish the normalization of the neural representation of probabilities. In our experiment, we asked patients with Parkinson's disease to saccade to a target that could appear in one of two locations, and the probability of the target appearing in each location was periodically changed. When the stimulator was switched off, the target probability affected the reaction times (RT) of patients in a similar way to healthy participants. Specifically, the RTs were shorter for more probable targets and, importantly, they were longer for the unlikely targets. When the stimulator was switched on, the patients were still faster for more probable targets, but critically they did not increase RTs as the target was becoming less likely. This pattern of results is consistent with the prediction of the model that the patients on DBS no longer normalized their neural representation of prior probabilities. We discuss alternative explanations for the data in the context of other published results. Copyright © 2014 the authors 0270-6474/14/3410844-09$15.00/0.

  9. Drought forecasting in Luanhe River basin involving climatic indices

    NASA Astrophysics Data System (ADS)

    Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.

    2017-11-01

    Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we evolve a probabilistic forecasting model, which forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on conditional distribution of multivariate normal distribution to involve two large-scale climatic indices at the same time, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI are based on the hypothesis of aggregated monthly precipitation that is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation, in general, was more likely to be considered normally distributed and forecasting models should be applied to each gauge, respectively, rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. Then, the controlled climatic indices of every gauge are selected by Pearson correlation test and the multivariate normality of SPI, corresponding climatic indices for current month and SPI 1, 2, and 3 months later are demonstrated using Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models which forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the two traditional models and involving large-scale climatic indices can improve the forecasting accuracy.
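
    A condensed sketch of the conditional-distribution step described above, under the same joint-normality assumption: the conditional mean and variance of a future SPI given the current SPI and two climatic indices, and the resulting probability of one SPI class. The covariance matrix, observed values and class limits are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Illustrative covariance of [SPI(t+k), SPI(t), index1(t), index2(t)];
    # in practice it would be estimated from the standardized series.
    Sigma = np.array([[1.00, 0.55, 0.30, 0.20],
                      [0.55, 1.00, 0.25, 0.15],
                      [0.30, 0.25, 1.00, 0.10],
                      [0.20, 0.15, 0.10, 1.00]])
    mu = np.zeros(4)

    def conditional_normal(mu, Sigma, obs):
        """Mean and variance of the first variable given the remaining ones."""
        s12 = Sigma[0, 1:]
        w = np.linalg.solve(Sigma[1:, 1:], s12)
        m = mu[0] + w @ (obs - mu[1:])
        v = Sigma[0, 0] - s12 @ w
        return m, v

    current = np.array([-1.2, 0.4, -0.3])        # observed SPI(t), index1(t), index2(t)
    m, v = conditional_normal(mu, Sigma, current)

    # Transition probability into an SPI class, e.g. "moderate drought" (-1.5, -1.0]
    lo, hi = -1.5, -1.0
    p = norm.cdf(hi, m, np.sqrt(v)) - norm.cdf(lo, m, np.sqrt(v))
    print(f"P(SPI in ({lo}, {hi}] at the lead time) = {p:.3f}")
    ```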

  10. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying the monitoring data into two categories, normal and anomaly, is developed in order to remove anomalous data included in the enormous amount of monitoring data, applying the relevance vector machine (RVM) to a probabilistic discriminative model with basis functions and their weight parameters whose posterior PDF (probability density function) conditional on the learning data set is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan, which shows that the trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  11. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  12. Re‐estimated effects of deep episodic slip on the occurrence and probability of great earthquakes in Cascadia

    USGS Publications Warehouse

    Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy

    2013-01-01

    Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.

  13. Statistical validation of normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
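
    A compact sketch of this kind of workflow (L1-penalized logistic NTCP model, cross-validated AUC, permutation test) using scikit-learn on synthetic data; it is not the authors' code, and it uses a single cross-validation loop rather than the repeated double cross-validation described above.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score, permutation_test_score

    rng = np.random.default_rng(0)

    # Synthetic "dose-volume + clinical" predictors and a binary complication endpoint
    n, p = 200, 20
    X = rng.normal(size=(n, p))
    y = rng.binomial(1, 1 / (1 + np.exp(-(1.2 * X[:, 0] - 0.8 * X[:, 1]))))

    # L1-penalized (LASSO-type) logistic regression as the NTCP model
    ntcp = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    auc = cross_val_score(ntcp, X, y, cv=cv, scoring="roc_auc")
    print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")

    # Permutation test: is the cross-validated AUC better than chance?
    score, _, p_value = permutation_test_score(
        ntcp, X, y, cv=cv, scoring="roc_auc", n_permutations=200, random_state=0)
    print(f"AUC {score:.3f}, permutation p-value {p_value:.3f}")
    ```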

  14. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  15. Plant calendar pattern based on rainfall forecast and the probability of its success in Deli Serdang regency of Indonesia

    NASA Astrophysics Data System (ADS)

    Darnius, O.; Sitorus, S.

    2018-03-01

    The objective of this study was to determine the pattern of plant calendar of three types of crops, namely palawija, rice, and banana, based on rainfall in Deli Serdang Regency. In the first stage, we forecasted rainfall by using time series analysis, and obtained an appropriate seasonal ARIMA(1,0,0)(1,1,1)_12 model. Based on the forecast result, we designed a plant calendar pattern for the three types of plant. Furthermore, the probability of success for the crop types following the plant calendar pattern was calculated using the Markov process, discretizing the continuous rainfall data into three categories, namely Below Normal (BN), Normal (N), and Above Normal (AN), to form the transition probability matrix. Finally, the combination of rainfall forecasting models and the Markov process was used to determine the pattern of cropping calendars and the probability of success for the three crops. This research used rainfall data of Deli Serdang Regency taken from the office of BMKG (Meteorology, Climatology and Geophysics Agency), Sampali Medan, Indonesia.
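
    A small sketch of the Markov step described above: discretize a monthly rainfall series into BN / N / AN and estimate the one-step transition probability matrix by counting transitions. The rainfall series is synthetic and terciles are used as an illustrative choice of class limits.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    rain = rng.gamma(shape=2.0, scale=90.0, size=240)   # synthetic monthly rainfall (mm)

    # Discretize into BN / N / AN, here using the terciles of the record
    q1, q2 = np.quantile(rain, [1 / 3, 2 / 3])
    cat = np.digitize(rain, [q1, q2])                   # 0 = BN, 1 = N, 2 = AN

    # Count one-step transitions and normalize rows into probabilities
    counts = np.zeros((3, 3))
    for a, b in zip(cat[:-1], cat[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)

    for label, row in zip(["BN", "N", "AN"], P):
        print(label, np.round(row, 3))
    ```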

  16. Dependence of normal brain integral dose and normal tissue complication probability on the prescription isodose values for γ-knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Ma, Lijun

    2001-11-01

    A recent multi-institutional clinical study suggested possible benefits of lowering the prescription isodose lines for stereotactic radiosurgery procedures. In this study, we investigate the dependence of the normal brain integral dose and the normal tissue complication probability (NTCP) on the prescription isodose values for γ-knife radiosurgery. An analytical dose model was developed for γ-knife treatment planning. The dose model was commissioned by fitting the measured dose profiles for each helmet size. The dose model was validated by comparing its results with the Leksell gamma plan (LGP, version 5.30) calculations. The normal brain integral dose and the NTCP were computed and analysed for an ensemble of treatment cases. The functional dependence of the normal brain integral dose and the NTCP versus the prescribing isodose values was studied for these cases. We found that the normal brain integral dose and the NTCP increase significantly when lowering the prescription isodose lines from 50% to 35% of the maximum tumour dose. Alternatively, the normal brain integral dose and the NTCP decrease significantly when raising the prescribing isodose lines from 50% to 65% of the maximum tumour dose. The results may be used as a guideline for designing future dose escalation studies for γ-knife applications.

  17. Optimizing the parameters of the Lyman-Kutcher-Burman, Källman, and Logit+EUD models for the rectum - a comparison between normal tissue complication probability and clinical data

    NASA Astrophysics Data System (ADS)

    Trojková, Darina; Judas, Libor; Trojek, Tomáš

    2014-11-01

    Minimizing the late rectal toxicity of prostate cancer patients is a very important and widely-discussed topic. Normal tissue complication probability (NTCP) models can be used to evaluate competing treatment plans. In our work, the parameters of the Lyman-Kutcher-Burman (LKB), Källman, and Logit+EUD models are optimized by minimizing the Brier score for a group of 302 prostate cancer patients. The NTCP values are calculated and are compared with the values obtained using previously published values for the parameters. χ2 Statistics were calculated as a check of goodness of optimization.

  18. Prediction of radiation-induced liver disease by Lyman normal-tissue complication probability model in three-dimensional conformal radiation therapy for primary liver carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu ZhiYong; Department of Oncology, Shanghai Medical School, Fudan University, Shanghai; Liang Shixiong

    Purpose: To describe the probability of RILD by application of the Lyman-Kutcher-Burman normal-tissue complication probability (NTCP) model for primary liver carcinoma (PLC) treated with hypofractionated three-dimensional conformal radiotherapy (3D-CRT). Methods and Materials: A total of 109 PLC patients treated by 3D-CRT were followed for RILD. Of these patients, 93 were in liver cirrhosis of Child-Pugh Grade A, and 16 were in Child-Pugh Grade B. The Michigan NTCP model was used to predict the probability of RILD, and then the modified Lyman NTCP model was generated for Child-Pugh A and Child-Pugh B patients by maximum-likelihood analysis. Results: Of all patients, 17 developed RILD, of whom 8 were of Child-Pugh Grade A and 9 were of Child-Pugh Grade B. The Michigan model underestimated the probability of RILD for PLC patients. The modified n, m, and TD50(1) were 1.1, 0.28, and 40.5 Gy and 0.7, 0.43, and 23 Gy for patients with Child-Pugh A and B, respectively, which yielded better estimations of RILD probability. The hepatic tolerable doses (TD5) would be a mean dose to normal liver (MDTNL) of 21 Gy and 6 Gy, respectively, for Child-Pugh A and B patients. Conclusions: The Michigan model was probably not fit to predict RILD in PLC patients. A modified Lyman NTCP model for RILD was recommended.

  19. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. An experiment is then chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.

  20. Probability theory for 3-layer remote sensing radiative transfer model: univariate case.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2012-04-23

    A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. © 2012 Optical Society of America
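
    A brief illustration, not the authors' fitting code, of the Johnson SU family in SciPy: for chosen shape parameters it can reproduce a wide range of skewness and kurtosis, and its moments and exceedance probabilities are directly available for comparison with moments computed from a radiative transfer model. All parameter values below are invented.

    ```python
    from scipy import stats

    # Hypothetical Johnson SU shape parameters (a, b) plus location and scale
    a, b, loc, scale = -1.0, 1.5, 280.0, 5.0
    dist = stats.johnsonsu(a, b, loc=loc, scale=scale)

    mean, var, skew, kurt = dist.stats(moments="mvsk")
    print(f"mean {mean:.2f}, var {var:.2f}, skewness {skew:.2f}, excess kurtosis {kurt:.2f}")

    # Detection-style quantity: probability that the modeled radiance-like variable
    # exceeds an illustrative threshold
    threshold = 290.0
    print(f"P(X > {threshold}) = {dist.sf(threshold):.4f}")
    ```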

  1. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  2. A stochastic model for the normal tissue complication probability (NTCP) and applications.

    PubMed

    Stocks, Theresa; Hillen, Thomas; Gong, Jiafen; Burger, Martin

    2017-12-11

    The normal tissue complication probability (NTCP) is a measure for the estimated side effects of a given radiation treatment schedule. Here we use a stochastic logistic birth-death process to define an organ-specific and patient-specific NTCP. We emphasize an asymptotic simplification which relates the NTCP to the solution of a logistic differential equation. This approach is based on simple modelling assumptions and prepares a framework for the use of the NTCP model in clinical practice. As an example, we consider side effects of prostate cancer brachytherapy such as increased urinary frequency, urinary retention and acute rectal dysfunction. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  3. Wildfire Risk Mapping over the State of Mississippi: Land Surface Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooke, William H.; Mostovoy, Georgy; Anantharaj, Valentine G

    2012-01-01

    Three fire risk indexes based on soil moisture estimates were applied to simulate wildfire probability over the southern part of Mississippi using the logistic regression approach. The fire indexes were retrieved from: (1) the accumulated difference between daily precipitation and potential evapotranspiration (P-E); (2) the top 10 cm soil moisture content simulated by the Mosaic land surface model; and (3) the Keetch-Byram drought index (KBDI). The P-E, KBDI, and soil moisture based indexes were estimated from gridded atmospheric and Mosaic-simulated soil moisture data available from the North American Land Data Assimilation System (NLDAS-2). Normalized deviations of these indexes from the 31-year mean (1980-2010) were fitted into the logistic regression model describing the probability of wildfire occurrence as a function of the fire index. It was assumed that such normalization provides a more robust and adequate description of the temporal dynamics of soil moisture anomalies than the original (not normalized) set of indexes. The logistic model parameters were evaluated for 0.25 x 0.25 latitude/longitude cells and for the probability of at least one fire event occurring during 5 consecutive days. A 23-year (1986-2008) forest fire record was used. Two periods were selected and examined (January to mid-June and mid-September to December). The application of the logistic model provides an overall good agreement between empirical/observed and model-fitted fire probabilities over the study area during both seasons. The fire risk indexes based on the top 10 cm soil moisture and KBDI have the largest impact on the wildfire odds (increasing it by almost 2 times in response to each unit change of the corresponding fire risk index during the January to mid-June period and by nearly 1.5 times during mid-September to December), observed over 0.25 x 0.25 cells located along the Mississippi coastline. This result suggests a rather strong control of fire risk indexes on fire occurrence probability over this region.

  4. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588

  5. A Multidimensional Ideal Point Item Response Theory Model for Binary Data.

    PubMed

    Maydeu-Olivares, Albert; Hernández, Adolfo; McDonald, Roderick P

    2006-12-01

    We introduce a multidimensional item response theory (IRT) model for binary data based on a proximity response mechanism. Under the model, a respondent at the mode of the item response function (IRF) endorses the item with probability one. The mode of the IRF is the ideal point, or in the multidimensional case, an ideal hyperplane. The model yields closed form expressions for the cell probabilities. We estimate and test the goodness of fit of the model using only information contained in the univariate and bivariate moments of the data. Also, we pit the new model against the multidimensional normal ogive model estimated using NOHARM in four applications involving (a) attitudes toward censorship, (b) satisfaction with life, (c) attitudes of morality and equality, and (d) political efficacy. The normal PDF model is not invariant to simple operations such as reverse scoring. Thus, when there is no natural category to be modeled, as in many personality applications, it should be fit separately with and without reverse scoring for comparisons.

  6. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because of the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes form, and a Metropolis-Hastings (M-H) sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for route choice probability calculation and good application performance for transfer flow volume prediction. PMID:28591188

  7. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.

  8. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    PubMed

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
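
    A short sketch of the LKB relationships referred to above: a generalized EUD computed from a differential DVH, followed by NTCP = Φ((EUD - TD50)/(m·TD50)). The organ parameters and DVH are illustrative, and the paper's single-parameter exponential approximation is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import norm

    def g_eud(doses, volumes, n):
        """Niemierko generalized EUD for a differential DVH (fractional volumes)."""
        v = np.asarray(volumes) / np.sum(volumes)
        a = 1.0 / n                        # volume-effect parameter n -> exponent a
        return np.sum(v * np.asarray(doses) ** a) ** (1.0 / a)

    def lkb_ntcp(eud, td50, m):
        """Lyman-Kutcher-Burman NTCP expressed as a function of EUD."""
        t = (eud - td50) / (m * td50)
        return norm.cdf(t)

    # Illustrative organ-at-risk parameters (not from any published fit)
    n_par, m_par, td50 = 0.5, 0.3, 55.0            # n, m, TD50 (Gy)
    doses = np.array([10.0, 30.0, 50.0, 65.0])     # DVH bin doses (Gy)
    volumes = np.array([0.4, 0.3, 0.2, 0.1])       # fractional volumes

    eud = g_eud(doses, volumes, n_par)
    print(f"EUD = {eud:.1f} Gy, NTCP = {lkb_ntcp(eud, td50, m_par):.3f}")
    ```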

  9. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Normal uniform mixture differential gene expression detection for cDNA microarrays

    PubMed Central

    Dean, Nema; Raftery, Adrian E

    2005-01-01

    Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807

  11. A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.

    PubMed

    Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo

    2016-01-01

    In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
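
    A simplified sketch of the selection rule described above (not the authors' implementation): each candidate PPG peak inside an RR interval is scored by a Gaussian density over the RR-normalized PTT, the best-scoring candidate is kept, and the Gaussian parameters are updated adaptively. The update rate and all numerical values are assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    def pick_pulse_peak(candidate_times, r_time, rr_interval, mu, sigma):
        """Return the PPG peak candidate whose RR-normalized PTT is most probable."""
        ptt_norm = (np.asarray(candidate_times) - r_time) / rr_interval
        best = int(np.argmax(norm.pdf(ptt_norm, mu, sigma)))
        return candidate_times[best], ptt_norm[best]

    # Running Gaussian model of the normalized PTT (initial values are guesses)
    mu, sigma, alpha = 0.30, 0.05, 0.05        # alpha: assumed exponential update rate

    # One hypothetical beat: R-wave at 12.00 s, RR = 0.85 s, three PPG peak candidates
    candidates = [12.18, 12.26, 12.55]
    peak, x = pick_pulse_peak(candidates, r_time=12.00, rr_interval=0.85, mu=mu, sigma=sigma)

    # Adaptive update of the Gaussian parameters with the accepted value
    mu = (1 - alpha) * mu + alpha * x
    sigma = np.sqrt((1 - alpha) * sigma ** 2 + alpha * (x - mu) ** 2)

    print(f"selected PPG peak at {peak:.2f} s, normalized PTT {x:.3f}, updated mu {mu:.3f}")
    ```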

  12. The development and implementation of stroke risk prediction model in National Health Insurance Service's personal health record.

    PubMed

    Lee, Jae-Woo; Lim, Hyun-Sun; Kim, Dong-Wook; Shin, Soon-Ae; Kim, Jinkwon; Yoo, Bora; Cho, Kyung-Hee

    2018-01-01

    The purpose of this study was to build a 10-year stroke prediction model and to categorize the probability of stroke using Korean national health examination data. It then aimed to develop an algorithm that provides a personalized warning based on each user's level of stroke risk, together with a lifestyle correction message about the stroke risk factors. For national health examinees in 2002-2003, the stroke prediction model identified when stroke was first diagnosed by following up the cohort until 2013 and estimated a 10-year probability of stroke. It sorted each user's individual probability of stroke into five categories - normal, slightly high, high, risky, and very risky - according to five percentile ranges of the average probability of stroke relative to the total population (less than the 50th percentile, 50-70, 70-90, 90-99.9, and above the 99.9th percentile), and constructed the personalized warning and lifestyle correction messages for each category. Risk factors in the stroke risk model include age, BMI, cholesterol, hypertension, diabetes, smoking status and intensity, physical activity, alcohol drinking, past history (hypertension, coronary heart disease) and family history (stroke, coronary heart disease). The AUC values of the stroke risk prediction model on the external validation data set were 0.83 in men and 0.82 in women, which showed high predictive power. The probability of stroke within 10 years for men in the normal group (less than the 50th percentile) was less than 3.92%, and for those in the very risky group (top 0.01 percentile) it was 66.2% and over. The women's probability of stroke within 10 years was less than 3.77% in the normal group (less than the 50th percentile) and 55.24% and over in the very risky group. This study developed the stroke risk prediction model, the personalized warning, and the lifestyle correction messages based on the national health examination data and uploaded them to the personal health record service called My Health Bank on the health information website Health iN. By doing so, it urged users to strengthen their motivation for health management and induced changes in their health behaviors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Pretest probability of a normal echocardiography: validation of a simple and practical algorithm for routine use.

    PubMed

    Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard

    2014-02-01

    Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to explore than those with heart abnormalities. A reliable method for assessing pretest probability of a normal TTE may optimize management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated to normality. Low pretest probability of normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age>70 years. In the prospective phase, the prevalences of normality were 72% and 25% in high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
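
    The decision rule described above reduces to a few lines of code; a sketch with hypothetical field names:

    ```python
    def pretest_probability_normal_tte(age, cardiac_history):
        """Classify a TTE request as high or low pretest probability of being normal.

        cardiac_history: True (known history), False (no history), or None (in doubt).
        Rule from the abstract: low probability if known cardiac history, or,
        when the history is in doubt, if age > 70 years; otherwise high.
        """
        if cardiac_history is True:
            return "low"
        if cardiac_history is None and age > 70:
            return "low"
        return "high"

    print(pretest_probability_normal_tte(62, cardiac_history=False))  # high
    print(pretest_probability_normal_tte(78, cardiac_history=None))   # low
    ```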

  14. Neyman, Markov processes and survival analysis.

    PubMed

    Yang, Grace

    2013-07-01

    J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.

  15. Aggregate and individual replication probability within an explicit model of the research process.

    PubMed

    Miller, Jeff; Schwarz, Wolf

    2011-09-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.

  16. Development of a normal tissue complication probability (NTCP) model for radiation-induced hypothyroidism in nasopharyngeal carcinoma patients.

    PubMed

    Luo, Ren; Wu, Vincent W C; He, Binghui; Gao, Xiaoying; Xu, Zhenxi; Wang, Dandan; Yang, Zhining; Li, Mei; Lin, Zhixiong

    2018-05-18

    The objectives of this study were to build a normal tissue complication probability (NTCP) model of radiation-induced hypothyroidism (RHT) for nasopharyngeal carcinoma (NPC) patients and to compare it with four other published NTCP models to evaluate its efficacy. Medical notes of 174 NPC patients after radiotherapy were reviewed. Biochemical hypothyroidism was defined as an elevated serum thyroid-stimulating hormone (TSH) level with a normal or decreased serum free thyroxine (fT4) level after radiotherapy. Logistic regression with leave-one-out cross-validation was performed to establish the NTCP model. Model performance was evaluated and compared by the area under the receiver operating characteristic curve (AUC) in our NPC cohort. With a median follow-up of 24 months, 39 (22.4%) patients developed biochemical hypothyroidism. Gender, chemotherapy, the percentage of thyroid volume receiving more than 50 Gy (V50), and the maximum dose to the pituitary (Pmax) were identified as the most predictive factors for RHT. An NTCP model based on these four parameters was developed. The model comparison was made in our NPC cohort, and our NTCP model performed better in RHT prediction than the other four models. This study developed a four-variable NTCP model for biochemical hypothyroidism in NPC patients post-radiotherapy. Our NTCP model for RHT presents a high prediction capability. This is a retrospective study without registration.

  17. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
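
    The quantities computed by the program described above (marginal, conditional and rectangular bivariate normal probabilities) can be reproduced today with SciPy; a brief sketch with illustrative parameters:

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal, norm

    mu1, mu2, s1, s2, rho = 0.0, 0.0, 1.0, 2.0, 0.6
    cov = np.array([[s1 ** 2, rho * s1 * s2],
                    [rho * s1 * s2, s2 ** 2]])
    bvn = multivariate_normal([mu1, mu2], cov)

    # Rectangular probability P(a1 < X1 < b1, a2 < X2 < b2) via inclusion-exclusion on the CDF
    a1, b1, a2, b2 = -1.0, 1.0, -1.0, 2.0
    rect = bvn.cdf([b1, b2]) - bvn.cdf([a1, b2]) - bvn.cdf([b1, a2]) + bvn.cdf([a1, a2])

    # Marginal probability for X1 over the same interval
    marg = norm.cdf(b1, mu1, s1) - norm.cdf(a1, mu1, s1)

    # Conditional distribution of X2 given X1 = x1
    x1 = 0.5
    cond_mean = mu2 + rho * s2 / s1 * (x1 - mu1)
    cond_sd = s2 * np.sqrt(1 - rho ** 2)

    print(f"rectangle {rect:.4f}, marginal {marg:.4f}, "
          f"X2 | X1=0.5 ~ N({cond_mean:.2f}, {cond_sd:.2f}^2)")
    ```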

  18. Normal tissue complication probability modeling of radiation-induced hypothyroidism after head-and-neck radiation therapy.

    PubMed

    Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan

    2013-02-01

    To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication. Copyright © 2013 Elsevier Inc. All rights reserved.
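
    A sketch of the LEUD variant named above, i.e. the Lyman model evaluated on a DVH reduced to the generalized EUD, with TD50 set near the reported 44 Gy; the toy DVH and the m and n values are illustrative assumptions, not the fitted parameters.

        import numpy as np
        from scipy.stats import norm

        def geud(doses, volumes, n):
            """Generalized EUD of a differential DVH (doses in Gy, fractional volumes summing to 1)."""
            a = 1.0 / n
            return (np.sum(volumes * doses**a)) ** (1.0 / a)

        def lyman_ntcp(doses, volumes, td50, m, n):
            """Lyman NTCP with the DVH reduced to the EUD (the LEUD model)."""
            eud = geud(doses, volumes, n)
            t = (eud - td50) / (m * td50)
            return norm.cdf(t)

        # Toy differential DVH for a thyroid gland; values are illustrative, not patient data.
        doses = np.array([10.0, 25.0, 40.0, 55.0])
        volumes = np.array([0.2, 0.3, 0.3, 0.2])

        # TD50 ~ 44 Gy as reported in the abstract; m and n are placeholder fit parameters.
        print("NTCP:", lyman_ntcp(doses, volumes, td50=44.0, m=0.25, n=0.8))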

  19. Cost Effectiveness of Support for People Starting a New Medication for a Long-Term Condition Through Community Pharmacies: An Economic Evaluation of the New Medicine Service (NMS) Compared with Normal Practice.

    PubMed

    Elliott, Rachel A; Tanajewski, Lukasz; Gkountouras, Georgios; Avery, Anthony J; Barber, Nick; Mehta, Rajnikant; Boyd, Matthew J; Latif, Asam; Chuter, Antony; Waring, Justin

    2017-12-01

    The English community pharmacy New Medicine Service (NMS) significantly increases patient adherence to medicines, compared with normal practice. We examined the cost effectiveness of NMS compared with normal practice by combining adherence improvement and intervention costs with the effect of increased adherence on patient outcomes and healthcare costs. We developed Markov models for diseases targeted by the NMS (hypertension, type 2 diabetes mellitus, chronic obstructive pulmonary disease, asthma and antiplatelet regimens) to assess the impact of patients' non-adherence. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. Incremental costs and outcomes associated with each disease were incorporated additively into a composite probabilistic model and combined with adherence rates and intervention costs from the trial. Costs per extra quality-adjusted life-year (QALY) were calculated from the perspective of NHS England, using a lifetime horizon. NMS generated a mean of 0.05 (95% CI 0.00-0.13) more QALYs per patient, at a mean reduced cost of -£144 (95% CI -769 to 73). The NMS dominates normal practice with a probability of 0.78 [incremental cost-effectiveness ratio (ICER) -£3166 per QALY]. NMS has a 96.7% probability of cost effectiveness compared with normal practice at a willingness to pay of £20,000 per QALY. Sensitivity analysis demonstrated that targeting each disease with NMS has a probability over 0.90 of cost effectiveness compared with normal practice at a willingness to pay of £20,000 per QALY. Our study suggests that the NMS increased patient medicine adherence compared with normal practice, which translated into increased health gain at reduced overall cost. ClinicalTrials.gov Trial reference number NCT01635361 ( http://clinicaltrials.gov/ct2/show/NCT01635361 ). Current Controlled trials: Trial reference number ISRCTN 23560818 ( http://www.controlled-trials.com/ISRCTN23560818/ ; DOI 10.1186/ISRCTN23560818 ). UK Clinical Research Network (UKCRN) study 12494 ( http://public.ukcrn.org.uk/Search/StudyDetail.aspx?StudyID=12494 ). Department of Health Policy Research Programme.

  20. Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy

    NASA Astrophysics Data System (ADS)

    Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.

    2007-04-01

    Cosmetic late effects of radiotherapy such as tissue fibrosis are increasingly regarded as important. It is generally considered that the complication probability of a radiotherapy plan is dependent on the dose uniformity, and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed reducing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared to conventional planning methods, as a result of increased dose to areas receiving sub-prescription doses using conventional techniques.
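
    For comparison with the Lyman sketch above, a minimal implementation of the relative seriality NTCP model on a differential DVH; the Poisson-response parameterization is the standard one, but the DVH, D50, γ and seriality parameter s below are illustrative assumptions rather than this study's fitted values.

        import numpy as np

        def poisson_response(dose, d50, gamma):
            """Poisson cell-kill response of a whole organ uniformly irradiated to 'dose'."""
            return 2.0 ** (-np.exp(np.e * gamma * (1.0 - dose / d50)))

        def relative_seriality_ntcp(doses, volumes, d50, gamma, s):
            """Relative seriality NTCP for a differential DVH (fractional volumes summing to 1)."""
            p = poisson_response(doses, d50, gamma)
            prod = np.prod((1.0 - p ** s) ** volumes)
            return (1.0 - prod) ** (1.0 / s)

        # Toy differential DVH for breast tissue; doses in Gy, parameter values are illustrative only.
        doses = np.array([20.0, 35.0, 45.0, 52.0])
        volumes = np.array([0.1, 0.2, 0.4, 0.3])
        print("NTCP:", relative_seriality_ntcp(doses, volumes, d50=50.0, gamma=2.0, s=0.1))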

  1. Normal probabilities for Vandenberg AFB wind components - monthly reference periods for all flight azimuths, 0- to 70-km altitudes

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1975-01-01

    Vandenberg Air Force Base (AFB), California, wind component statistics are presented to be used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as a statistical model to represent component winds at Vandenberg AFB. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. The results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Vandenberg AFB.

  2. Normal probabilities for Cape Kennedy wind components: Monthly reference periods for all flight azimuths. Altitudes 0 to 70 kilometers

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    This document replaces Cape Kennedy empirical wind component statistics which are presently being used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as an adequate statistical model to represent component winds at Cape Kennedy. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. Results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Cape Kennedy, Florida.
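
    Given the Gaussian model, the tabulated percentile values follow directly from each month's component mean and standard deviation. A small sketch using SciPy's normal quantile function, with an illustrative mean and standard deviation and an assumed set of 11 percentile levels spanning 0.135 to 99.865 percent.

        from scipy.stats import norm

        # Illustrative monthly mean and standard deviation (m/s) for one wind component;
        # the reports tabulate these percentiles for each altitude, month and flight azimuth.
        mean, sd = 5.0, 12.0
        percentiles = [0.00135, 0.01, 0.05, 0.10, 0.25, 0.50,
                       0.75, 0.90, 0.95, 0.99, 0.99865]

        for p in percentiles:
            print(f"{100 * p:7.3f} %  ->  {norm.ppf(p, loc=mean, scale=sd):7.2f} m/s")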

  3. Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model

    PubMed Central

    Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR ) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070

  4. Joint segmentation and deformable registration of brain scans guided by a tumor growth model.

    PubMed

    Gooya, Ali; Pohl, Kilian M; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth.

  5. A probabilistic approach to photovoltaic generator performance prediction

    NASA Astrophysics Data System (ADS)

    Khallat, M. A.; Rahman, S.

    1986-09-01

    A method for predicting the performance of a photovoltaic (PV) generator based on long term climatological data and expected cell performance is described. The equations for cell model formulation are provided. Use of the statistical model for characterizing the insolation level is discussed. The insolation data is fitted to appropriate probability distribution functions (Weibull, beta, normal). The probability distribution functions are utilized to evaluate the capacity factors of PV panels or arrays. An example is presented revealing the applicability of the procedure.
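
    A rough sketch of the distribution-fitting step: candidate distributions named in the abstract are fitted to (synthetic) insolation data by maximum likelihood and compared by log-likelihood, and a crude capacity factor is computed under the simplifying assumption that array output is proportional to insolation up to a rated level.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Synthetic hourly insolation data in kW/m^2 (a stand-in for long-term climatological records).
        insolation = rng.beta(2.0, 3.0, 5000)

        # Fit candidate distributions and compare by log-likelihood.
        for name, dist, kwargs in [
            ("beta",    stats.beta,        {"floc": 0.0, "fscale": 1.0}),
            ("weibull", stats.weibull_min, {"floc": 0.0}),
            ("normal",  stats.norm,        {}),
        ]:
            params = dist.fit(insolation, **kwargs)
            ll = np.sum(dist.logpdf(insolation, *params))
            print(f"{name:8s} log-likelihood = {ll:.1f}")

        # Capacity factor of a PV array whose output is roughly proportional to insolation,
        # clipped at a rated insolation level (a simplifying assumption).
        rated = 0.8
        output = np.minimum(insolation, rated) / rated
        print("approximate capacity factor:", output.mean())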

  6. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  7. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif

    2014-11-01

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.

  8. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson distribution, and two-parameter Weibull, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
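
    A minimal sketch of transmuting a unit hydrograph into a two-parameter gamma pdf by nonlinear least squares (the study also explores a genetic-algorithm alternative); the hydrograph ordinates below are toy values, not the Lighvan catchment data.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import gamma

        # Toy unit hydrograph ordinates (time in hours, discharge per unit depth); illustrative only.
        t = np.arange(1, 13, dtype=float)
        uh = np.array([0.02, 0.10, 0.22, 0.28, 0.18, 0.09,
                       0.05, 0.03, 0.02, 0.006, 0.003, 0.001])

        def gamma_uh(t, shape, scale):
            """Two-parameter gamma pdf used as a unit-hydrograph shape."""
            return gamma.pdf(t, a=shape, scale=scale)

        # Nonlinear least squares fit of the gamma parameters to the UH ordinates.
        popt, _ = curve_fit(gamma_uh, t, uh, p0=[2.0, 2.0], bounds=(0.0, np.inf))
        print("fitted shape, scale:", popt)
        print("fitted peak ordinate:", gamma_uh(t, *popt).max())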

  9. Prediction of radiation-induced normal tissue complications in radiotherapy using functional image data

    NASA Astrophysics Data System (ADS)

    Nioutsikou, Elena; Partridge, Mike; Bedford, James L.; Webb, Steve

    2005-03-01

    The aim of this study has been to explicitly include the functional heterogeneity of an organ as a factor that contributes to the probability of complication of normal tissues following radiotherapy. Situations for which the inclusion of this information can be advantageous to the design of treatment plans are then investigated. A Java program has been implemented for this purpose. This makes use of a voxelated model of a patient, which is based on registered anatomical and functional data in order to enable functional voxel weighting. Using this model, the functional dose-volume histogram (fDVH) and the functional normal tissue complication probability (fNTCP) are then introduced as extensions to the conventional dose-volume histogram (DVH) and normal tissue complication probability (NTCP). In the presence of functional heterogeneity, these tools are physically more meaningful for plan evaluation than the traditional indices, as they incorporate additional information and are anticipated to show a better correlation with outcome. New parameters mf, nf and TD50f are required to replace the m, n and TD50 parameters. A range of plausible values was investigated, awaiting fitting of these new parameters to patient outcomes where functional data have been measured. As an example, the model is applied to two lung datasets utilizing accurately registered computed tomography (CT) and single photon emission computed tomography (SPECT) perfusion scans. Assuming a linear perfusion-function relationship, the biological index mean perfusion weighted lung dose (MPWLD) has been extracted from integration over outlined regions of interest. In agreement with the MPWLD ranking, the fNTCP predictions reveal that incorporation of functional imaging in radiotherapy treatment planning is most beneficial for organs with a large volume effect and large focal areas of dysfunction. There is, however, no additional advantage in cases presenting with homogeneous function. Although presented for lung radiotherapy, this model is general. It can also be applied to positron emission tomography (PET)-CT or functional magnetic resonance imaging (fMRI)-CT registered data and extended to the functional description of tumour control probability.

  10. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Summary Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  11. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
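
    The height-and-weight question quoted above reduces to a univariate normal calculation once the bivariate normal is conditioned on height; a small sketch with illustrative (assumed) population parameters rather than the paper's dataset.

        import numpy as np
        from scipy.stats import norm

        # Illustrative population parameters for adolescent height (in) and weight (lb); assumptions only.
        mu_h, sd_h = 64.0, 3.5
        mu_w, sd_w = 125.0, 20.0
        rho = 0.6

        # Conditional distribution of weight given height h0 for a bivariate normal.
        h0 = mu_h                      # "average height"
        cond_mean = mu_w + rho * sd_w / sd_h * (h0 - mu_h)
        cond_sd = sd_w * np.sqrt(1.0 - rho**2)

        # P(120 <= weight <= 140 | height = average), the kind of question posed in the abstract.
        p = norm.cdf(140, cond_mean, cond_sd) - norm.cdf(120, cond_mean, cond_sd)
        print("P(120 <= W <= 140 | H = average) =", round(p, 3))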

  12. A Bayesian Semiparametric Item Response Model with Dirichlet Process Priors

    ERIC Educational Resources Information Center

    Miyazaki, Kei; Hoshino, Takahiro

    2009-01-01

    In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models…

  13. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for a spatial-temporal k-anonymity dataset. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
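
    A compact sketch of the matrix construction described above: row-normalizing mined transition counts into a one-step transition probability matrix and raising it to the power n for n-step predictions; the count matrix is illustrative.

        import numpy as np

        # Toy count matrix of single-step transitions between 4 anonymized location cells,
        # as would be mined from sequential rules (counts are illustrative).
        counts = np.array([[10.,  5.,  0.,  1.],
                           [ 2., 12.,  6.,  0.],
                           [ 0.,  4.,  9.,  7.],
                           [ 3.,  0.,  5., 11.]])

        # Row-normalize to obtain the one-step transition probability matrix P.
        P = counts / counts.sum(axis=1, keepdims=True)

        # Treating the requester's mobility as a stationary Markov process, the n-step
        # transition probabilities are obtained by raising P to the power n.
        n = 3
        Pn = np.linalg.matrix_power(P, n)

        current, target = 0, 2
        print(f"P(reach cell {target} from cell {current} in {n} steps) =", Pn[current, target])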

  14. Incorporating detection probability into northern Great Plains pronghorn population estimates

    USGS Publications Warehouse

    Jacques, Christopher N.; Jenks, Jonathan A.; Grovenburg, Troy W.; Klaver, Robert W.; DePerno, Christopher S.

    2014-01-01

    Pronghorn (Antilocapra americana) abundances commonly are estimated using fixed-wing surveys, but these estimates are likely to be negatively biased because of violations of key assumptions underpinning line-transect methodology. Reducing bias and improving precision of abundance estimates through use of detection probability and mark-resight models may allow for more responsive pronghorn management actions. Given their potential application in population estimation, we evaluated detection probability and mark-resight models for use in estimating pronghorn population abundance. We used logistic regression to quantify probabilities that detecting pronghorn might be influenced by group size, animal activity, percent vegetation, cover type, and topography. We estimated pronghorn population size by study area and year using mixed logit-normal mark-resight (MLNM) models. Pronghorn detection probability increased with group size, animal activity, and percent vegetation; overall detection probability was 0.639 (95% CI = 0.612–0.667) with 396 of 620 pronghorn groups detected. Despite model selection uncertainty, the best detection probability models were 44% (range = 8–79%) and 180% (range = 139–217%) greater than traditional pronghorn population estimates. Similarly, the best MLNM models were 28% (range = 3–58%) and 147% (range = 124–180%) greater than traditional population estimates. Detection probability of pronghorn was not constant but depended on both intrinsic and extrinsic factors. When pronghorn detection probability is a function of animal group size, animal activity, landscape complexity, and percent vegetation, traditional aerial survey techniques will result in biased pronghorn abundance estimates. Standardizing survey conditions, increasing resighting occasions, or accounting for variation in individual heterogeneity in mark-resight models will increase the accuracy and precision of pronghorn population estimates.

  15. Modeling pore corrosion in normally open gold-plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  16. Normal Tissue Complication Probability Modeling of Radiation-Induced Hypothyroidism After Head-and-Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakhshandeh, Mohsen; Hashemi, Bijan, E-mail: bhashemi@modares.ac.ir; Mahdavi, Seied Rabi Mehdi

    Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication.

  17. Assessing the uncertainty in a normal tissue complication probability difference (∆NTCP): radiation-induced liver disease (RILD) in liver tumour patients treated with proton vs X-ray therapy.

    PubMed

    Kobashi, Keiji; Prayongrat, Anussara; Kimoto, Takuya; Toramatsu, Chie; Dekura, Yasuhiro; Katoh, Norio; Shimizu, Shinichi; Ito, Yoichi M; Shirato, Hiroki

    2018-03-01

    Modern radiotherapy technologies such as proton beam therapy (PBT) permit dose escalation to the tumour and minimize unnecessary doses to normal tissues. To achieve appropriate patient selection for PBT, a normal tissue complication probability (NTCP) model can be applied to estimate the risk of treatment-related toxicity relative to X-ray therapy (XRT). A methodology for estimating the difference in NTCP (∆NTCP), including its uncertainty as a function of dose to normal tissue, is described in this study using the Delta method, a statistical method for evaluating the variance of functions, considering the variance-covariance matrix. We used a virtual individual patient dataset of radiation-induced liver disease (RILD) in liver tumour patients who were treated with XRT as a study model. As an alternative option for individual patient data, dose-bin data, which consists of the number of patients who developed toxicity in each dose level/bin and the total number of patients in that dose level/bin, are useful for multi-institutional data sharing. It provides comparable accuracy with individual patient data when using the Delta method. With reliable NTCP models, the ∆NTCP with uncertainty might potentially guide the use of PBT; however, clinical validation and a cost-effectiveness study are needed to determine the appropriate ∆NTCP threshold.

  18. Assessing the uncertainty in a normal tissue complication probability difference (∆NTCP): radiation-induced liver disease (RILD) in liver tumour patients treated with proton vs X-ray therapy

    PubMed Central

    Kobashi, Keiji; Kimoto, Takuya; Toramatsu, Chie; Dekura, Yasuhiro; Katoh, Norio; Shimizu, Shinichi; Ito, Yoichi M; Shirato, Hiroki

    2018-01-01

    Abstract Modern radiotherapy technologies such as proton beam therapy (PBT) permit dose escalation to the tumour and minimize unnecessary doses to normal tissues. To achieve appropriate patient selection for PBT, a normal tissue complication probability (NTCP) model can be applied to estimate the risk of treatment-related toxicity relative to X-ray therapy (XRT). A methodology for estimating the difference in NTCP (∆NTCP), including its uncertainty as a function of dose to normal tissue, is described in this study using the Delta method, a statistical method for evaluating the variance of functions, considering the variance–covariance matrix. We used a virtual individual patient dataset of radiation-induced liver disease (RILD) in liver tumour patients who were treated with XRT as a study model. As an alternative option for individual patient data, dose-bin data, which consists of the number of patients who developed toxicity in each dose level/bin and the total number of patients in that dose level/bin, are useful for multi-institutional data sharing. It provides comparable accuracy with individual patient data when using the Delta method. With reliable NTCP models, the ∆NTCP with uncertainty might potentially guide the use of PBT; however, clinical validation and a cost-effectiveness study are needed to determine the appropriate ∆NTCP threshold. PMID:29538699

  19. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2), on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
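
    A small simulation of the monkey model under the stated setup, with letter probabilities taken as spacings of a random division of the unit interval and a finite word-length cutoff, checking that the fitted rank-frequency exponent comes out near -1; the alphabet size, space probability and cutoff are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(7)

        # Letter probabilities drawn as spacings of a random division of the unit interval.
        n_letters = 8
        p_space = 0.2                                   # probability of the word-terminating space
        cuts = np.sort(rng.uniform(0.0, 1.0, n_letters - 1))
        letter_p = np.diff(np.concatenate(([0.0], cuts, [1.0]))) * (1.0 - p_space)

        # Enumerate word probabilities up to a finite length cutoff and rank them.
        max_len = 6
        word_probs = []
        current = [1.0]
        for _ in range(max_len):
            current = [c * lp for c in current for lp in letter_p]
            word_probs.extend(c * p_space for c in current)
        probs = np.sort(np.array(word_probs))[::-1]

        # Log-log regression of probability against rank; the slope should be near -1.
        ranks = np.arange(1, probs.size + 1)
        slope = np.polyfit(np.log(ranks), np.log(probs), 1)[0]
        print("estimated Zipf exponent:", round(slope, 3))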

  20. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  1. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
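
    The log-normal fit described above can be reproduced in outline as follows; the energies below are synthetic stand-ins drawn to mimic the reported median and dispersion, not the 50 modeled remnants.

        import numpy as np
        from scipy.stats import lognorm

        rng = np.random.default_rng(2)
        # Synthetic stand-in for the 50 inferred explosion energies (units of 1e51 erg):
        # log-normal with median 0.5 and a 1-sigma dispersion of a factor of ~3.
        energies = np.exp(rng.normal(np.log(0.5), np.log(3.0), 50))

        # Fit a log-normal with the location fixed at zero.
        shape, loc, scale = lognorm.fit(energies, floc=0)
        print("median energy [1e51 erg]:", scale)
        print("1-sigma dispersion factor:", np.exp(shape))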

  2. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function were used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  3. Effects of ignition location models on the burn patterns of simulated wildfires

    USGS Publications Warehouse

    Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.

    2011-01-01

    Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.

  4. Probabilities of Dilating Vesicoureteral Reflux in Children with First Time Simple Febrile Urinary Tract Infection, and Normal Renal and Bladder Ultrasound.

    PubMed

    Rianthavorn, Pornpimol; Tangngamsakul, Onjira

    2016-11-01

    We evaluated risk factors and assessed predicted probabilities for grade III or higher vesicoureteral reflux (dilating reflux) in children with a first simple febrile urinary tract infection and normal renal and bladder ultrasound. Data for 167 children 2 to 72 months old with a first febrile urinary tract infection and normal ultrasound were compared between those who had dilating vesicoureteral reflux (12 patients, 7.2%) and those who did not. Exclusion criteria consisted of history of prenatal hydronephrosis or familial reflux and complicated urinary tract infection. The logistic regression model was used to identify independent variables associated with dilating reflux. Predicted probabilities for dilating reflux were assessed. Patient age and prevalence of non-Escherichia coli bacteria were greater in children who had dilating reflux compared to those who did not (p = 0.02 and p = 0.004, respectively). Gender distribution was similar between the 2 groups (p = 0.08). In multivariate analysis older age and non-E. coli bacteria independently predicted dilating reflux, with odds ratios of 1.04 (95% CI 1.01-1.07, p = 0.02) and 3.76 (95% CI 1.05-13.39, p = 0.04), respectively. The impact of non-E. coli bacteria on predicted probabilities of dilating reflux increased with patient age. We support the concept of selective voiding cystourethrogram in children with a first simple febrile urinary tract infection and normal ultrasound. Voiding cystourethrogram should be considered in children with late onset urinary tract infection due to non-E. coli bacteria since they are at risk for dilating reflux even if the ultrasound is normal. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
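
    A sketch of how the reported odds ratios translate into predicted probabilities of dilating reflux; the intercept is a hypothetical value chosen only to make the example concrete, since the abstract does not report it.

        import numpy as np

        def predicted_probability(intercept, age_months, non_ecoli,
                                  or_age=1.04, or_non_ecoli=3.76):
            """Predicted probability of dilating reflux from a two-variable logistic model.

            The odds ratios per month of age (1.04) and for non-E. coli infection (3.76)
            are taken from the abstract; the intercept is an illustrative assumption.
            """
            logit = intercept + np.log(or_age) * age_months + np.log(or_non_ecoli) * non_ecoli
            return 1.0 / (1.0 + np.exp(-logit))

        # Example: compare a 6-month-old and a 36-month-old, with and without non-E. coli infection,
        # using a hypothetical intercept chosen so that baseline risk is low.
        intercept = -4.0
        for age in (6, 36):
            for flag in (0, 1):
                p = predicted_probability(intercept, age, flag)
                print(f"age {age:2d} mo, non-E. coli={flag}: predicted probability = {p:.3f}")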

  5. Effect of Cisplatin on Parotid Gland Function in Concomitant Radiochemotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hey, Jeremias; Setz, Juergen; Gerlach, Reinhard

    2009-12-01

    Purpose: To determine the influence of concomitant radiochemotherapy with cisplatin on parotid gland tissue complication probability. Methods and Materials: Patients treated with either radiotherapy (n = 61) or concomitant radiochemotherapy with cisplatin (n = 36) for head-and-neck cancer were prospectively evaluated. The dose and volume distributions of the parotid glands were noted in dose-volume histograms. Stimulated salivary flow rates were measured before, during the 2nd and 6th weeks and at 4 weeks and 6 months after the treatment. The data were fit using the normal tissue complication probability model of Lyman. Complication was defined as a reduction of the salivary flow rate to less than 25% of the pretreatment flow rate. Results: The normal tissue complication probability model parameter TD50 (the dose leading to a complication probability of 50%) was found to be 32.2 Gy at 4 weeks and 32.1 Gy at 6 months for concomitant radiochemotherapy and 41.1 Gy at 4 weeks and 39.6 Gy at 6 months for radiotherapy. The tolerated dose for concomitant radiochemotherapy was at least 7 to 8 Gy lower than for radiotherapy alone at TD50. Conclusions: In this study, the concomitant radiochemotherapy tended to cause a higher probability of parotid gland tissue damage. Advanced radiotherapy planning approaches such as intensity-modulated radiotherapy may be particularly important for parotid sparing in radiochemotherapy because of cisplatin-related increased radiosensitivity of glands.

  6. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
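
    One simple way to draw such a plot with simultaneous intervals is to translate a level-α Kolmogorov-Smirnov band into bounds on each plotted point; the paper compares several constructions, so this is only illustrative, and it ignores the fact that the mean and standard deviation are estimated from the sample.

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy import stats

        rng = np.random.default_rng(3)
        x = rng.normal(10.0, 2.0, 50)           # sample to be assessed for normality
        n = len(x)
        alpha = 0.05

        # Normal probability plot: ordered data vs. normal quantiles at plotting positions.
        p = (np.arange(1, n + 1) - 0.5) / n
        q = stats.norm.ppf(p)
        x_sorted = np.sort(x)

        # Level-alpha Kolmogorov-Smirnov band translated into bounds on each plotted point.
        d = stats.kstwo.ppf(1 - alpha, n)
        lo = stats.norm.ppf(np.clip(p - d, 1e-9, 1 - 1e-9))
        hi = stats.norm.ppf(np.clip(p + d, 1e-9, 1 - 1e-9))

        mean, sd = x.mean(), x.std(ddof=1)
        plt.plot(q, x_sorted, "o", label="ordered data")
        plt.plot(q, mean + sd * q, "-", label="reference line")
        plt.fill_between(q, mean + sd * lo, mean + sd * hi, alpha=0.2, label="simultaneous band")
        plt.xlabel("standard normal quantiles")
        plt.ylabel("ordered sample values")
        plt.legend()
        plt.show()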

  7. Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution

    ERIC Educational Resources Information Center

    Verkuilen, Jay; Smithson, Michael

    2012-01-01

    Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…

  8. Epidemiological modeling in a branching population. Particular case of a general SIS model with two age classes.

    PubMed

    Jacob, C; Viet, A F

    2003-03-01

    This paper covers the elaboration of a general class of multitype branching processes for modeling, in a branching population, the evolution of a disease with horizontal and vertical transmissions. When the size of the population may tend to infinity, normalization must be carried out. As the initial size tends to infinity, the normalized model converges a.s. to a dynamical system the solution of which is the probability law of the state of health for an individual's line of ancestors. The focal point of this study concerns the transient and asymptotical behaviors of an SIS model with two age classes in a branching population. We will compare the asymptotical probability of extinction on the scale of a finite population and on the scale of an individual in an infinite population: when the rates of transmission are small compared to the rate of renewing the population of susceptibles, the two models lead to a.s. extinction, giving consistent results, which no longer holds in the opposite situation of high transmission rates. In that case the size of the population plays a crucial role in the spreading of the disease.

  9. A prospective cohort study on radiation-induced hypothyroidism: development of an NTCP model.

    PubMed

    Boomsma, Marjolein J; Bijl, Hendrik P; Christianen, Miranda E M C; Beetz, Ivo; Chouvalova, Olga; Steenbakkers, Roel J H M; van der Laan, Bernard F A M; Wolffenbuttel, Bruce H R; Oosting, Sjoukje F; Schilstra, Cornelis; Langendijk, Johannes A

    2012-11-01

    To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced hypothyroidism. The thyroid-stimulating hormone (TSH) level of 105 patients treated with (chemo-) radiation therapy for head-and-neck cancer was prospectively measured during a median follow-up of 2.5 years. Hypothyroidism was defined as elevated serum TSH with decreased or normal free thyroxin (T4). A multivariate logistic regression model with bootstrapping was used to determine the most important prognostic variables for radiation-induced hypothyroidism. Thirty-five patients (33%) developed primary hypothyroidism within 2 years after radiation therapy. An NTCP model based on 2 variables, including the mean thyroid gland dose and the thyroid gland volume, was most predictive for radiation-induced hypothyroidism. NTCP values increased with higher mean thyroid gland dose (odds ratio [OR]: 1.064/Gy) and decreased with higher thyroid gland volume (OR: 0.826/cm(3)). Model performance was good with an area under the curve (AUC) of 0.85. This is the first prospective study resulting in an NTCP model for radiation-induced hypothyroidism. The probability of hypothyroidism rises with increasing dose to the thyroid gland, whereas it reduces with increasing thyroid gland volume. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
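
    A simplified, non-Bayesian stand-in for the BJP idea follows: Box-Cox transform each margin, fit a joint normal, and condition on a predictor value, using synthetic two-site streamflow data. The Bayesian inference, missing-data handling and forecast verification of the actual approach are not reproduced here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Synthetic seasonal data: a climate-index predictor and streamflows at two sites (illustrative).
        n = 60
        predictor = rng.normal(0.0, 1.0, n)
        flow_a = np.exp(0.8 * predictor + rng.normal(0.0, 0.5, n)) * 100.0
        flow_b = np.exp(0.6 * predictor + rng.normal(0.0, 0.6, n)) * 80.0

        # Box-Cox transform each margin, then model the joint with a multivariate normal.
        ta, lam_a = stats.boxcox(flow_a)
        tb, lam_b = stats.boxcox(flow_b)
        Z = np.column_stack([predictor, ta, tb])
        mean = Z.mean(axis=0)
        cov = np.cov(Z, rowvar=False)

        # Conditional distribution of the transformed flows given an observed predictor value,
        # using the standard Gaussian conditioning formulas.
        x0 = 1.0
        mu1, mu2 = mean[0], mean[1:]
        S11, S12, S22 = cov[0, 0], cov[0, 1:], cov[1:, 1:]
        cond_mean = mu2 + S12 / S11 * (x0 - mu1)
        cond_cov = S22 - np.outer(S12, S12) / S11
        print("conditional forecast mean (transformed space):", cond_mean)
        print("conditional forecast covariance:\n", cond_cov)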

  11. The effects of small field dosimetry on the biological models used in evaluating IMRT dose distributions

    NASA Astrophysics Data System (ADS)

    Cardarelli, Gene A.

    The primary goal in radiation oncology is to deliver lethal radiation doses to tumors, while minimizing dose to normal tissue. IMRT has the capability to increase the dose to the targets and decrease the dose to normal tissue, increasing local control, decreasing toxicity and allowing for effective dose escalation. This advanced technology does present complex dose distributions that are not easily verified. Furthermore, the dose inhomogeneity caused by non-uniform dose distributions seen in IMRT treatments has prompted the development of biological models attempting to characterize the dose-volume effect in the response of organized tissues to radiation. Dosimetry of small fields can be quite challenging when measuring dose distributions for high-energy X-ray beams used in IMRT. The proper modeling of these small field distributions is essential in reproducing accurate dose for IMRT. This evaluation was conducted to quantify the effects of small field dosimetry on IMRT plan dose distributions and the effects on four biological model parameters. The four biological models evaluated were: (1) the generalized Equivalent Uniform Dose (gEUD), (2) the Tumor Control Probability (TCP), (3) the Normal Tissue Complication Probability (NTCP) and (4) the Probability of uncomplicated Tumor Control (P+). These models are used to estimate local control, survival, complications and uncomplicated tumor control. This investigation compares three distinct small field dose algorithms. Dose algorithms were created using film, small ion chamber, and a combination of ion chamber measurements and small field fitting parameters. Due to the nature of uncertainties in small field dosimetry and the dependence of biological models on dose volume information, this examination quantifies the effects of small field dosimetry techniques on radiobiological models and recommends pathways to reduce the errors in using these models to evaluate IMRT dose distributions. This study demonstrates the importance of valid physical dose modeling prior to the use of biological modeling. The success of using biological function data, such as hypoxia, in clinical IMRT planning will greatly benefit from the results of this study.

  12. Probabilistic analysis of preload in the abutment screw of a dental implant complex.

    PubMed

    Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R

    2008-09-01

    Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
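
    A Monte Carlo sketch in the spirit of the analysis: torque and friction are treated as random variables and propagated to preload, here through the standard short-form torque-preload relation with a crude nut-factor approximation rather than the paper's finite element model; all distributions and the target window are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000

        # Random inputs (illustrative distributions, not the paper's values):
        torque = rng.normal(0.32, 0.03, n)        # applied tightening torque, N*m
        mu = rng.normal(0.26, 0.05, n)            # thread/bearing friction coefficient (dry case)
        mu = np.clip(mu, 0.05, None)
        d = 2.0e-3                                # nominal screw diameter, m

        # Short-form torque-preload relation T = K * d * F with a friction-dependent nut factor.
        # (The study itself computes preload with a finite element model; this is a stand-in.)
        K = 0.04 + 1.33 * mu                      # crude linear nut-factor approximation (assumption)
        preload = torque / (K * d)

        target_lo, target_hi = 400.0, 600.0       # hypothetical target preload window, N
        p_target = np.mean((preload >= target_lo) & (preload <= target_hi))
        print(f"mean preload = {preload.mean():.0f} N, sd = {preload.std():.0f} N")
        print(f"P(preload in target window) = {p_target:.2f}")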

  13. Data normalization in biosurveillance: an information-theoretic approach.

    PubMed

    Peter, William; Najmi, Amir H; Burkom, Howard

    2007-10-11

    An approach to identifying public health threats by characterizing syndromic surveillance data in terms of its surprisability is discussed. Surprisability in our model is measured by assigning a probability distribution to a time series, and then calculating its entropy, leading to a straightforward designation of an alert. Initial application of our method is to investigate the applicability of using suitably-normalized syndromic counts (i.e., proportions) to improve early event detection.
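
    A minimal sketch of the entropy-based surprisability idea, using made-up syndromic counts and an arbitrary alert threshold (the paper's actual normalization and alerting rule may differ):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a probability vector, ignoring zero bins."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Hypothetical daily syndromic counts across categories
counts = np.array([12, 40, 7, 3, 18], dtype=float)
proportions = counts / counts.sum()          # normalization to proportions

h = entropy(proportions)
h_max = np.log(len(counts))                  # entropy of a uniform distribution

# One possible "surprisability" alert: flag days whose normalized entropy
# drops well below a baseline (threshold chosen arbitrarily here).
if h / h_max < 0.75:
    print(f"ALERT: entropy ratio {h / h_max:.2f}")
else:
    print(f"no alert: entropy ratio {h / h_max:.2f}")
```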

  14. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
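
    One ingredient of the NB process is the gamma-Poisson construction: a Poisson count whose rate is gamma distributed is marginally negative binomial. A quick numerical check of that relationship (not the full nonparametric process):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shape, scale = 3.0, 2.0            # gamma shape r and scale theta

# Hierarchical draw: lambda ~ Gamma(r, theta), counts ~ Poisson(lambda)
lam = rng.gamma(shape, scale, size=200_000)
counts = rng.poisson(lam)

# Marginally counts ~ NB(r, p) with success probability p = 1 / (1 + theta)
k = np.arange(0, 30)
empirical = np.bincount(counts, minlength=k.size)[:k.size] / counts.size
theoretical = stats.nbinom.pmf(k, shape, 1.0 / (1.0 + scale))

print(np.max(np.abs(empirical - theoretical)))   # should be on the order of 1e-3
```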

  15. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma center, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; consequently, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, which automatically adjusts the model complexity. The method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
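
    A minimal sketch of the analytic conditioning step described here, for a stationary squared-exponential GP on a 1-D grid with synthetic chord (line-integral) measurements; the paper's non-stationary kernel and real tomography geometry are not reproduced.

```python
import numpy as np

def sq_exp_kernel(x, length=0.15, var=1.0):
    """Squared-exponential covariance matrix on a set of 1-D points."""
    d = x[:, None] - x[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Discretized emission profile on a grid (the field we want to infer)
x = np.linspace(0.0, 1.0, 80)
K = sq_exp_kernel(x)

# A: each row is one "line integral" = average of the field over a chord of pixels
rng = np.random.default_rng(2)
n_chords, sigma = 12, 0.05
A = np.zeros((n_chords, x.size))
for i, (lo, hi) in enumerate(zip(rng.uniform(0.0, 0.5, n_chords),
                                 rng.uniform(0.5, 1.0, n_chords))):
    A[i, (x >= lo) & (x <= hi)] = 1.0 / x.size

true_field = np.exp(-((x - 0.5) / 0.12) ** 2)            # synthetic emissivity
y = A @ true_field + rng.normal(0.0, sigma, n_chords)    # noisy measurements

# GP posterior: mean = K A^T (A K A^T + sigma^2 I)^{-1} y, covariance analogous
S = A @ K @ A.T + sigma**2 * np.eye(n_chords)
gain = K @ A.T @ np.linalg.inv(S)
post_mean = gain @ y
post_cov = K - gain @ A @ K
post_std = np.sqrt(np.clip(np.diag(post_cov), 0.0, None))

print(post_mean.shape, post_std.max())
```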

  16. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma center, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; consequently, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, which automatically adjusts the model complexity. The method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  17. Rectal bleeding, fecal incontinence, and high stool frequency after conformal radiotherapy for prostate cancer: Normal tissue complication probability modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeters, Stephanie; Hoogeman, Mischa S.; Heemsbergen, Wilma D.

    2006-09-01

    Purpose: To analyze whether inclusion of predisposing clinical features in the Lyman-Kutcher-Burman (LKB) normal tissue complication probability (NTCP) model improves the estimation of late gastrointestinal toxicity. Methods and Materials: This study includes 468 prostate cancer patients participating in a randomized trial comparing 68 with 78 Gy. We fitted the probability of developing late toxicity within 3 years (rectal bleeding, high stool frequency, and fecal incontinence) with the original, and a modified LKB model, in which a clinical feature (e.g., history of abdominal surgery) was taken into account by fitting subset specific TD50s. The ratio of these TD50s is the dose-modifying factor for that clinical feature. Dose distributions of anorectal (bleeding and frequency) and anal wall (fecal incontinence) were used. Results: The modified LKB model gave significantly better fits than the original LKB model. Patients with a history of abdominal surgery had a lower tolerance to radiation than did patients without previous surgery, with a dose-modifying factor of 1.1 for bleeding and of 2.5 for fecal incontinence. The dose-response curve for bleeding was approximately two times steeper than that for frequency and three times steeper than that for fecal incontinence. Conclusions: Inclusion of predisposing clinical features significantly improved the estimation of the NTCP. For patients with a history of abdominal surgery, more severe dose constraints should therefore be used during treatment plan optimization.
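
    A minimal sketch of the LKB NTCP curve with a subset-specific TD50 expressed through a dose-modifying factor; the TD50, m, and DMF values below are placeholders, not the parameters fitted in this study.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(geud, td50, m):
    """Lyman-Kutcher-Burman NTCP: probit curve in the (g)EUD."""
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)

# Hypothetical parameters for a rectal-bleeding-like endpoint
td50_base, m = 80.0, 0.15          # Gy, slope parameter

# Dose-modifying factor for a predisposing clinical feature (e.g., prior surgery):
# the subset's TD50 is the baseline TD50 divided by the factor.
dmf = 1.1
td50_surgery = td50_base / dmf

geud = np.linspace(40, 90, 6)
print("gEUD (Gy)       :", geud)
print("NTCP, no surgery:", np.round(lkb_ntcp(geud, td50_base, m), 3))
print("NTCP, surgery   :", np.round(lkb_ntcp(geud, td50_surgery, m), 3))
```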

  18. Using satellite remote sensing to model and map the distribution of Bicknell's thrush (Catharus bicknelli) in the White Mountains of New Hampshire

    NASA Astrophysics Data System (ADS)

    Hale, Stephen Roy

    Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof-of-concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to the nearest Fir Sapling cover type. Solving the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. The topographically normalized imagery features Normalized Difference Vegetation Index and Band 1 (blue) were used to estimate dominant vegetation height using multiple linear regression, and a Digital Elevation Model was used to estimate elevation. Distance to the nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation, using Bicknell's Thrush data withheld from model calibration, revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60. Probabilities from 0.40 to <0.50 and greater than 0.60 significantly underestimated and overestimated presence, respectively. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, which is accompanied by a concomitant decrease in land area. The greater land area of lower-density habitat may account for more total individuals and greater reproductive output than the less abundant, higher-density habitat. Efforts to conserve areas of highest individual density, under the assumption that density reflects habitat quality, could therefore target the smallest fraction of the total population.

  19. Modeling of recovery profiles in mentally disabled and intact patients after sevoflurane anesthesia; a pharmacodynamic analysis.

    PubMed

    Shin, Teo Jeon; Noh, Gyu-Jeong; Koo, Yong-Seo; Han, Dong Woo

    2014-11-01

    Mentally disabled patients show different recovery profiles from normal patients after general anesthesia. However, the dose-recovery profiles of mentally disabled patients have never been compared to those of normal patients. Twenty patients (10 mentally disabled and 10 mentally intact) scheduled for dental surgery under general anesthesia were recruited. Sevoflurane was administered to maintain anesthesia during dental treatment. At the end of the surgery, sevoflurane was discontinued. End-tidal sevoflurane and recovery of consciousness (ROC) were recorded after sevoflurane discontinuation. The pharmacodynamic relation between the probability of ROC and end-tidal sevoflurane concentration was analyzed using NONMEM software (version VII). The end-tidal sevoflurane concentration associated with 50% probability of ROC (C₅₀) and the γ value were lower in the mentally disabled patients (C₅₀=0.37 vol %, γ=16.5 in mentally intact patients; C₅₀=0.19 vol %, γ=4.58 in mentally disabled patients). Mentality was a significant covariate of C₅₀ for ROC and of the γ value in the pharmacodynamic model. A sigmoid Emax model explains the pharmacodynamic relationship between end-tidal sevoflurane concentration and ROC. Mentally disabled patients may recover more slowly from anesthesia, with lower sevoflurane concentrations at ROC, compared to normal patients.
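
    A minimal sketch of a sigmoid Emax curve for the probability of ROC versus end-tidal concentration, written here in the form P(ROC) = C50^γ / (C50^γ + Ce^γ) and evaluated at the point estimates quoted above; the study's exact model parameterization may differ.

```python
import numpy as np

def p_roc(ce, c50, gamma):
    """Sigmoid Emax: probability of recovery of consciousness at end-tidal
    concentration ce; the probability is 50% when ce == c50."""
    return c50**gamma / (c50**gamma + ce**gamma)

ce = np.linspace(0.0, 1.0, 11)            # end-tidal sevoflurane, vol %
intact   = p_roc(ce, c50=0.37, gamma=16.5)
disabled = p_roc(ce, c50=0.19, gamma=4.58)

for c, pi, pd in zip(ce, intact, disabled):
    print(f"{c:0.1f} vol%  intact {pi:0.2f}  disabled {pd:0.2f}")
```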

  20. Modeling of Recovery Profiles in Mentally Disabled and Intact Patients after Sevoflurane Anesthesia; A Pharmacodynamic Analysis

    PubMed Central

    Shin, Teo Jeon; Noh, Gyu-Jeong; Koo, Yong-Seo

    2014-01-01

    Purpose Mentally disabled patients show different recovery profiles from normal patients after general anesthesia. However, the dose-recovery profiles of mentally disabled patients have never been compared to those of normal patients. Materials and Methods Twenty patients (10 mentally disabled and 10 mentally intact) scheduled for dental surgery under general anesthesia were recruited. Sevoflurane was administered to maintain anesthesia during dental treatment. At the end of the surgery, sevoflurane was discontinued. End-tidal sevoflurane and recovery of consciousness (ROC) were recorded after sevoflurane discontinuation. The pharmacodynamic relation between the probability of ROC and end-tidal sevoflurane concentration was analyzed using NONMEM software (version VII). Results The end-tidal sevoflurane concentration associated with 50% probability of ROC (C50) and the γ value were lower in the mentally disabled patients (C50=0.37 vol %, γ=16.5 in mentally intact patients; C50=0.19 vol %, γ=4.58 in mentally disabled patients). Mentality was a significant covariate of C50 for ROC and of the γ value in the pharmacodynamic model. Conclusion A sigmoid Emax model explains the pharmacodynamic relationship between end-tidal sevoflurane concentration and ROC. Mentally disabled patients may recover more slowly from anesthesia, with lower sevoflurane concentrations at ROC, compared to normal patients. PMID:25323901

  1. Radiobiological Impact of Reduced Margins and Treatment Technique for Prostate Cancer in Terms of Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Ingelise, E-mail: inje@rn.d; Carl, Jesper; Lund, Bente

    2011-07-01

    Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications.
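
    A bare-bones sketch of a Poisson TCP calculation built on LQ cell survival for a uniformly irradiated target; the clonogen number and radiosensitivity values are illustrative, not those used in the study.

```python
import numpy as np

def tcp_poisson_lq(n_clonogens, alpha, alpha_beta, dose_per_fx, n_fractions):
    """Poisson TCP with LQ cell survival, uniform dose to the target."""
    total_dose = dose_per_fx * n_fractions
    surviving_fraction = np.exp(-alpha * total_dose *
                                (1.0 + dose_per_fx / alpha_beta))
    return np.exp(-n_clonogens * surviving_fraction)

# Illustrative prostate-like values (hypothetical): 82 Gy in 41 fractions of 2 Gy
print(tcp_poisson_lq(n_clonogens=1e7, alpha=0.15, alpha_beta=3.0,
                     dose_per_fx=2.0, n_fractions=41))
```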

  2. Elastic K-means using posterior probability.

    PubMed

    Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

    2017-01-01

    The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with a soft capability whereby each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. We therefore integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.

  3. Short-term droughts forecast using Markov chain model in Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Rahmat, Siti Nazahiyah; Jayasuriya, Niranjali; Bhuiyan, Muhammed A.

    2017-07-01

    A comprehensive risk management strategy for dealing with drought should include both short-term and long-term planning. The objective of this paper is to present an early warning method to forecast drought using the Standardised Precipitation Index (SPI) and a non-homogeneous Markov chain model. A model such as this is useful for short-term planning. The developed method has been used to forecast droughts at a number of meteorological monitoring stations that have been regionalised into six (6) homogeneous clusters with similar drought characteristics based on SPI. The non-homogeneous Markov chain model was used to estimate drought probabilities and drought predictions up to 3 months ahead. The drought severity classes defined using the SPI were computed at a 12-month time scale. The drought probabilities and the predictions were computed for six clusters that depict similar drought characteristics in Victoria, Australia. Overall, the drought severity class predicted was quite similar for all the clusters, with the non-drought class probabilities ranging from 49 to 57 %. For all clusters, the near normal class had a probability of occurrence varying from 27 to 38 %. For the moderate and severe classes, the probabilities ranged from 2 to 13 % and 3 to 1 %, respectively. The developed model predicted drought situations 1 month ahead reasonably well. However, 2 and 3 months ahead predictions should be used with caution until the models are developed further.
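
    A minimal sketch of the forecasting step: propagating a current drought-class probability vector through lead-time-specific transition matrices of a non-homogeneous Markov chain. The matrices and classes below are invented for illustration.

```python
import numpy as np

# Drought classes: 0 = non-drought, 1 = near normal, 2 = moderate, 3 = severe.
# One (made-up) transition matrix per lead time; each row sums to 1.
P_month1 = np.array([[0.70, 0.20, 0.07, 0.03],
                     [0.30, 0.45, 0.18, 0.07],
                     [0.10, 0.30, 0.40, 0.20],
                     [0.05, 0.15, 0.30, 0.50]])
P_month2 = np.array([[0.65, 0.22, 0.09, 0.04],
                     [0.28, 0.44, 0.20, 0.08],
                     [0.09, 0.28, 0.42, 0.21],
                     [0.05, 0.14, 0.31, 0.50]])

state_now = np.array([0.0, 1.0, 0.0, 0.0])   # currently "near normal"

# Non-homogeneous chain: use a different matrix for each step ahead
p_1month = state_now @ P_month1
p_2month = p_1month @ P_month2

print("1 month ahead :", np.round(p_1month, 3))
print("2 months ahead:", np.round(p_2month, 3))
```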

  4. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  5. Probable alpha and 14C cluster emission from hyper Ac nuclei

    NASA Astrophysics Data System (ADS)

    Santhosh, K. P.

    2013-10-01

    A systematic study of the probability for the emission of 4He and 14C clusters from hyper {Λ/207-234}Ac and non-strange normal 207-234Ac nuclei is performed for the first time using our fission model, the Coulomb and proximity potential model (CPPM). The predicted half-lives show that hyper {Λ/207-234}Ac nuclei are unstable against 4He emission, and that 14C emission from hyper {Λ/217-228}Ac is favorable for measurement. Our study also shows that hyper {Λ/207-234}Ac are stable against hyper {Λ/4}He and {Λ/14}C emission. The role of the neutron shell closure (N = 126) in the hyper {Λ/214}Fr daughter and the role of proton/neutron shell closures (Z ≈ 82, N = 126) in the hyper {Λ/210}Bi daughter are also revealed. As hyper-nuclei decay to normal nuclei by mesonic/non-mesonic decay, and since most of the predicted half-lives for 4He and 14C emission from normal Ac nuclei are favourable for measurement, we presume that alpha and 14C cluster emission from hyper Ac nuclei can be detected in the laboratory in a cascade (two-step) process.

  6. Dose response explorer: an integrated open-source tool for exploring and modelling radiotherapy dose volume outcome relationships

    NASA Astrophysics Data System (ADS)

    El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.

    2006-11-01

    Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc) and clinical factors (age, gender, stage, etc) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristics curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.

  7. A calibrated agent-based computer model of stochastic cell dynamics in normal human colon crypts useful for in silico experiments.

    PubMed

    Bravo, Rafael; Axelrod, David E

    2013-11-18

    Normal colon crypts consist of stem cells, proliferating cells, and differentiated cells. Abnormal rates of proliferation and differentiation can initiate colon cancer. We have measured the variation in the number of each of these cell types in multiple crypts in normal human biopsy specimens. This has provided the opportunity to produce a calibrated computational model that simulates cell dynamics in normal human crypts, and by changing model parameter values, to simulate the initiation and treatment of colon cancer. An agent-based model of stochastic cell dynamics in human colon crypts was developed in the multi-platform open-source application NetLogo. It was assumed that each cell's probability of proliferation and probability of death is determined by its position in two gradients along the crypt axis, a divide gradient and a die gradient. A cell's type is not intrinsic, but rather is determined by its position in the divide gradient. Cell types are dynamic, plastic, and inter-convertible. Parameter values were determined for the shape of each of the gradients, and for a cell's response to the gradients. This was done by parameter sweeps that indicated the values that reproduced the measured number and variation of each cell type, and produced quasi-stationary stochastic dynamics. The behavior of the model was verified by its ability to reproduce the experimentally observed monoclonal conversion by neutral drift, the formation of adenomas resulting from mutations either at the top or bottom of the crypt, and by the robust ability of crypts to recover from perturbation by cytotoxic agents. One use of the virtual crypt model was demonstrated by evaluating different cancer chemotherapy and radiation scheduling protocols. A virtual crypt has been developed that simulates the quasi-stationary stochastic cell dynamics of normal human colon crypts. It is unique in that it has been calibrated with measurements of human biopsy specimens, and it can simulate the variation of cell types in addition to the average number of each cell type. The utility of the model was demonstrated with in silico experiments that evaluated cancer therapy protocols. The model is available for others to conduct additional experiments.

  8. A probabilistic NF2 relational algebra for integrated information retrieval and database systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuhr, N.; Roelleke, T.

    The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document are represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relations such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers like in most IR models. This effect also can be used for typical database queries involving imprecise attribute values as well as for combinations of database and IR queries.

  9. Stochastic methods for analysis of power flow in electric networks

    NASA Astrophysics Data System (ADS)

    1982-09-01

    The modeling and effects of probabilistic behavior on steady-state power system operation were analyzed. A solution to the steady-state network flow equations that adheres both to Kirchhoff's laws and to probabilistic laws was obtained, using either combinatorial or functional approximation techniques. The development of sound techniques for producing meaningful input data is examined. Electric demand modeling, equipment failure analysis, and algorithm development are investigated. Two major development areas are described: a decomposition of stochastic processes which yields stationarity, ergodicity, and even normality; and a powerful surrogate probability approach using proportions of time, which allows the calculation of joint events from one-dimensional probability spaces.

  10. Dosimetry in nuclear medicine therapy: radiobiology application and results.

    PubMed

    Strigari, L; Benassi, M; Chiesa, C; Cremonesi, M; Bodei, L; D'Andrea, M

    2011-04-01

    The linear quadratic model (LQM) has been widely used to assess the radiobiological damage to tissue caused by external beam fractionated radiotherapy and has more recently been extended to encompass general continuous, time-varying dose-rate protocols such as targeted radionuclide therapy (TRT). In this review, we provide the basic aspects of radiobiology from a theoretical point of view, starting from the "four Rs" of radiobiology and introducing the biologically effective dose (BED), which may be used to quantify the impact of a treatment on both tumors and normal tissues. We also present the main parameters required in the LQM, illustrate the main models of tumor control probability and normal tissue complication probability, and summarize the main dose-effect responses reported in the literature, which demonstrate the tentative link between targeted radiotherapy doses and those used in conventional radiotherapy. A better understanding of the radiobiology and mechanisms of action of TRT could help describe the clinical data and guide the development of future compounds and the design of prospective clinical trials.
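
    For the fractionated case mentioned above, the biologically effective dose reduces to BED = nd(1 + d/(α/β)); a continuous, decaying dose rate as in TRT would require an additional dose-rate factor that is not shown in this sketch.

```python
def bed_fractionated(dose_per_fx, n_fractions, alpha_beta):
    """Biologically effective dose for n fractions of size d: BED = nd(1 + d/(a/b))."""
    total = dose_per_fx * n_fractions
    return total * (1.0 + dose_per_fx / alpha_beta)

# Example: compare a conventional scheme against a hypofractionated one
print(bed_fractionated(2.0, 30, alpha_beta=10.0))   # 60 Gy in 2 Gy fx -> 72 Gy_10
print(bed_fractionated(3.0, 20, alpha_beta=10.0))   # 60 Gy in 3 Gy fx -> 78 Gy_10
```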

  11. A Prospective Cohort Study on Radiation-induced Hypothyroidism: Development of an NTCP Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boomsma, Marjolein J.; Bijl, Hendrik P.; Christianen, Miranda E.M.C.

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced hypothyroidism. Methods and Materials: The thyroid-stimulating hormone (TSH) level of 105 patients treated with (chemo-) radiation therapy for head-and-neck cancer was prospectively measured during a median follow-up of 2.5 years. Hypothyroidism was defined as elevated serum TSH with decreased or normal free thyroxin (T4). A multivariate logistic regression model with bootstrapping was used to determine the most important prognostic variables for radiation-induced hypothyroidism. Results: Thirty-five patients (33%) developed primary hypothyroidism within 2 years after radiation therapy. An NTCP model based on 2 variables, including the mean thyroid gland dose and the thyroid gland volume, was most predictive for radiation-induced hypothyroidism. NTCP values increased with higher mean thyroid gland dose (odds ratio [OR]: 1.064/Gy) and decreased with higher thyroid gland volume (OR: 0.826/cm³). Model performance was good with an area under the curve (AUC) of 0.85. Conclusions: This is the first prospective study resulting in an NTCP model for radiation-induced hypothyroidism. The probability of hypothyroidism rises with increasing dose to the thyroid gland, whereas it reduces with increasing thyroid gland volume.

  12. Probabilistic Tsunami Hazard Assessment along Nankai Trough (2) a comprehensive assessment including a variety of earthquake source areas other than those that the Earthquake Research Committee, Japanese government (2013) showed

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.

    2016-12-01

    For the forthcoming Nankai earthquake of M8 to M9 class, the Earthquake Research Committee (ERC)/Headquarters for Earthquake Research Promotion, Japanese government (2013) showed 15 examples of earthquake source areas (ESAs) as possible combinations of 18 sub-regions (6 segments along the trough and 3 segments normal to the trough) and assessed the occurrence probability within the next 30 years (from Jan. 1, 2013) at 60% to 70%. Hirata et al. (2015, AGU) presented a Probabilistic Tsunami Hazard Assessment (PTHA) along the Nankai Trough for the case where the diversity of the next event's ESA is modeled by only those 15 ESAs. In this study, we newly set 70 ESAs in addition to the previous 15, so that a total of 85 ESAs is considered. By producing tens of fault models, with various slip distribution patterns, for each of the 85 ESAs, we obtain 2500 fault models in addition to the previous 1400, so that a total of 3900 fault models is considered to represent the diversity of the next Nankai earthquake rupture (Toyama et al., 2015, JpGU). For the PTHA, the occurrence probability of the next Nankai earthquake is distributed over the possible 3900 fault models according to their similarity to the extents of the 15 ESAs (Abe et al., 2015, JpGU). The main concept of this probability distribution is: (i) earthquakes rupturing any of the 15 ESAs that ERC (2013) showed are most likely to occur; (ii) earthquakes rupturing ESAs whose along-trough extent matches one of the 15 ESAs but whose trough-normal extent differs are second most likely; (iii) earthquakes rupturing ESAs whose along-trough and trough-normal extents both differ from any of the 15 ESAs rarely occur. Procedures for tsunami simulation and probabilistic tsunami hazard synthesis are the same as in Hirata et al. (2015). A tsunami hazard map, synthesized under the assumption that Nankai earthquakes can be modeled as a renewal process based on a BPT distribution with a mean recurrence interval of 88.2 years (ERC, 2013) and an aperiodicity of 0.22 (the median of the values 0.20 to 0.24 that ERC (2013) recommended), suggests that several coastal segments along the southwest coast of Shikoku Island, the southeast coast of the Kii Peninsula, and the west coast of the Izu Peninsula have an exceedance probability above 26% that the maximum water rise exceeds 10 meters at any coastal point within the next 30 years.
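
    Since the BPT distribution is the inverse Gaussian, the 30-year conditional occurrence probability used in this kind of synthesis can be sketched with scipy; the elapsed time since the last event below is a placeholder, not a value taken from the study.

```python
from scipy.stats import invgauss

mean_interval = 88.2     # years (ERC, 2013)
aperiodicity = 0.22      # coefficient of variation alpha

# BPT(mean mu, aperiodicity a) equals an inverse Gaussian with mean mu and
# shape lambda = mu / a**2; in scipy terms invgauss(mu=a**2, scale=mu/a**2).
dist = invgauss(mu=aperiodicity**2, scale=mean_interval / aperiodicity**2)

elapsed = 67.0           # placeholder for years elapsed since the last event
window = 30.0

# Conditional probability of an event in the next `window` years given survival
p30 = (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)
print(f"30-year conditional probability: {p30:.2f}")
```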

  13. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors Environmetrics Published by John Wiley & Sons Ltd.
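
    A minimal sketch of evaluating a weighted truncated-normal / log-normal mixture density for wind speed, with hand-picked parameters standing in for the ones EMOS would estimate by optimizing proper scoring rules.

```python
import numpy as np
from scipy import stats

def mixture_pdf(x, w, mu_tn, sigma_tn, mu_ln, sigma_ln):
    """Weighted mixture of a normal truncated at zero and a log-normal."""
    a = (0.0 - mu_tn) / sigma_tn                       # lower truncation at 0
    tn = stats.truncnorm.pdf(x, a, np.inf, loc=mu_tn, scale=sigma_tn)
    ln = stats.lognorm.pdf(x, s=sigma_ln, scale=np.exp(mu_ln))
    return w * tn + (1.0 - w) * ln

# Hypothetical parameters (in practice linked to the ensemble mean and spread)
x = np.linspace(0.01, 20.0, 2000)
pdf = mixture_pdf(x, w=0.6, mu_tn=6.0, sigma_tn=2.5, mu_ln=1.7, sigma_ln=0.4)

# The density should integrate to roughly 1 over the positive half-line
dx = x[1] - x[0]
print(pdf.sum() * dx)
```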

  14. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent ``transfer'' while scalar exchanges between particles represent ``mixing.'' The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  15. Radiobiological impact of reduced margins and treatment technique for prostate cancer in terms of tumor control probability (TCP) and normal tissue complication probability (NTCP).

    PubMed

    Jensen, Ingelise; Carl, Jesper; Lund, Bente; Larsen, Erik H; Nielsen, Jane

    2011-01-01

    Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  16. Acute Brain Dysfunction: Development and Validation of a Daily Prediction Model.

    PubMed

    Marra, Annachiara; Pandharipande, Pratik P; Shotwell, Matthew S; Chandrasekhar, Rameela; Girard, Timothy D; Shintani, Ayumi K; Peelen, Linda M; Moons, Karl G M; Dittus, Robert S; Ely, E Wesley; Vasilevskis, Eduard E

    2018-03-24

    The goal of this study was to develop and validate a dynamic risk model to predict daily changes in acute brain dysfunction (ie, delirium and coma), discharge, and mortality in ICU patients. Using data from a multicenter prospective ICU cohort, a daily acute brain dysfunction-prediction model (ABD-pm) was developed by using multinomial logistic regression that estimated 15 transition probabilities (from one of three brain function states [normal, delirious, or comatose] to one of five possible outcomes [normal, delirious, comatose, ICU discharge, or died]) using baseline and daily risk factors. Model discrimination was assessed by using predictive characteristics such as negative predictive value (NPV). Calibration was assessed by plotting empirical vs model-estimated probabilities. Internal validation was performed by using a bootstrap procedure. Data were analyzed from 810 patients (6,711 daily transitions). The ABD-pm included individual risk factors: mental status, age, preexisting cognitive impairment, baseline and daily severity of illness, and daily administration of sedatives. The model yielded very high NPVs for "next day" delirium (NPV: 0.823), coma (NPV: 0.892), normal cognitive state (NPV: 0.875), ICU discharge (NPV: 0.905), and mortality (NPV: 0.981). The model demonstrated outstanding calibration when predicting the total number of patients expected to be in any given state across predicted risk. We developed and internally validated a dynamic risk model that predicts the daily risk for one of three cognitive states, ICU discharge, or mortality. The ABD-pm may be useful for predicting the proportion of patients for each outcome state across entire ICU populations to guide quality, safety, and care delivery activities. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  17. Elastic K-means using posterior probability

    PubMed Central

    Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

    2017-01-01

    The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with a soft capability whereby each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. We therefore integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model. PMID:29240756

  18. Combining the LKB NTCP model with radiosensitivity parameters to characterize toxicity of radionuclides based on a multiclonogen kidney model: a theoretical assessment.

    PubMed

    Lin, Hui; Jing, Jia; Xu, Liangfeng; Wu, Dongsheng; Xu, Yuanying

    2012-06-01

    The Lyman-Kutcher-Burman (LKB) normal tissue complication probability (NTCP) model is often used to estimate the damage level to normal tissue. However, it does not explicitly involve the influence of radiosensitivity parameters. This work replaces the generalized mean equivalent uniform dose (gEUD) with the equivalent uniform dose (EUD) in the LKB model to investigate the effect of a variety of radiobiological parameters on the NTCP, in order to characterize the toxicity of five types of radionuclides. The dose for 50% complication probability (D50) is replaced by the corresponding EUD for 50% complication probability (EUD50). The behavior of a variety of radiobiological quantities, such as the biologically effective dose (BED), NTCP, and EUD, for five radioisotopes ((131)I, (186)Re, (188)Re, (90)Y, and (67)Cu) is investigated for various radiosensitivity parameters such as the intrinsic radiosensitivity α, the alpha-beta ratio α/β, the cell repair half-time, and the mean clonogen doubling time. The high-energy beta emitters ((90)Y and (188)Re) have a high initial dose rate and mean absorbed dose per injected activity in the kidney, and their kidney toxicity should be of greater concern if they are excreted through the kidneys. The radiobiological effect of (188)Re changes most sharply with the radiobiological parameters because of its high-energy electrons and very short physical half-life. The dose for a 50% probability of injury within 5 years (D50/5) of 28 Gy for whole-kidney irradiation should be adjusted for different radionuclides and for the different radiosensitivity of individuals. The D50/5 of individuals with low α/β, low α, or a low biological clearance half-time will be less than 28 Gy. The 50% complication probability dose for (67)Cu and (188)Re could be 25 Gy and 22 Gy, respectively. The same mean absorbed dose generally corresponds to different degrees of damage for tissues of different radiosensitivity and for different radionuclides. The influence of the various radiobiological parameters should therefore be taken into consideration in the NTCP model.

  19. Computer simulation of the probability that endangered whales will interact with oil spills

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, M.; Jayko, K.; Bowles, A.

    1987-03-01

    A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration and diving-surfacing models, and an oil-spill trajectory model comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The movement of a whale point is governed by a random walk algorithm which stochastically follows a migratory pathway. The oil-spill model, developed under a series of other contracts, accounts for transport and spreading behavior in open water and in the presence of sea ice. Historical wind records and heavy, normal, or light ice cover data sets are selected at random to provide stochastic oil-spill scenarios for whale-oil interaction simulations.

  20. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected returns are multivariate normally distributed and the investor is risk averse. However, this model has not been extensively used in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed, and it is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi-Absolute Deviation (M-LSAD) model proposed by Speranza [3]. We compare these models to determine which gives the most appropriate solution for investors.

  1. Modelling dynamics with context-free grammars

    NASA Astrophysics Data System (ADS)

    García-Huerta, Juan-M.; Jiménez-Hernández, Hugo; Herrera-Navarro, Ana-M.; Hernández-Díaz, Teresa; Terol-Villalobos, Ivan

    2014-03-01

    This article presents a strategy to model the dynamics of vehicles on a freeway. The proposal consists of encoding the movement as a set of finite states. A watershed-based segmentation is used to localize regions with a high probability of motion. Each state represents a proportion of the camera projection in a two-dimensional space, and each state is associated with a symbol, so that any combination of symbols is expressed as a language. Starting from a sequence of symbols, a context-free grammar is inferred with a linear algorithm. This grammar represents a hierarchical view of common sequences observed in the scene. The most probable grammar rules express rules associated with normal movement behavior. Less probable rules provide a way to quantify uncommon behaviors, which may require more attention. Finally, sequences of symbols that do not match the grammar rules may express uncommon (abnormal) behaviors. The grammar is inferred from several sequences of images taken from a freeway. The testing process uses the sequence of symbols emitted by the scenario, matching the grammar rules with common freeway behaviors. Detecting abnormal versus normal behaviors is thus managed as the task of verifying whether a word generated by the scenario is recognized by the grammar.

  2. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    NASA Astrophysics Data System (ADS)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open-source software package R and select the best models using the strategy proposed by Käärik and Umbleja (2011). We observe that the daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (by maximum likelihood; a computational problem), but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible in the other direction. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
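
    A minimal sketch of the fit-and-test step using scipy's normal inverse Gaussian implementation on synthetic data (the study itself uses R and actual closing-price series).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Stand-in for a series of daily closing prices: here just synthetic NIG draws
data = stats.norminvgauss.rvs(a=2.0, b=0.3, loc=100.0, scale=5.0,
                              size=1000, random_state=rng)

# Maximum-likelihood fit of the four NIG parameters
a, b, loc, scale = stats.norminvgauss.fit(data)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution
ks_stat, p_value = stats.kstest(data, 'norminvgauss', args=(a, b, loc, scale))
print(f"fitted (a, b, loc, scale) = ({a:.2f}, {b:.2f}, {loc:.2f}, {scale:.2f})")
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")
```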

  3. Disordered models of acquired dyslexia

    NASA Astrophysics Data System (ADS)

    Virasoro, M. A.

    We show that certain specific correlations in the probability of errors observed in dyslexic patients, normally explained by introducing additional complexity into the model of the reading process, are typical of any neural network system that has learned to deal with a quasiregular environment. On the other hand, we show that in neural networks the more regular behavior does not naturally become the default behavior.

  4. Does Litter Size Variation Affect Models of Terrestrial Carnivore Extinction Risk and Management?

    PubMed Central

    Devenish-Nelson, Eleanor S.; Stephens, Philip A.; Harris, Stephen; Soulsbury, Carl; Richards, Shane A.

    2013-01-01

    Background Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. Methodology/Principal Findings We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species – the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. Conclusion/Significance These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes. PMID:23469140

  5. Does litter size variation affect models of terrestrial carnivore extinction risk and management?

    PubMed

    Devenish-Nelson, Eleanor S; Stephens, Philip A; Harris, Stephen; Soulsbury, Carl; Richards, Shane A

    2013-01-01

    Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species - the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes.

  6. Conflict Probability Estimation for Free Flight

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Erzberger, Heinz

    1996-01-01

    The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction, however, and becomes less certain the farther in advance the prediction is made. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
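
    The paper derives an analytical solution after combining the two covariances and rotating coordinates; the sketch below estimates the same quantity by Monte Carlo, as the probability that the normally distributed relative position falls inside a protection radius. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Predicted relative position (km) and the prediction-error covariances of the
# two aircraft, combined by summation for independent errors
rel_mean = np.array([6.0, 3.0])
cov_a = np.array([[4.0, 1.0], [1.0, 2.0]])
cov_b = np.array([[3.0, 0.5], [0.5, 3.0]])
cov_rel = cov_a + cov_b

protection_radius = 9.26        # km (about 5 nautical miles)

# Monte Carlo estimate of P(|relative position| < protection radius)
samples = rng.multivariate_normal(rel_mean, cov_rel, size=500_000)
inside = np.linalg.norm(samples, axis=1) < protection_radius
print(f"conflict probability ~ {inside.mean():.3f}")
```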

  7. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image database layers.
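
    A minimal sketch of item (1): folding collateral information into Gaussian maximum-likelihood classification as class prior probabilities, i.e., the MAP rule argmax_c log p(x | c) + log P(c). The class densities and priors below are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two land-cover classes with Gaussian spectral densities (illustrative values)
classes = {
    "conifer":  multivariate_normal(mean=[40.0, 70.0], cov=[[25, 5], [5, 30]]),
    "hardwood": multivariate_normal(mean=[55.0, 60.0], cov=[[30, 8], [8, 25]]),
}

# Prior probabilities supplied by collateral data (e.g., a terrain stratum)
priors = {"conifer": 0.8, "hardwood": 0.2}

pixel = np.array([48.0, 64.0])      # image density values for one pixel

# Maximum a posteriori rule: argmax_c  log p(x | c) + log P(c)
scores = {c: d.logpdf(pixel) + np.log(priors[c]) for c, d in classes.items()}
print(max(scores, key=scores.get), scores)
```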

  8. Evidential reasoning research on intrusion detection

    NASA Astrophysics Data System (ADS)

    Wang, Xianpei; Xu, Hua; Zheng, Sheng; Cheng, Anyu

    2003-09-01

    In this paper, we focus on two fields: the Dempster-Shafer (D-S) theory of evidence and network intrusion detection. We discuss how to apply this probabilistic reasoning, as an AI technology, to an Intrusion Detection System (IDS). The paper establishes the application model, describes the new mechanism of reasoning and decision-making, and analyses how to implement the model based on the detection of synscan activities on the network. The results suggest that, provided reasonable probability values are assigned at the beginning, the engine can, according to the rules of evidence combination and hierarchical reasoning, compute belief values and finally inform administrators of the nature of the traced activities: intrusions, normal activities, or abnormal activities.
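
    A minimal sketch of Dempster's rule of combination over the three activity hypotheses mentioned above, with made-up basic probability assignments from two hypothetical evidence sources:

```python
from itertools import product

FRAME = frozenset({"intrusion", "normal", "abnormal"})

def combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments (dicts mapping
    frozenset hypotheses to mass) and renormalize away the conflicting mass."""
    combined = {}
    conflict = 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Hypothetical evidence: one sensor strongly flags scanning, the other is less sure
m_scan  = {frozenset({"intrusion"}): 0.6,
           frozenset({"intrusion", "abnormal"}): 0.3,
           FRAME: 0.1}
m_other = {frozenset({"normal"}): 0.2,
           frozenset({"intrusion", "abnormal"}): 0.5,
           FRAME: 0.3}

for hyp, mass in combine(m_scan, m_other).items():
    print(sorted(hyp), round(mass, 3))
```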

  9. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
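
    A rough numerical sketch of the stated setup: Poisson photocounts, a fixed background for the 'off' bit, an 'on' signal whose intensity fades log-normally, and a brute-force search for the threshold minimizing the bit error probability. All parameter values are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n_signal = 40.0        # mean signal photocount for an "on" bit (no fading)
n_background = 8.0     # mean background photocount (present in both states)
sigma_chi2 = 0.2       # log-amplitude variance of the scintillation

# Log-normal fading of the signal intensity, normalized to unit mean:
# I = exp(2*chi) with chi ~ Normal(-sigma_chi2, sigma_chi2)
chi = rng.normal(-sigma_chi2, np.sqrt(sigma_chi2), size=20_000)
fading = np.exp(2.0 * chi)

thresholds = np.arange(5, 60)
p_error = np.empty(thresholds.size)
for i, t in enumerate(thresholds):
    # "off" bit error: background counts reach the threshold (false alarm)
    p_fa = stats.poisson.sf(t - 1, n_background)
    # "on" bit error: counts fall below threshold, averaged over the fading (miss)
    p_miss = stats.poisson.cdf(t - 1, n_background + n_signal * fading).mean()
    p_error[i] = 0.5 * (p_fa + p_miss)

best = thresholds[np.argmin(p_error)]
print(f"optimum threshold ~ {best} counts, BER ~ {p_error.min():.2e}")
```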

  10. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for non-optimum threshold detection systems were also investigated.

  11. A quantum probability perspective on borderline vagueness.

    PubMed

    Blutner, Reinhard; Pothos, Emmanuel M; Bruza, Peter

    2013-10-01

    The term "vagueness" describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno's sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental finding and extends Alxatib's and Pelletier's () theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier and its probabilistic pendant is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can be explained then as a quantum interference phenomenon. © 2013 Cognitive Science Society, Inc.

  12. Neuroimaging Characteristics of Small-Vessel Disease in Older Adults with Normal Cognition, Mild Cognitive Impairment, and Alzheimer Disease.

    PubMed

    Mimenza-Alvarado, Alberto; Aguilar-Navarro, Sara G; Yeverino-Castro, Sara; Mendoza-Franco, César; Ávila-Funes, José Alberto; Román, Gustavo C

    2018-01-01

    Cerebral small-vessel disease (SVD) represents the most frequent type of vascular brain lesions, often coexisting with Alzheimer disease (AD). By quantifying white matter hyperintensities (WMH) and hippocampal and parietal atrophy, we aimed to describe the prevalence and severity of SVD among older adults with normal cognition (NC), mild cognitive impairment (MCI), and probable AD and to describe associated risk factors. This study included 105 older adults evaluated with magnetic resonance imaging and clinical and neuropsychological tests. We used the Fazekas scale (FS) for quantification of WMH, the Scheltens scale (SS) for hippocampal atrophy, and the Koedam scale (KS) for parietal atrophy. Logistic regression models were performed to determine the association between FS, SS, and KS scores and the presence of NC, MCI, or probable AD. Compared to NC subjects, SVD was more prevalent in MCI and probable AD subjects. After adjusting for confounding factors, logistic regression showed a positive association between higher scores on the FS and probable AD (OR = 7.6, 95% CI 2.7-20, p < 0.001). With the use of the SS and KS (OR = 4.5, 95% CI 3.5-58, p = 0.003 and OR = 8.9, 95% CI 1-72, p = 0.04, respectively), the risk also remained significant for probable AD. These results suggest an association between severity of vascular brain lesions and neurodegeneration.

  13. Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin

    USGS Publications Warehouse

    Massada, Avi Bar; Radeloff, Volker C.; Stewart, Susan I.; Hawbaker, Todd J.

    2009-01-01

    The rapid growth of housing in and near the wildland–urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT) in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown. Our approach to modeling wildfire risk to structures can aid fire risk reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions.

  14. Normal Tissue Complication Probability (NTCP) Modelling of Severe Acute Mucositis using a Novel Oral Mucosal Surface Organ at Risk.

    PubMed

    Dean, J A; Welsh, L C; Wong, K H; Aleksic, A; Dunne, E; Islam, M R; Patel, A; Patel, P; Petkar, I; Phillips, I; Sham, J; Schick, U; Newbold, K L; Bhide, S A; Harrington, K J; Nutting, C M; Gulliford, S L

    2017-04-01

    A normal tissue complication probability (NTCP) model of severe acute mucositis would be highly useful to guide clinical decision making and inform radiotherapy planning. We aimed to improve upon our previous model by using a novel oral mucosal surface organ at risk (OAR) in place of an oral cavity OAR. Predictive models of severe acute mucositis were generated using radiotherapy dose to the oral cavity OAR or mucosal surface OAR and clinical data. Penalised logistic regression and random forest classification (RFC) models were generated for both OARs and compared. Internal validation was carried out with 100-iteration stratified shuffle split cross-validation, using multiple metrics to assess different aspects of model performance. Associations between treatment covariates and severe mucositis were explored using RFC feature importance. Penalised logistic regression and RFC models using the oral cavity OAR performed at least as well as the models using mucosal surface OAR. Associations between dose metrics and severe mucositis were similar between the mucosal surface and oral cavity models. The volumes of oral cavity or mucosal surface receiving intermediate and high doses were most strongly associated with severe mucositis. The simpler oral cavity OAR should be preferred over the mucosal surface OAR for NTCP modelling of severe mucositis. We recommend minimising the volume of mucosa receiving intermediate and high doses, where possible. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  15. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
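
    A stripped-down sketch of the idea, assuming equal variances in both components and omitting the genetic and permanent environmental random effects of the paper: a two-component normal mixture fitted by Gibbs sampling to simulated scores, returning posterior draws of the mixing proportion and component means. All numbers are hypothetical.

        import numpy as np
        from scipy.stats import norm, invgamma

        rng = np.random.default_rng(2)

        # Simulated scores: mixture of "healthy" and "diseased" components (hypothetical values).
        n, p_true = 500, 0.8
        z_true = rng.random(n) < p_true
        y = np.where(z_true, rng.normal(3.0, 1.0, n), rng.normal(6.0, 1.0, n))

        # Gibbs sampler for a two-component normal mixture with a common variance.
        mu = np.array([2.0, 7.0])          # component means (initial values)
        sigma2, pm = 1.0, 0.5              # common variance and mixing proportion
        n_iter, keep = 2000, []

        for it in range(n_iter):
            # 1) Sample component membership given the current parameters.
            lik0 = pm * norm.pdf(y, mu[0], np.sqrt(sigma2))
            lik1 = (1 - pm) * norm.pdf(y, mu[1], np.sqrt(sigma2))
            prob0 = lik0 / (lik0 + lik1)
            z = rng.random(n) >= prob0                 # False -> component 0, True -> component 1
            # 2) Sample the mixing proportion (Beta posterior under a uniform prior).
            n1 = z.sum()
            pm = rng.beta(1 + n - n1, 1 + n1)
            # 3) Sample the component means (flat prior -> normal posterior).
            for k in (0, 1):
                yk = y[z == k]
                if len(yk):
                    mu[k] = rng.normal(yk.mean(), np.sqrt(sigma2 / len(yk)))
            # 4) Sample the common variance (inverse-gamma posterior).
            resid = y - mu[z.astype(int)]
            sigma2 = invgamma.rvs(a=0.5 * n, scale=0.5 * np.sum(resid ** 2), random_state=rng)
            if it >= 500:
                keep.append((pm, mu[0], mu[1], sigma2))

        print("Posterior means (Pm, mu0, mu1, sigma2):", np.mean(keep, axis=0).round(2))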

  16. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tucker, Susan L.; Liu, H. Helen; Wang, Shulian

    Purpose: The aim of this study was to investigate the effect of radiation dose distribution in the lung on the risk of postoperative pulmonary complications among esophageal cancer patients. Methods and Materials: We analyzed data from 110 patients with esophageal cancer treated with concurrent chemoradiotherapy followed by surgery at our institution from 1998 to 2003. The endpoint for analysis was postsurgical pneumonia or acute respiratory distress syndrome. Dose-volume histograms (DVHs) and dose-mass histograms (DMHs) for the whole lung were used to fit normal-tissue complication probability (NTCP) models, and the quality of the fits was compared using bootstrap analysis. Results: Normal-tissue complication probability modeling identified that the risk of postoperative pulmonary complications was most significantly associated with small absolute volumes of lung spared from doses ≥5 Gy (VS5), that is, exposed to doses <5 Gy. However, bootstrap analysis found no significant difference between the quality of this model and fits based on other dosimetric parameters, including mean lung dose, effective dose, and relative volume of lung receiving ≥5 Gy, probably because of correlations among these factors. The choice of DVH vs. DMH or the use of fractionation correction did not significantly affect the results of the NTCP modeling. The parameter values estimated for the Lyman NTCP model were as follows (with 95% confidence intervals in parentheses): n = 1.85 (0.04, ∞), m = 0.55 (0.22, 1.02), and D50 = 17.5 Gy (9.4 Gy, 102 Gy). Conclusions: In this cohort of esophageal cancer patients, several dosimetric parameters, including mean lung dose, effective dose, and absolute volume of lung receiving <5 Gy, provided similar descriptions of the risk of postoperative pulmonary complications as a function of radiation dose distribution in the lung.
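
    For reference, the Lyman (LKB) NTCP form quoted here combines a generalized-EUD volume reduction with a probit dose response; the sketch below evaluates it with the fitted parameters from the abstract on a purely hypothetical whole-lung DVH.

        import numpy as np
        from scipy.stats import norm

        def lyman_ntcp(doses, volumes, td50, m, n):
            """Lyman-Kutcher-Burman NTCP: gEUD volume reduction followed by a probit dose response."""
            v = np.asarray(volumes, dtype=float)
            v = v / v.sum()                          # fractional volumes of the DVH bins
            d_eff = np.sum(v * np.asarray(doses, dtype=float) ** (1.0 / n)) ** n
            t = (d_eff - td50) / (m * td50)
            return norm.cdf(t)

        # Hypothetical whole-lung DVH (bin doses in Gy and relative volumes), illustrative only.
        doses   = [1.0, 3.0, 8.0, 15.0, 25.0, 40.0]
        volumes = [0.35, 0.25, 0.15, 0.12, 0.08, 0.05]

        # Parameter values quoted in the abstract: n = 1.85, m = 0.55, D50 = 17.5 Gy.
        print(f"NTCP = {lyman_ntcp(doses, volumes, td50=17.5, m=0.55, n=1.85):.2f}")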

  18. VizieR Online Data Catalog: A catalog of exoplanet physical parameters (Foreman-Mackey+, 2014)

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, D.; Hogg, D. W.; Morton, T. D.

    2017-05-01

    The first ingredient for any probabilistic inference is a likelihood function, a description of the probability of observing a specific data set given a set of model parameters. In this particular project, the data set is a catalog of exoplanet measurements and the model parameters are the values that set the shape and normalization of the occurrence rate density. (2 data files).

  19. A multilayer approach for price dynamics in financial markets

    NASA Astrophysics Data System (ADS)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2017-02-01

    We introduce a new Self-Organized Criticality (SOC) model for simulating price evolution in an artificial financial market, based on a multilayer network of traders. The model also implements, in a quite realistic way with respect to previous studies, the order book dynamics, by considering two assets with variable fundamental prices. Fat tails in the probability distributions of normalized returns are observed, together with other features of real financial markets.

  20. Marginal Structural Cox Models for Estimating the Association Between β-Interferon Exposure and Disease Progression in a Multiple Sclerosis Cohort

    PubMed Central

    Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Zhao, Yinshan; Shirani, Afsaneh; Kingwell, Elaine; Evans, Charity; van der Kop, Mia; Oger, Joel; Tremlett, Helen

    2014-01-01

    Longitudinal observational data are required to assess the association between exposure to β-interferon medications and disease progression among relapsing-remitting multiple sclerosis (MS) patients in the “real-world” clinical practice setting. Marginal structural Cox models (MSCMs) can provide distinct advantages over traditional approaches by allowing adjustment for time-varying confounders such as MS relapses, as well as baseline characteristics, through the use of inverse probability weighting. We assessed the suitability of MSCMs to analyze data from a large cohort of 1,697 relapsing-remitting MS patients in British Columbia, Canada (1995–2008). In the context of this observational study, which spanned more than a decade and involved patients with a chronic yet fluctuating disease, the recently proposed “normalized stabilized” weights were found to be the most appropriate choice of weights. Using this model, no association between β-interferon exposure and the hazard of disability progression was found (hazard ratio = 1.36, 95% confidence interval: 0.95, 1.94). For sensitivity analyses, truncated normalized unstabilized weights were used in additional MSCMs and to construct inverse probability weight-adjusted survival curves; the findings did not change. Additionally, qualitatively similar conclusions from approximation approaches to the weighted Cox model (i.e., MSCM) extend confidence in the findings. PMID:24939980

  1. A Markov chain model to evaluate the effect of CYP3A5 and ABCB1 polymorphisms on adverse events associated with tacrolimus in pediatric renal transplantation.

    PubMed

    Sy, Sherwin K B; Heuberger, Jules; Shilbayeh, Sireen; Conrado, Daniela J; Derendorf, Hartmut

    2013-10-01

    The SNP A6986G of the CYP3A5 gene (*3) results in a non-functional protein due to a splicing defect whereas the C3435T was associated with variable expression of the ABCB1 gene, due to protein instability. Part of the large interindividual variability in tacrolimus efficacy and toxicity can be accounted for by these genetic factors. Seventy-two individuals were examined for A6986G and C3435T polymorphism using a PCR-RFLP-based technique to estimate genotype and allele frequencies in the Jordanian population. The association of age, hematocrit, platelet count, CYP3A5, and ABCB1 polymorphisms with tacrolimus dose- and body-weight-normalized levels in the subset of 38 pediatric renal transplant patients was evaluated. A Markov model was used to evaluate the time-dependent probability of an adverse event occurrence by CYP3A5 phenotypes and ABCB1 genotypes. The time-dependent probability of adverse event was about double in CYP3A5 non-expressors compared to the expressors for the first 12 months of therapy. The CYP3A5 non-expressors had higher corresponding normalized tacrolimus levels compared to the expressors in the first 3 months. The correlation trend between probability of adverse events and normalized tacrolimus concentrations for the two CYP3A5 phenotypes persisted for the first 9 months of therapy. The differences among ABCB1 genotypes in terms of adverse events and normalized tacrolimus levels were only observed in the first 3 months of therapy. The information on CYP3A5 genotypes and tacrolimus dose requirement is important in designing effective programs toward management of tacrolimus side effects particularly for the initial dose when tacrolimus blood levels are not available for therapeutic drug monitoring.

  2. A probability distribution model of tooth pits for evaluating time-varying mesh stiffness of pitting gears

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Liu, Zongyao; Wang, Delong; Yang, Xiao; Liu, Huan; Lin, Jing

    2018-06-01

    Tooth damage often causes a reduction in gear mesh stiffness. Thus time-varying mesh stiffness (TVMS) can be treated as an indication of gear health conditions. This study is devoted to investigating the mesh stiffness variations of a pair of external spur gears with tooth pitting, and proposes a new model for describing tooth pitting based on probability distribution. In the model, considering the appearance and development process of tooth pitting, we model the pitting on the surface of spur gear teeth as a series of pits with a uniform distribution in the direction of tooth width and a normal distribution in the direction of tooth height. In addition, four pitting degrees, from no pitting to severe pitting, are modeled. Finally, influences of tooth pitting on TVMS are analyzed in detail and the proposed model is validated by comparison with a finite element model. The comparison results show that the proposed model is effective for the TVMS evaluation of pitting gears.
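
    A small sketch of the stated distributional assumption, with hypothetical tooth dimensions and pit counts: pit centers are drawn uniformly across the tooth width and normally along the tooth height, and a crude pitted-area fraction is derived as the kind of severity input a mesh-stiffness model could use.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical tooth dimensions (mm) and pit statistics, for illustration only.
        tooth_width, tooth_height = 20.0, 8.0
        n_pits = 200                               # crude proxy for the pitting degree
        pit_radius = 0.3

        # Pit centers: uniform across the tooth width, normal along the tooth height
        # (clipped to the face), mirroring the distribution assumption in the abstract.
        x = rng.uniform(0.0, tooth_width, n_pits)
        y = np.clip(rng.normal(loc=0.5 * tooth_height, scale=0.15 * tooth_height, size=n_pits),
                    0.0, tooth_height)
        pits = np.column_stack([x, y])

        # Two simple severity indicators: pitted-area fraction (overlap ignored) and the
        # share of pits concentrated in the central third of the tooth height.
        pitted_area = n_pits * np.pi * pit_radius ** 2
        central = np.mean(np.abs(pits[:, 1] - 0.5 * tooth_height) < tooth_height / 6)
        print(f"Approximate pitted area fraction: {pitted_area / (tooth_width * tooth_height):.2%}")
        print(f"Pits within the central third of the tooth height: {central:.0%}")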

  3. Symbolic Model-Based SAR Feature Analysis and Change Detection

    DTIC Science & Technology

    1992-02-01

    normalization factor described above in the Dempster rule of combination. Another problem is that in certain cases D-S overweights prior probabilities compared... Beaufort Sea data set and the Peru data set. The Phoenix results are described in section 6.2.2, including a partial trace of the operation of the...

  4. Major Accidents (Gray Swans) Likelihood Modeling Using Accident Precursors and Approximate Reasoning.

    PubMed

    Khakzad, Nima; Khan, Faisal; Amyotte, Paul

    2015-07-01

    Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents has not been so well-established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of relevant data on major accidents, since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.

  5. Extrapolation of Normal Tissue Complication Probability for Different Fractionations in Liver Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tai An; Erickson, Beth; Li, X. Allen

    2009-05-01

    Purpose: The ability to predict normal tissue complication probability (NTCP) is essential for NTCP-based treatment planning. The purpose of this work is to estimate the Lyman NTCP model parameters for liver irradiation from published clinical data of different fractionation regimens. A new expression of normalized total dose (NTD) is proposed to convert NTCP data between different treatment schemes. Method and Materials: The NTCP data of radiation-induced liver disease (RILD) from external beam radiation therapy for primary liver cancer patients were selected for analysis. The data were collected from 4 institutions for tumor sizes in the range of 8-10 cm. The dose per fraction ranged from 1.5 Gy to 6 Gy. A modified linear-quadratic model with two components corresponding to radiosensitive and radioresistant cells in the normal liver tissue was proposed to understand the new NTD formalism. Results: There are five parameters in the model: TD50, m, n, α/β and f. With the two parameters n and α/β fixed at 1.0 and 2.0 Gy, respectively, the extracted parameters from the fitting are TD50(1) = 40.3 ± 8.4 Gy, m = 0.36 ± 0.09, f = 0.156 ± 0.074 Gy and TD50(1) = 23.9 ± 5.3 Gy, m = 0.41 ± 0.15, f = 0.0 ± 0.04 Gy for patients with liver cirrhosis scores of Child-Pugh A and Child-Pugh B, respectively. The fitting results showed that the liver cirrhosis score significantly affects the fractional dose dependence of NTD. Conclusion: The Lyman parameters generated presently and the new form of NTD may be used to predict NTCP for treatment planning of innovative liver irradiation with different fractionations, such as hypofractionated stereotactic body radiation therapy.
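
    The paper's new NTD expression (built on a modified two-component linear-quadratic model) is not given in the abstract; as a point of comparison, the sketch below shows only the standard LQ-based normalized total dose conversion for two hypothetical liver fractionation schemes.

        def ntd(total_dose, dose_per_fraction, alpha_beta=2.0, reference_fx=2.0):
            """Standard LQ-based normalized total dose (equivalent dose in reference_fx-Gy fractions)."""
            return total_dose * (dose_per_fraction + alpha_beta) / (reference_fx + alpha_beta)

        # Convert two hypothetical liver regimens to 2-Gy-equivalent dose with alpha/beta = 2 Gy.
        for d_fx, n_fx in [(1.5, 30), (6.0, 8)]:
            total = d_fx * n_fx
            print(f"{n_fx} x {d_fx} Gy = {total:.0f} Gy physical -> {ntd(total, d_fx):.1f} Gy NTD")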

  6. Fire frequency, area burned, and severity: A quantitative approach to defining a normal fire year

    USGS Publications Warehouse

    Lutz, J.A.; Key, C.H.; Kolden, C.A.; Kane, J.T.; van Wagtendonk, J.W.

    2011-01-01

    Fire frequency, area burned, and fire severity are important attributes of a fire regime, but few studies have quantified the interrelationships among them in evaluating a fire year. Although area burned is often used to summarize a fire season, burned area may not be well correlated with either the number or ecological effect of fires. Using the Landsat data archive, we examined all 148 wildland fires (prescribed fires and wildfires) >40 ha from 1984 through 2009 for the portion of the Sierra Nevada centered on Yosemite National Park, California, USA. We calculated mean fire frequency and mean annual area burned from a combination of field- and satellite-derived data. We used the continuous probability distribution of the differenced Normalized Burn Ratio (dNBR) values to describe fire severity. For fires >40 ha, fire frequency, annual area burned, and cumulative severity were consistent in only 13 of 26 years (50 %), but all pair-wise comparisons among these fire regime attributes were significant. Borrowing from long-established practice in climate science, we defined "fire normals" to be the 26 year means of fire frequency, annual area burned, and the area under the cumulative probability distribution of dNBR. Fire severity normals were significantly lower when they were aggregated by year compared to aggregation by area. Cumulative severity distributions for each year were best modeled with Weibull functions (all 26 years, r² ≥ 0.99; P < 0.001). Explicit modeling of the cumulative severity distributions may allow more comprehensive modeling of climate-severity and area-severity relationships. Together, the three metrics of number of fires, size of fires, and severity of fires provide land managers with a more comprehensive summary of a given fire year than any single metric.

  7. Towards a model-based patient selection strategy for proton therapy: External validation of photon-derived Normal Tissue Complication Probability models in a head and neck proton therapy cohort

    PubMed Central

    Blanchard, P; Wong, AJ; Gunn, GB; Garden, AS; Mohamed, ASR; Rosenthal, DI; Crutison, J; Wu, R; Zhang, X; Zhu, XR; Mohan, R; Amin, MV; Fuller, CD; Frank, SJ

    2017-01-01

    Objective To externally validate head and neck cancer (HNC) photon-derived normal tissue complication probability (NTCP) models in patients treated with proton beam therapy (PBT). Methods This prospective cohort consisted of HNC patients treated with PBT at a single institution. NTCP models were selected based on the availability of data for validation and evaluated using the leave-one-out cross-validated area under the curve (AUC) for the receiver operating characteristics curve. Results 192 patients were included. The most prevalent tumor site was oropharynx (n=86, 45%), followed by sinonasal (n=28), nasopharyngeal (n=27) or parotid (n=27) tumors. Apart from the prediction of acute mucositis (reduction of AUC of 0.17), the models overall performed well. The validation (PBT) AUC and the published AUC were respectively 0.90 versus 0.88 for feeding tube 6 months post-PBT; 0.70 versus 0.80 for physician rated dysphagia 6 months post-PBT; 0.70 versus 0.80 for dry mouth 6 months post-PBT; and 0.73 versus 0.85 for hypothyroidism 12 months post-PBT. Conclusion While the drop in NTCP model performance was expected in PBT patients, the models showed robustness and remained valid. Further work is warranted, but these results support the validity of the model-based approach for treatment selection for HNC patients. PMID:27641784

  8. Ensemble learning and model averaging for material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.

    2017-05-01

    In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally by comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that the probabilities can be summed over spectra for any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know if the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
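
    A toy version of the Bayesian identification step, with a hypothetical five-band library instead of 159-band Forest Radiance spectra: each library entry is a normal distribution, posteriors are computed with Bayes' theorem, and class probabilities are obtained by summing posteriors over the spectra in each class.

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(4)
        n_bands = 5                                   # toy spectral dimension (real imagery has ~159 bands)

        # Hypothetical identification library: each entry is a material spectrum with a class label
        # and a normal distribution (mean spectrum and covariance) describing its variability.
        library = []
        for cls, base in [("fabric", 0.3), ("fabric", 0.35), ("paint", 0.6), ("vegetation", 0.8)]:
            mean = base + 0.02 * rng.standard_normal(n_bands)
            library.append((cls, mean, 0.01 * np.eye(n_bands)))

        def material_probabilities(spectrum, prior=None):
            """Posterior probability of each library spectrum given the observed spectrum."""
            prior = prior or [1.0 / len(library)] * len(library)
            lik = np.array([p * multivariate_normal.pdf(spectrum, m, c)
                            for p, (_, m, c) in zip(prior, library)])
            return lik / lik.sum()

        observed = 0.33 + 0.02 * rng.standard_normal(n_bands)   # pixel spectrum of interest (hypothetical)
        post = material_probabilities(observed)

        # Summing posteriors over all spectra in a class gives the class probability.
        for cls in {c for c, _, _ in library}:
            p_class = post[[c == cls for c, _, _ in library]].sum()
            print(f"P({cls}) = {p_class:.3f}")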

  9. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.

  10. Collapse susceptibility mapping in karstified gypsum terrain (Sivas basin - Turkey) by conditional probability, logistic regression, artificial neural network models

    NASA Astrophysics Data System (ADS)

    Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin

    2010-05-01

    This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models, and they were then compared by means of their validations. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears more accurate than the other models, and the results also showed that artificial neural networks are a useful tool in the preparation of collapse susceptibility maps and highly compatible with GIS operating features. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.

  11. A Performance Comparison on the Probability Plot Correlation Coefficient Test using Several Plotting Positions for GEV Distribution.

    NASA Astrophysics Data System (ADS)

    Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng

    2014-05-01

    Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method that selects an appropriate probability model for a given data set. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows comparatively high rejection power. In this study, we focus on the PPCC tests for the GEV distribution, which is widely used worldwide. For the GEV model, several plotting position formulas have been suggested. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte-Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte-Carlo Simulation ACKNOWLEDGEMENTS This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
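
    A minimal sketch of the PPCC statistic itself, shown for the normal case with the Blom plotting position; the paper's GEV-specific plotting positions and critical values are not reproduced.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)

        def ppcc_normal(sample, a=0.375):
            """Probability plot correlation coefficient against the normal distribution,
            using the Blom plotting position (i - a) / (n + 1 - 2a)."""
            x = np.sort(np.asarray(sample, dtype=float))
            n = len(x)
            pp = (np.arange(1, n + 1) - a) / (n + 1 - 2 * a)    # plotting positions
            q = norm.ppf(pp)                                     # theoretical quantiles
            return np.corrcoef(x, q)[0, 1]

        # A normal sample scores near 1, while a strongly skewed sample scores lower, which is
        # how the test statistic separates well-fitting from poorly fitting models.
        print(f"normal sample:    PPCC = {ppcc_normal(rng.normal(10, 2, 100)):.4f}")
        print(f"lognormal sample: PPCC = {ppcc_normal(rng.lognormal(1.0, 0.8, 100)):.4f}")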

  12. Early detection of probable idiopathic Parkinson's disease: I. development of a diagnostic test battery.

    PubMed

    Montgomery, Erwin B; Koller, William C; LaMantia, Theodora J K; Newman, Mary C; Swanson-Hyland, Elizabeth; Kaszniak, Alfred W; Lyons, Kelly

    2000-05-01

    We developed a test battery as an inexpensive and objective aid for the early diagnosis of idiopathic Parkinson's disease (iPD) and its differential diagnoses. The test battery incorporates tests of motor function, olfaction, and mood. In the motor task, a wrist flexion-and-extension task to different targets, movement velocities were recorded. Olfaction was tested with the University of Pennsylvania Smell Identification Test. Mood was assessed with the Beck Depression Inventory. An initial regression model was developed from the results of 19 normal control subjects and 18 patients with early, mild, probable iPD. Prospective application to an independent validation set of 122 normal control subjects and 103 patients resulted in an 88% specificity rate and 69% sensitivity rate, with an area under the Receiver Operator Characteristic curve of 0.87. Copyright © 2000 Movement Disorder Society.

  13. The bingo model of survivorship: 1. probabilistic aspects.

    PubMed

    Murphy, E A; Trojak, J E; Hou, W; Rohde, C A

    1981-01-01

    A "bingo" model is one in which the pattern of survival of a system is determined by whichever of several components, each with its own particular distribution for survival, fails first. The model is motivated by the study of lifespan in animals. A number of properties of such systems are discussed in general. They include the use of a special criterion of skewness that probably corresponds more closely than traditional measures to what the eye observes in casually inspecting data. This criterion is the ratio, r(h), of the probability density at a point an arbitrary distance, h, above the mode to that an equal distance below the mode. If this ratio is positive for all positive arguments, the distribution is considered positively asymmetrical and conversely. Details of the bingo model are worked out for several types of base distributions: the rectangular, the triangular, the logistic, and by numerical methods, the normal, lognormal, and gamma.

  14. Application of Markov chain model to daily maximum temperature for thermal comfort in Malaysia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordin, Muhamad Asyraf bin Che; Hassan, Husna

    2015-10-22

    The Markov chain's first-order principle has been widely used to model various meteorological fields for prediction purposes. In this study, 14 years (2000-2013) of data on daily maximum temperatures in Bayan Lepas were used. Earlier studies showed that the outdoor thermal comfort range (TCR), based on the physiologically equivalent temperature (PET) index in Malaysia, is less than 34°C; thus the data obtained were classified into two states: a normal state (within the thermal comfort range) and a hot state (above the thermal comfort range). The long-run results show that the probability of the daily maximum temperature exceeding the TCR will be only 2.2%. On the other hand, the probability of the daily temperature lying within the TCR will be 97.8%.
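
    A small sketch of the two-state first-order chain, using a hypothetical transition matrix chosen so that the long-run probabilities come out near the figures quoted above; the stationary distribution is obtained from the eigenvector of the transition matrix.

        import numpy as np

        # Hypothetical first-order transition matrix estimated from daily maximum temperatures:
        # rows/columns are (normal = within the thermal comfort range, hot = above it).
        P = np.array([[0.985, 0.015],
                      [0.650, 0.350]])

        # Long-run (stationary) probabilities: the left eigenvector of P with eigenvalue 1.
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        pi /= pi.sum()
        print(f"Long-run P(normal) = {pi[0]:.3f}, P(hot) = {pi[1]:.3f}")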

  15. A new concept in seismic landslide hazard analysis for practical application

    NASA Astrophysics Data System (ADS)

    Lee, Chyi-Tyi

    2017-04-01

    A seismic landslide hazard model can be constructed using a deterministic approach (Jibson et al., 2000) or a statistical approach (Lee, 2014). Both approaches yield the landslide spatial probability under a certain return-period earthquake. In the statistical approach, our recent study found that there are common patterns among different landslide susceptibility models of the same region. The common susceptibility can reflect the relative stability of slopes in a region; higher susceptibility indicates lower stability. Using the common susceptibility together with an earthquake event landslide inventory and a map of topographically corrected Arias intensity, we can build the relationship among probability of failure, Arias intensity and the susceptibility. This relationship can immediately be used to construct a seismic landslide hazard map for the region for which the empirical relationship was built. If the common susceptibility model is further normalized and the empirical relationship is built with normalized susceptibility, then the empirical relationship may be applied in practice to a different region with similar tectonic environments and climate conditions. This could be feasible when a region has no existing earthquake-induced landslide data to train the susceptibility model and to build the relationship. It is worth mentioning that a rain-induced landslide susceptibility model has a common pattern similar to that of earthquake-induced landslide susceptibility in the same region, and is usable to build the relationship with an earthquake event landslide inventory and a map of Arias intensity. These will be introduced with examples in the meeting.

  16. Analysis of data from NASA B-57B gust gradient program

    NASA Technical Reports Server (NTRS)

    Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.

    1985-01-01

    Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982 included the calculation of average turbulence parameters, integral length scales, probability density functions, single point autocorrelation coefficients, two point autocorrelation coefficients, normalized autospectra, normalized two point autospectra, and two point cross spectra for gust velocities. The single point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two point spatial turbulence correlations.

  17. Bright high z SnIa: A challenge for ΛCDM

    NASA Astrophysics Data System (ADS)

    Perivolaropoulos, L.; Shafieloo, A.

    2009-06-01

    It has recently been pointed out by Kowalski et al. [Astrophys. J. 686, 749 (2008)] that there is “an unexpected brightness of the SnIa data at z>1.” We quantify this statement by constructing a new statistic which is applicable directly to the type Ia supernova (SnIa) distance moduli. This statistic is designed to pick up systematic brightness trends of SnIa data points with respect to a best fit cosmological model at high redshifts. It is based on binning the normalized differences between the SnIa distance moduli and the corresponding best fit values in the context of a specific cosmological model (e.g. ΛCDM). These differences are normalized by the standard errors of the observed distance moduli. We then focus on the highest redshift bin and extend its size toward lower redshifts until the binned normalized difference (BND) changes sign (crosses 0) at a redshift zc (bin size Nc). The bin size Nc of this crossing (the statistical variable) is then compared with the corresponding crossing bin size Nmc for Monte Carlo data realizations based on the best fit model. We find that the crossing bin size Nc obtained from the Union08 and Gold06 data with respect to the best fit ΛCDM model is anomalously large compared to Nmc of the corresponding Monte Carlo data sets obtained from the best fit ΛCDM in each case. In particular, only 2.2% of the Monte Carlo ΛCDM data sets are consistent with the Gold06 value of Nc while the corresponding probability for the Union08 value of Nc is 5.3%. Thus, according to this statistic, the probability that the high redshift brightness bias of the Union08 and Gold06 data sets is realized in the context of a (w0,w1)=(-1,0) model (ΛCDM cosmology) is less than 6%. The corresponding realization probability in the context of a (w0,w1)=(-1.4,2) model is more than 30% for both the Union08 and the Gold06 data sets, indicating a much better consistency for this model with respect to the BND statistic.
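
    A sketch of the BND crossing statistic on toy data (the distance-modulus model below is only a stand-in, not a fitted ΛCDM): normalized residuals are accumulated from the highest redshift downward, the bin size Nc at the first sign change is recorded, and Monte Carlo realizations under the same model give the reference distribution for Nc.

        import numpy as np

        rng = np.random.default_rng(6)

        def crossing_bin_size(z, mu_obs, mu_model, sigma):
            """Size of the highest-redshift bin when the binned normalized difference first changes sign."""
            order = np.argsort(z)[::-1]                       # start from the highest redshift
            norm_diff = (mu_obs - mu_model)[order] / sigma[order]
            running_mean = np.cumsum(norm_diff) / np.arange(1, len(norm_diff) + 1)
            sign_change = np.sign(running_mean) != np.sign(running_mean[0])
            return int(np.argmax(sign_change)) + 1 if sign_change.any() else len(norm_diff)

        # Toy SnIa-like data: the model is "correct" and the scatter is Gaussian (all numbers hypothetical).
        n = 300
        z = rng.uniform(0.05, 1.5, n)
        sigma = rng.uniform(0.1, 0.3, n)
        mu_model = 5 * np.log10(z * (1 + z)) + 43             # stand-in for the best fit distance modulus
        mu_obs = mu_model + sigma * rng.standard_normal(n)

        nc_data = crossing_bin_size(z, mu_obs, mu_model, sigma)

        # Monte Carlo realizations under the best fit model give the reference distribution of Nc.
        nc_mc = [crossing_bin_size(z, mu_model + sigma * rng.standard_normal(n), mu_model, sigma)
                 for _ in range(2000)]
        p_value = np.mean(np.array(nc_mc) >= nc_data)
        print(f"Nc = {nc_data}, P(Nc_mc >= Nc) = {p_value:.3f}")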

  18. Directional data analysis under the general projected normal distribution

    PubMed Central

    Wang, Fangpo; Gelfand, Alan E.

    2013-01-01

    The projected normal distribution is an under-utilized model for explaining directional data. In particular, the general version provides flexibility, e.g., asymmetry and possible bimodality along with convenient regression specification. Here, we clarify the properties of this general class. We also develop fully Bayesian hierarchical models for analyzing circular data using this class. We show how they can be fit using MCMC methods with suitable latent variables. We show how posterior inference for distributional features such as the angular mean direction and concentration can be implemented as well as how prediction within the regression setting can be handled. With regard to model comparison, we argue for an out-of-sample approach using both a predictive likelihood scoring loss criterion and a cumulative rank probability score criterion. PMID:24046539

  19. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log normal distribution appeared reasonable because nearly all visual psychological data is plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values; an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.

  20. Analytical modeling of electron energy loss spectroscopy of graphene: Ab initio study versus extended hydrodynamic model.

    PubMed

    Djordjević, Tijana; Radović, Ivan; Despoja, Vito; Lyon, Keenan; Borka, Duško; Mišković, Zoran L

    2018-01-01

    We present an analytical modeling of the electron energy loss (EEL) spectroscopy data for free-standing graphene obtained by scanning transmission electron microscope. The probability density for energy loss of fast electrons traversing graphene under normal incidence is evaluated using an optical approximation based on the conductivity of graphene given in the local, i.e., frequency-dependent form derived by both a two-dimensional, two-fluid extended hydrodynamic (eHD) model and an ab initio method. We compare the results for the real and imaginary parts of the optical conductivity in graphene obtained by these two methods. The calculated probability density is directly compared with the EEL spectra from three independent experiments and we find very good agreement, especially in the case of the eHD model. Furthermore, we point out that the subtraction of the zero-loss peak from the experimental EEL spectra has a strong influence on the analytical model for the EEL spectroscopy data. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, sitting on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to the occurrence of extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-Square test (X²), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for the annual, post-monsoon and summer seasons MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of getting an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
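
    A sketch of the general workflow, using a simulated stand-in for the rainfall series and a reduced candidate set: fit each distribution, rank the fits with the Kolmogorov-Smirnov statistic, and read design values for chosen return periods off the best fit model (Pearson type 5 is taken here as the inverse-gamma distribution).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # Stand-in series of annual maximum daily rainfall (mm); the paper uses 1982-2010 records.
        amdr = rng.gumbel(loc=55.0, scale=22.0, size=29)

        candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
                      "weibull": stats.weibull_min, "pearson5": stats.invgamma}

        # Rank the candidate models with the Kolmogorov-Smirnov statistic (smaller = better fit).
        fits = {}
        for name, dist in candidates.items():
            params = dist.fit(amdr)
            ks = stats.kstest(amdr, dist.cdf, args=params).statistic
            fits[name] = (ks, dist, params)
            print(f"{name:10s} K-S statistic = {ks:.3f}")

        best_name = min(fits, key=lambda k: fits[k][0])
        _, best_dist, best_params = fits[best_name]

        # Design values for selected return periods T: the (1 - 1/T) quantile of the best fit model.
        for T in (2, 5, 10, 20, 25):
            print(f"T = {T:2d} yr: MDR = {best_dist.ppf(1 - 1 / T, *best_params):.0f} mm")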

  2. Mathematical estimates of recovery after loss of activity: II. Long-range connectivity facilitates rapid functional recovery.

    PubMed

    Hübler, Merla J; Buchman, Timothy G

    2008-02-01

    To model the effects of system connectedness on recovery of dysfunctional tissues. One-dimensional elementary cellular automata models with small-world features, where the center-input for a few cells comes not from itself but, with a given probability, from another cell. This probability represents the connectivity of the network. The long-range connections are chosen randomly to survey the potential influences of distant information flowing into a local region. MATLAB and Mathematica computing environments. None. None. We determined the recovery rate of the entropy after perturbing a uniformly dormant system. We observed that the recovery of normal activity after perturbation of a dormant system had the characteristics of an epidemic. Moreover, we found that the rate of recovery to normal steady-state activity increased rapidly even for small amounts of long-range connectivity. Findings obtained through numerical simulation were verified through analytical solutions. This study links our hypothesis that multiple organ function syndromes represent recoupling failure with a mathematical model showing the contribution of such coupling to reactivation of dormant systems. The implication is that strategies aimed not at target tissues or target organs but rather at restoring the quality and quantity of interconnections across those tissues and organs may be a novel therapeutic strategy.

  3. On the radiobiological impact of metal artifacts in head-and-neck IMRT in terms of tumor control probability (TCP) and normal tissue complication probability (NTCP).

    PubMed

    Kim, Yusung; Tomé, Wolfgang A

    2007-11-01

    To investigate the effects of distorted head-and-neck (H&N) intensity-modulated radiation therapy (IMRT) dose distributions (hot and cold spots) on normal tissue complication probability (NTCP) and tumor control probability (TCP) due to dental-metal artifacts. Five patients' IMRT treatment plans have been analyzed, employing five different planning image data-sets: (a) uncorrected (UC); (b) homogeneous uncorrected (HUC); (c) sinogram completion corrected (SCC); (d) minimum-value-corrected (MVC); and (e) streak-artifact-reduction including minimum-value-correction (SAR-MVC), which has been taken as the reference data-set. The effects on NTCP and TCP were evaluated using the Lyman-NTCP model and the Logistic-TCP model, respectively. When compared to the predicted NTCP obtained using the reference data-set, the treatment plan based on the original CT data-set (UC) yielded an increase in NTCP of 3.2 and 2.0% for the spared parotid gland and the spinal cord, respectively, while for the treatment plans based on the MVC CT data-set the NTCP increased by 1.1% and 0.1% for the spared parotid glands and the spinal cord, respectively. In addition, the MVC correction method showed a reduction in TCP for target volumes (MVC: delta TCP = -0.6% vs. UC: delta TCP = -1.9%) with respect to that of the reference CT data-set. Our results indicate that the presence of dental-metal artifacts in H&N planning CT data-sets has an impact on the estimates of TCP and NTCP. In particular, dental-metal artifacts lead to an increase in NTCP for the spared parotid glands and a slight decrease in TCP for target volumes.

  4. Ditching Investigation of a 1/12-Scale Model of the Douglas F4D-1 Airplane, TED No. NACA DE 384

    NASA Technical Reports Server (NTRS)

    Windham, John O.

    1956-01-01

    A ditching investigation was made of a 1/12-scale dynamically similar model of the Douglas F4D-1 airplane to study its behavior when ditched. The model was landed in calm water at the Langley tank no. 2 monorail. Various landing attitudes, speeds, and configurations were investigated. The behavior of the model was determined from visual observations, acceleration records, and motion-picture records of the ditchings. Data are presented in tables, sequence photographs, time-history acceleration curves, and attitude curves. From the results of the investigation, it was concluded that the airplane should be ditched at the lowest speed and highest attitude consistent with adequate control (near 22 deg) with landing gear retracted. In a calm-water ditching under these conditions the airplane will probably nose in slightly, then make a fairly smooth run. The fuselage bottom will sustain appreciable damage so that rapid flooding and short flotation time are likely. Maximum longitudinal deceleration will be about 4g and maximum normal acceleration will be about 6g in a landing run of about 420 feet. In a calm-water ditching under similar conditions with the landing gear extended, the airplane will probably dive. Maximum longitudinal decelerations will be about 5-1/2g and maximum normal accelerations will be about 3-1/2g in a landing run of about 170 feet.

  5. The Impact of Breastfeeding on Early Childhood Obesity: Evidence From the National Survey of Children's Health.

    PubMed

    Hansstein, Francesca V

    2016-03-01

    To investigate how breastfeeding initiation and duration affect the likelihood of being overweight and obese in children aged 2 to 5. Cross-sectional data from the 2003 National Survey of Children's Health. Rural and urban areas of the United States. Households where at least one member was between the ages of 2 and 5 (sample size 8207). Parent-reported body mass index, breastfeeding initiation and duration, covariates (gender, family income and education, ethnicity, child care attendance, maternal health and physical activity, residential area). Partial proportional odds models. In early childhood, breastfed children had 5.3% higher probability of being normal weight (p = .002) and 8.9% (p < .001) lower probability of being obese compared to children who had never been breastfed. Children who had been breastfed for less than 3 months had 3.1% lower probability of being normal weight (p = .013) and 4.7% higher probability of being obese (p = .013) with respect to children who had been breastfed for 3 months and above. Study findings suggest that length of breastfeeding, whether exclusive or not, may be associated with lower risk of obesity in early childhood. However, caution is needed in generalizing results because of the limitations of the analysis. Based on findings from this study and others, breastfeeding promotion policies can cite the potential protective effect that breastfeeding has on weight in early childhood. © The Author(s) 2016.

  6. Decisions under risk in Parkinson's disease: preserved evaluation of probability and magnitude.

    PubMed

    Sharp, Madeleine E; Viswanathan, Jayalakshmi; McKeown, Martin J; Appel-Cresswell, Silke; Stoessl, A Jon; Barton, Jason J S

    2013-11-01

    Unmedicated Parkinson's disease patients tend to be risk-averse while dopaminergic treatment causes a tendency to take risks. While dopamine agonists may result in clinically apparent impulse control disorders, treatment with levodopa also causes shift in behaviour associated with an enhanced response to rewards. Two important determinants in decision-making are how subjects perceive the magnitude and probability of outcomes. Our objective was to determine if patients with Parkinson's disease on or off levodopa showed differences in their perception of value when making decisions under risk. The Vancouver Gambling task presents subjects with a choice between one prospect with larger outcome and a second with higher probability. Eighteen age-matched controls and eighteen patients with Parkinson's disease before and after levodopa were tested. In the Gain Phase subjects chose between one prospect with higher probability and another with larger reward to maximize their gains. In the Loss Phase, subjects played to minimize their losses. Patients with Parkinson's disease, on or off levodopa, were similar to controls when evaluating gains. However, in the Loss Phase before levodopa, they were more likely to avoid the prospect with lower probability but larger loss, as indicated by the steeper slope of their group psychometric function (t(24) = 2.21, p = 0.04). Modelling with prospect theory suggested that this was attributable to a 28% overestimation of the magnitude of loss, rather than an altered perception of its probability. While pre-medicated patients with Parkinson's disease show risk-aversion for large losses, patients on levodopa have normal perception of magnitude and probability for both loss and gain. The finding of accurate and normally biased decisions under risk in medicated patients with PD is important because it indicates that, if there is indeed anomalous risk-seeking behaviour in such a cohort, it may derive from abnormalities in components of decision making that are separate from evaluations of size and probability. © 2013 Elsevier Ltd. All rights reserved.

  7. Weighing Clinical Evidence Using Patient Preferences: An Application of Probabilistic Multi-Criteria Decision Analysis.

    PubMed

    Broekhuizen, Henk; IJzerman, Maarten J; Hauber, A Brett; Groothuis-Oudshoorn, Catharina G M

    2017-03-01

    The need for patient engagement has been recognized by regulatory agencies, but there is no consensus about how to operationalize this. One approach is the formal elicitation and use of patient preferences for weighing clinical outcomes. The aim of this study was to demonstrate how patient preferences can be used to weigh clinical outcomes when both preferences and clinical outcomes are uncertain by applying a probabilistic value-based multi-criteria decision analysis (MCDA) method. Probability distributions were used to model random variation and parameter uncertainty in preferences, and parameter uncertainty in clinical outcomes. The posterior value distributions and rank probabilities for each treatment were obtained using Monte-Carlo simulations. The probability of achieving the first rank is the probability that a treatment represents the highest value to patients. We illustrated our methodology for a simplified case on six HIV treatments. Preferences were modeled with normal distributions and clinical outcomes were modeled with beta distributions. The treatment value distributions showed the rank order of treatments according to patients and illustrate the remaining decision uncertainty. This study demonstrated how patient preference data can be used to weigh clinical evidence using MCDA. The model takes into account uncertainty in preferences and clinical outcomes. The model can support decision makers during the aggregation step of the MCDA process and provides a first step toward preference-based personalized medicine, yet requires further testing regarding its appropriate use in real-world settings.
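
    A compact sketch of the simulation scheme described above, with hypothetical treatments, outcomes and preference weights: normal draws for the weights, beta draws for the clinical outcomes, and first-rank probabilities from Monte Carlo.

        import numpy as np

        rng = np.random.default_rng(8)
        n_sim, treatments = 10_000, ["A", "B", "C"]

        # Hypothetical clinical outcomes (two criteria per treatment) modeled as Beta distributions,
        # e.g. posterior probabilities of viral suppression and of avoiding a side effect.
        outcome_params = {"A": [(80, 20), (60, 40)],
                          "B": [(70, 30), (75, 25)],
                          "C": [(90, 10), (40, 60)]}

        # Hypothetical patient preference weights for the two criteria, modeled as normal distributions
        # (truncated at zero and renormalized to sum to one in each simulation).
        weight_mean, weight_sd = np.array([0.6, 0.4]), np.array([0.10, 0.10])

        values = np.zeros((n_sim, len(treatments)))
        for s in range(n_sim):
            w = np.clip(rng.normal(weight_mean, weight_sd), 0.0, None)
            w /= w.sum()
            for j, t in enumerate(treatments):
                outcomes = [rng.beta(a, b) for a, b in outcome_params[t]]
                values[s, j] = float(np.dot(w, outcomes))      # weighted value of treatment t

        # Probability that each treatment ranks first, i.e. offers the highest value to patients.
        first = np.argmax(values, axis=1)
        for j, t in enumerate(treatments):
            print(f"P({t} ranks first) = {np.mean(first == j):.3f}")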

  8. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The accuracy of the models was compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
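
    As a hedged illustration of the general technique, the sketch below builds a non-parametric discriminant rule from class-conditional kernel density estimates with a normal (Gaussian) kernel; the two-feature credit data are fabricated and the bandwidth is SciPy's default, so this is only a sketch, not the study's implementation.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        # Fabricated applicant features (e.g. income, debt ratio) for good/bad credit classes.
        good = rng.normal([3.0, 0.3], 0.5, size=(200, 2))
        bad = rng.normal([2.0, 0.7], 0.5, size=(100, 2))

        # Class-conditional densities estimated with a normal (Gaussian) kernel.
        f_good, f_bad = gaussian_kde(good.T), gaussian_kde(bad.T)
        prior_good = len(good) / (len(good) + len(bad))

        def classify(x):
            """Assign each row of x to the class with the larger posterior density."""
            p_good = prior_good * f_good(x.T)
            p_bad = (1.0 - prior_good) * f_bad(x.T)
            return np.where(p_good >= p_bad, "good", "bad")

        print(classify(np.array([[2.8, 0.35], [1.9, 0.8]])))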

  9. Wind models for the NSTS ascent trajectory biasing for wind load alleviation

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Adelfang, S. I.; Batts, G. W.; Hill, C. K.

    1989-01-01

    New concepts are presented for aerospace vehicle ascent wind profile biasing. The purpose for wind biasing the ascent trajectory is to provide ascent wind loads relief and thus decrease the probability for launch delays due to wind loads exceeding critical limits. Wind biasing trajectories to the profile of monthly mean winds have been widely used for this purpose. The wind profile models presented give additional alternatives for wind biased trajectories. They are derived from the properties of the bivariate normal probability function using the available wind statistical parameters for the launch site. The analytical expressions are presented to permit generalizations. Specific examples are given to illustrate the procedures. The wind profile models can be used to establish the ascent trajectory steering commands to guide the vehicle through the first stage. For the National Space Transportation System (NSTS) program these steering commands are called I-loads.
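
    A small numerical illustration, under assumed statistics, of the bivariate normal property that underlies this kind of wind biasing: the conditional mean and standard deviation of the wind at one level given the wind at another. The numbers are hypothetical and are not launch-site data.

        import numpy as np

        # Hypothetical monthly zonal-wind statistics at two altitudes (m/s).
        mu1, sigma1 = 10.0, 5.0     # lower level
        mu2, sigma2 = 25.0, 8.0     # upper level
        rho = 0.8                   # inter-level correlation
        u1 = 18.0                   # assumed wind observed at the lower level

        # Conditional distribution of the upper-level wind for a bivariate normal.
        mu2_given_1 = mu2 + rho * sigma2 / sigma1 * (u1 - mu1)
        sigma2_given_1 = sigma2 * np.sqrt(1.0 - rho**2)
        print(f"conditional mean {mu2_given_1:.1f} m/s, conditional std {sigma2_given_1:.1f} m/s")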

  10. Radiobiological Impact of Planning Techniques for Prostate Cancer in Terms of Tumor Control Probability and Normal Tissue Complication Probability

    PubMed Central

    Rana, S; Cheng, CY

    2014-01-01

    Background: The radiobiological models describe the effects of the radiation treatment on cancer and healthy cells, and the radiobiological effects are generally characterized by the tumor control probability (TCP) and normal tissue complication probability (NTCP). Aim: The purpose of this study was to assess the radiobiological impact of RapidArc planning techniques for prostate cancer in terms of TCP and NTCP. Subjects and Methods: A computed tomography data set of ten cases involving low-risk prostate cancer was selected for this retrospective study. For each case, two RapidArc plans were created in the Eclipse treatment planning system. The double arc (DA) plan was created using two full arcs and the single arc (SA) plan was created using one full arc. All treatment plans were calculated with the anisotropic analytical algorithm. Radiobiological response evaluation was performed by calculating Niemierko's equivalent uniform dose (EUD)-based TCP and NTCP values. Results: For the prostate tumor, the average EUD in the SA plans was slightly higher than in the DA plans (78.10 Gy vs. 77.77 Gy; P = 0.01), but the average TCP was comparable (98.3% vs. 98.3%; P = 0.01). In comparison to the DA plans, the SA plans produced higher average EUD to bladder (40.71 Gy vs. 40.46 Gy; P = 0.03) and femoral heads (10.39 Gy vs. 9.40 Gy; P = 0.03), whereas both techniques produced NTCP well below 0.1% for bladder (P = 0.14) and femoral heads (P = 0.26). In contrast, the SA plans produced higher average NTCP for the rectum compared to the DA plans (2.2% vs. 1.9%; P = 0.01). Furthermore, the EUD to rectum was slightly higher in the SA plans (62.88 Gy vs. 62.22 Gy; P = 0.01). Conclusion: The SA and DA techniques produced similar TCP for low-risk prostate cancer. The NTCP for femoral heads and bladder was comparable in the SA and DA plans; however, the SA technique resulted in higher NTCP for rectum in comparison with the DA technique. PMID:24761232
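
    For context, a brief sketch of Niemierko's EUD-based TCP and NTCP formulas referenced above, with fabricated differential DVH bins and illustrative parameter values (a, TCD50, TD50, gamma50) that are not taken from the study.

        import numpy as np

        def eud(doses_gy, volumes, a):
            """Generalised equivalent uniform dose for a differential DVH."""
            v = np.asarray(volumes) / np.sum(volumes)
            return (v @ np.asarray(doses_gy) ** a) ** (1.0 / a)

        def tcp(eud_gy, tcd50=60.0, gamma50=2.0):
            return 1.0 / (1.0 + (tcd50 / eud_gy) ** (4.0 * gamma50))

        def ntcp(eud_gy, td50=80.0, gamma50=4.0):
            return 1.0 / (1.0 + (td50 / eud_gy) ** (4.0 * gamma50))

        # Hypothetical differential DVHs (dose bins in Gy, relative volumes).
        tumor_eud = eud([76, 78, 80], [0.2, 0.6, 0.2], a=-10)   # negative a for tumors
        rectum_eud = eud([30, 50, 65], [0.5, 0.3, 0.2], a=8)    # positive a for serial organs
        print(f"TCP = {tcp(tumor_eud):.3f}, rectal NTCP = {ntcp(rectum_eud):.3f}")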

  11. Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin

    USGS Publications Warehouse

    Bar-Massada, A.; Radeloff, V.C.; Stewart, S.I.; Hawbaker, T.J.

    2009-01-01

    The rapid growth of housing in and near the wildland-urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT) in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown. Our approach to modeling wildfire risk to structures can aid fire risk reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions. © 2009 Elsevier B.V.

  12. A ghrelin gene variant may predict crossover rate from restricting-type anorexia nervosa to other phenotypes of eating disorders: a retrospective survival analysis.

    PubMed

    Ando, Tetsuya; Komaki, Gen; Nishimura, Hiroki; Naruo, Tetsuro; Okabe, Kenjiro; Kawai, Keisuke; Takii, Masato; Oka, Takakazu; Kodama, Naoki; Nakamoto, Chiemi; Ishikawa, Toshio; Suzuki-Hotta, Mari; Minatozaki, Kazunori; Yamaguchi, Chikara; Nishizono-Maher, Aya; Kono, Masaki; Kajiwara, Sohei; Suematsu, Hiroyuki; Tomita, Yuichiro; Ebana, Shoichi; Okamoto, Yuri; Nagata, Katsutaro; Nakai, Yoshikatsu; Koide, Masanori; Kobayashi, Nobuyuki; Kurokawa, Nobuo; Nagata, Toshihiko; Kiriike, Nobuo; Takenaka, Yoshito; Nagamine, Kiyohide; Ookuma, Kazuyoshi; Murata, Shiho

    2010-08-01

    Patients with anorexia nervosa restricting type (AN-R) often develop bulimic symptoms and crossover to AN-binge eating/purging type (AN-BP), or to bulimia nervosa (BN). We have reported earlier that genetic variants of an orexigenic peptide ghrelin are associated with BN. Here, the relationship between a ghrelin gene variant and the rate of change from AN-R to other phenotypes of eating disorders (EDs) was investigated. Participants were 165 patients with ED, initially diagnosed as AN-R. The dates of their AN-R onset and changes in diagnosis to other subtypes of ED were investigated retrospectively. Ghrelin gene 3056 T-->C SNP (single nucleotide polymorphism) was genotyped. Probability and hazard ratios were analyzed using life table analysis and Cox's proportional hazard regression model, in which the starting point was the time of AN-R onset and the outcome events were the time of (i) onset of binge eating, that is, when patients changed to binge eating AN and BN and (ii) recovery of normal weight, that is, when patients changed to BN or remission. Patients with the TT genotype at 3056 T-->C had a higher probability and hazard ratio for recovery of normal weight. The ghrelin SNP was not related with the onset of binge eating. The 3056 T-->C SNP of the ghrelin gene is related to the probability and the rate of recovery of normal body weight from restricting-type AN.
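
    A compact, purely illustrative sketch of the survival-analysis step described above: a Kaplan-Meier-type estimate of the probability of the outcome event (e.g. recovery of normal weight) by genotype group, computed from fabricated event times and censoring indicators.

        import numpy as np

        def km_curve(months, events):
            """Kaplan-Meier estimate from event times and indicators (1 = event, 0 = censored)."""
            order = np.argsort(months)
            months, events = np.asarray(months)[order], np.asarray(events)[order]
            at_risk, surv, curve = len(months), 1.0, []
            for t, e in zip(months, events):
                if e:                       # outcome event, e.g. recovery of normal weight
                    surv *= 1.0 - 1.0 / at_risk
                at_risk -= 1
                curve.append((t, surv))
            return curve

        # Fabricated months-to-event data for two genotype groups.
        tt_group = km_curve([6, 9, 12, 15, 30], [1, 1, 1, 0, 1])
        tc_cc_group = km_curve([8, 20, 24, 36, 40], [1, 0, 0, 1, 0])
        print("TT:", tt_group[-1], " TC/CC:", tc_cc_group[-1])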

  13. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes the use of best-estimate values of the probability distribution of different magnitude earthquakes recurring from a fault from literature sources. Our study aims to apply this model to the Taipei metropolitan area with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years at 13.54% and within 300 years at 42.45%. The 79 year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probability of Taiwan residents experiencing heart disease or malignant neoplasm is 11.5% and 29%, respectively. The inference of this study is that the risk to the Taipei population of a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is comparable to that of suffering from a heart attack or other health ailments.
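
    As a rough check, a memoryless Poisson occurrence model gives P(at least one event in t years) = 1 - exp(-t/T) for return period T; with T = 543 years this closely reproduces the magnitude 6 figures quoted above, while the magnitude 7 values presumably reflect additional weighting in the revised model.

        import numpy as np

        def poisson_exceedance(t_years, return_period_years):
            """Probability of at least one event in t years under a Poisson model."""
            return 1.0 - np.exp(-t_years / return_period_years)

        for t in (20, 79, 300):
            print(t, "years:", f"{poisson_exceedance(t, 543):.2%}")   # 543-year return period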

  14. Statistical-Dynamical Seasonal Forecasts of Central-Southwest Asian Winter Precipitation.

    NASA Astrophysics Data System (ADS)

    Tippett, Michael K.; Goddard, Lisa; Barnston, Anthony G.

    2005-06-01

    Interannual precipitation variability in central-southwest (CSW) Asia has been associated with East Asian jet stream variability and western Pacific tropical convection. However, atmospheric general circulation models (AGCMs) forced by observed sea surface temperature (SST) poorly simulate the region's interannual precipitation variability. The statistical-dynamical approach uses statistical methods to correct systematic deficiencies in the response of AGCMs to SST forcing. Statistical correction methods linking model-simulated Indo-west Pacific precipitation and observed CSW Asia precipitation result in modest, but statistically significant, cross-validated simulation skill in the northeast part of the domain for the period from 1951 to 1998. The statistical-dynamical method is also applied to recent (winter 1998/99 to 2002/03) multimodel, two-tier December-March precipitation forecasts initiated in October. This period includes 4 yr (winter of 1998/99 to 2001/02) of severe drought. Tercile probability forecasts are produced using ensemble-mean forecasts and forecast error estimates. The statistical-dynamical forecasts show enhanced probability of below-normal precipitation for the four drought years and capture the return to normal conditions in part of the region during the winter of 2002/03. "May Kabul be without gold, but not without snow." —Traditional Afghan proverb

  15. Bayesian Model Selection in Geophysics: The evidence

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2016-12-01

    Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site, in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute force Monte Carlo method, and the Laplace-Metropolis method.
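
    A minimal sketch of the brute-force Monte Carlo evidence estimate used here as the comparison method: P(D) is approximated by averaging the likelihood over draws from the prior. The one-parameter Gaussian toy problem below is purely illustrative and unrelated to the GPR data.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)

        # Toy problem: data d_i ~ N(theta, 1) with a N(0, 5) prior on theta.
        data = rng.normal(1.5, 1.0, size=20)

        def log_likelihood(theta):
            return norm.logpdf(data[:, None], loc=theta, scale=1.0).sum(axis=0)

        # Brute-force Monte Carlo: P(D) is the prior-weighted average of the likelihood.
        theta_prior = rng.normal(0.0, 5.0, size=100_000)
        log_l = log_likelihood(theta_prior)
        log_evidence = np.logaddexp.reduce(log_l) - np.log(theta_prior.size)
        print("log P(D) ~", log_evidence)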

  16. Normal Tissue Complication Probability (NTCP) modeling of late rectal bleeding following external beam radiotherapy for prostate cancer: A Test of the QUANTEC-recommended NTCP model.

    PubMed

    Liu, Mitchell; Moiseenko, Vitali; Agranovich, Alexander; Karvat, Anand; Kwan, Winkle; Saleh, Ziad H; Apte, Aditya A; Deasy, Joseph O

    2010-10-01

    Validating a predictive model for late rectal bleeding following external beam treatment for prostate cancer would enable safer treatments or dose escalation. We tested the normal tissue complication probability (NTCP) model recommended in the recent QUANTEC review (quantitative analysis of normal tissue effects in the clinic). One hundred and sixty-one patients were treated with 3D conformal radiotherapy for prostate cancer at the British Columbia Cancer Agency in a prospective protocol. The total prescription dose for all patients was 74 Gy, delivered in 2 Gy/fraction. 159 3D treatment planning datasets were available for analysis. Rectal dose volume histograms were extracted and fitted to a Lyman-Kutcher-Burman NTCP model. Late rectal bleeding (>grade 2) was observed in 12/159 patients (7.5%). Multivariate logistic regression with dose-volume parameters (V50, V60, V70, etc.) was non-significant. Among clinical variables, only age was significant on a Kaplan-Meier log-rank test (p = 0.007, with an optimal cut point of 77 years). Best-fit Lyman-Kutcher-Burman model parameters (with 95% confidence intervals) were: n = 0.068 (0.01, +infinity); m = 0.14 (0.0, 0.86); and TD50 = 81 (27, 136) Gy. The peak values fall within the 95% QUANTEC confidence intervals. On this dataset, both models had only modest ability to predict complications: the best-fit model had a Spearman's rank correlation coefficient of rs = 0.099 (p = 0.11) and an area under the receiver operating characteristic curve (AUC) of 0.62; the QUANTEC model had rs = 0.096 (p = 0.11) and a corresponding AUC of 0.61. Although the QUANTEC model consistently predicted higher NTCP values, it could not be rejected according to the χ² test (p = 0.44). Observed complications, and best-fit parameter estimates, were consistent with the QUANTEC-preferred NTCP model. However, predictive power was low, at least partly because the rectal dose distribution characteristics do not vary greatly within this patient cohort.
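
    For reference, a short sketch of the Lyman-Kutcher-Burman NTCP calculation using the best-fit parameters reported above (n = 0.068, m = 0.14, TD50 = 81 Gy); the rectal dose-volume histogram is fabricated for illustration.

        import numpy as np
        from scipy.stats import norm

        def geud(doses_gy, volumes, n):
            """Generalised EUD used in the LKB model (volume exponent a = 1/n)."""
            v = np.asarray(volumes) / np.sum(volumes)
            a = 1.0 / n
            return (v @ np.asarray(doses_gy) ** a) ** (1.0 / a)

        def lkb_ntcp(doses_gy, volumes, n=0.068, m=0.14, td50=81.0):
            t = (geud(doses_gy, volumes, n) - td50) / (m * td50)
            return norm.cdf(t)

        # Hypothetical differential rectal DVH (dose bins in Gy, relative volumes).
        print(f"NTCP = {lkb_ntcp([30, 50, 65, 74], [0.4, 0.3, 0.2, 0.1]):.3f}")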

  17. A robust method to forecast volcanic ash clouds

    USGS Publications Warehouse

    Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin

    2012-01-01

    Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
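
    A schematic sketch of the general workflow described above: sample source parameters, run a transport model, score each run against observations with a Gaussian likelihood, and normalise via Bayes theorem to weight the forecasts. The toy "transport model" and synthetic data below are placeholders, not a real ash-dispersal code.

        import numpy as np

        rng = np.random.default_rng(7)

        def toy_transport_model(source_height_km, distance_km):
            """Stand-in for an ash transport model: predicted column load versus distance."""
            return np.exp(-distance_km / (10.0 * source_height_km))

        grid = np.linspace(10, 200, 25)
        observed = toy_transport_model(8.0, grid) + rng.normal(0, 0.02, grid.size)   # synthetic "satellite" data
        sigma = 0.02

        # 1. Random sampling of the source-parameter prior (eruption column height).
        heights = rng.uniform(2.0, 15.0, size=5000)
        # 2. Misfit and likelihood for each model run.
        misfit = np.array([np.sum((toy_transport_model(h, grid) - observed) ** 2) for h in heights])
        log_like = -0.5 * misfit / sigma**2
        # 3. Bayes theorem: normalised posterior weights over the sampled sources.
        w = np.exp(log_like - log_like.max())
        w /= w.sum()
        print("posterior mean column height ~", np.sum(w * heights), "km")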

  18. Multivariate Normal Tissue Complication Probability Modeling of Heart Valve Dysfunction in Hodgkin Lymphoma Survivors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cella, Laura; Liuzzi, Raffaele

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radio-induced heart toxicity.
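
    A simplified stand-in for the resampling-based model building described above, using scikit-learn logistic regression on fabricated dose and volume predictors; the bootstrap loop here only illustrates how resampling can gauge the stability of a fitted NTCP model's AUC, not the authors' exact selection procedure.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        n = 56
        # Fabricated, standardised predictors: heart maximum dose, heart volume, lung volume.
        X = rng.normal(size=(n, 3))
        logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.6 * X[:, 2] - 1.0
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))      # synthetic RVD outcomes

        model = LogisticRegression().fit(X, y)                  # 3-variable NTCP model
        aucs = []
        for _ in range(500):                                    # bootstrap assessment of stability
            idx = rng.integers(0, n, n)
            if len(set(y[idx])) < 2:
                continue                                        # skip degenerate resamples
            m = LogisticRegression().fit(X[idx], y[idx])
            aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
        print("apparent AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
        print("bootstrap AUC (mean, sd):", np.mean(aucs), np.std(aucs))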

  19. Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio

    1993-02-01

    The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.

  20. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative distribution to the gamma-gamma (GG) distribution, called inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit among the IGG, log-normal, and GG distributions with the experimental probability density functions in moderate-to-strong turbulence are compared, and results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. As the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
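
    A quick numerical illustration of the modulation idea behind the IGG model: irradiance is simulated as the product of a gamma-distributed small-scale factor and an inverse-Gaussian (Wald) large-scale factor, each with unit mean. The shape parameters are arbitrary and not tied to any particular atmospheric conditions.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 200_000

        alpha = 4.0                      # gamma shape (unit mean, so scale = 1/alpha)
        lam = 2.0                        # inverse Gaussian shape parameter
        small = rng.gamma(alpha, 1.0 / alpha, n)     # small-scale fluctuations
        large = rng.wald(1.0, lam, n)                # large-scale fluctuations

        irradiance = small * large       # modulated (IGG-type) irradiance samples
        si2 = irradiance.var() / irradiance.mean() ** 2
        print("scintillation index ~", si2)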

  1. Estimation in a discrete tail rate family of recapture sampling models

    NASA Technical Reports Server (NTRS)

    Gupta, Rajan; Lee, Larry D.

    1990-01-01

    In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.

  2. Quantitative evaluation of Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Duchesne, S.; Frisoni, G. B.

    2009-02-01

    We propose a single, quantitative metric called the disease evaluation factor (DEF) and assess its efficiency at estimating disease burden in normal control subjects (CTRL) and probable Alzheimer's disease (AD) patients. The study group consisted of 75 patients with a diagnosis of probable AD and 75 age-matched normal CTRL without neurological or neuropsychological deficit. We calculated a reference eigenspace of MRI appearance from reference data, in which our CTRL and probable AD subjects were projected. We then calculated the multi-dimensional hyperplane separating the CTRL and probable AD groups. The DEF was estimated via a multidimensional weighted distance of eigencoordinates for a given subject and the CTRL group mean, along salient principal components forming the separating hyperplane. We used quantile plots, Kolmogorov-Smirnov and χ2 tests to compare the DEF values and test that their distribution was normal. We used a linear discriminant test to separate CTRL from probable AD based on the DEF factor, and reached an accuracy of 87%. A quantitative biomarker in AD would act as an important surrogate marker of disease status and progression.

  3. Approximating Multivariate Normal Orthant Probabilities. ONR Technical Report. [Biometric Lab Report No. 90-1.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the n × n correlation matrix of the χ_i and the standardized multivariate…
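
    For a modern numerical counterpart to the approximations discussed in this report, the positive orthant probability of a zero-mean multivariate normal can be evaluated directly, e.g. with SciPy; the correlation matrix below is illustrative.

        import numpy as np
        from scipy.stats import multivariate_normal

        corr = np.array([[1.0, 0.5, 0.3],
                         [0.5, 1.0, 0.4],
                         [0.3, 0.4, 1.0]])

        # For a zero-mean normal, P(X1 <= 0, X2 <= 0, X3 <= 0) equals the positive
        # orthant probability P(X1 > 0, X2 > 0, X3 > 0) by symmetry.
        mvn = multivariate_normal(mean=np.zeros(3), cov=corr)
        print("orthant probability ~", mvn.cdf(np.zeros(3)))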

  4. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
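
    As a small, generic illustration of the normal-distribution probabilities the article covers (not an example taken from the article), the following computes the probability mass within 1.96 standard deviations of the mean, the basis of a 95% confidence interval.

        from scipy.stats import norm

        # Probability that a normally distributed value lies within 1.96 standard
        # deviations of the mean.
        print(norm.cdf(1.96) - norm.cdf(-1.96))   # approximately 0.95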

  5. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
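
    A minimal illustration of the Box-Cox step at the heart of this method, using SciPy's maximum-likelihood estimate of the transformation parameter on a skewed one-dimensional sample; the full method generalises this idea to multivariate posteriors and checks the Gaussianized credible regions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        sample = rng.lognormal(mean=0.0, sigma=0.6, size=5000)   # skewed "posterior" sample

        transformed, lmbda = stats.boxcox(sample)                # ML estimate of the Box-Cox lambda
        print("lambda ~", lmbda)
        print("skewness before/after:", stats.skew(sample), stats.skew(transformed))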

  6. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  7. Flow behaviour in normal and Meniere’s disease of endolymphatic fluid inside the inner ear

    NASA Astrophysics Data System (ADS)

    Paisal, Muhammad Sufyan Amir; Azmi Wahab, Muhamad; Taib, Ishkrizat; Mat Isa, Norasikin; Ramli, Yahaya; Seri, Suzairin Md; Darlis, Nofrizalidris; Osman, Kahar; Khudzari, Ahmad Zahran Md; Nordin, Normayati

    2017-09-01

    Meniere’s disease is a rare disorder that affects the inner ear and may become more severe if not treated. This is due to the fluctuating pressure of the fluid in the endolymphatic sac and dysfunction of the cochlea, which cause stretching of the vestibular membrane. However, the pattern of flow recirculation in the endolymphatic region is still not fully understood. Thus, this study aims to investigate the correlation between the increasing volume of endolymphatic fluid and flow characteristics such as velocity, pressure and wall shear stress. A three-dimensional model of the simplified endolymphatic region is built using computer aided design (CAD) software and simulated using computational fluid dynamics (CFD) software. Three different models are investigated: a normal (N) model, a Meniere’s disease model with lower severity (M1) and a Meniere’s disease model with higher severity (M2). From the results, the pressure drop between the inlet and outlet of the inner ear decreases as the outlet pressure and the endolymphatic volume increase. However, the constant flow rate imposed at the endolymphatic inlet shows the lowest velocity. Flow recirculation near the endolymphatic region occurs as the endolymphatic volume increases. Overall, high velocity is observed near the cochlear duct, ductus reuniens and endolymphatic duct. Hence, these areas show high wall shear stress (WSS), indicating a high probability of dilation of the endolymphatic wall membrane. Thus, the more severe the Meniere’s disease condition, the more complex the flow characteristics. A high probability of rupture is therefore predicted at certain areas in the anatomy of the vestibular system.

  8. Risk stratification using stress echocardiography: incremental prognostic value over historic, clinical, and stress electrocardiographic variables across a wide spectrum of bayesian pretest probabilities for coronary artery disease.

    PubMed

    Bangalore, Sripal; Gopinath, Devi; Yao, Siu-Sun; Chaudhry, Farooq A

    2007-03-01

    We sought to evaluate the risk stratification ability and incremental prognostic value of stress echocardiography over historic, clinical, and stress electrocardiographic (ECG) variables, over a wide spectrum of bayesian pretest probabilities of coronary artery disease (CAD). Stress echocardiography is an established technique for the diagnosis of CAD. However, data on incremental prognostic value of stress echocardiography over historic, clinical, and stress ECG variables in patients with known or suggested CAD is limited. We evaluated 3259 patients (60 +/- 13 years, 48% men) undergoing stress echocardiography. Patients were grouped into low (<15%), intermediate (15-85%), and high (>85%) pretest CAD likelihood subgroups using standard software. The historical, clinical, stress ECG, and stress echocardiographic variables were recorded for the entire cohort. Follow-up (2.7 +/- 1.1 years) for confirmed myocardial infarction (n = 66) and cardiac death (n = 105) was obtained. For the entire cohort, an ischemic stress echocardiography study confers a 5.0 times higher cardiac event rate than the normal stress echocardiography group (4.0% vs 0.8%/y, P < .0001). Furthermore, Cox proportional hazard regression model showed incremental prognostic value of stress echocardiography variables over historic, clinical, and stress ECG variables across all pretest probability subgroups (global chi2 increased from 5.1 to 8.5 to 20.1 in the low pretest group, P = .44 and P = .01; from 20.9 to 28.2 to 116 in the intermediate pretest group, P = .47 and P < .0001; and from 17.5 to 36.6 to 61.4 in the high pretest group, P < .0001 for both groups). A normal stress echocardiography portends a benign prognosis (<1% event rate/y) in all pretest probability subgroups and even in patients with high pretest probability and yields incremental prognostic value over historic, clinical, and stress ECG variables across all pretest probability subgroups. The best incremental value is, however, in the intermediate pretest probability subgroup.

  9. Assessment of normal tissue complications following prostate cancer irradiation: Comparison of radiation treatment modalities using NTCP models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takam, Rungdham; Bezak, Eva; Yeoh, Eric E.

    2010-09-15

    Purpose: Normal tissue complication probability (NTCP) of the rectum, bladder, urethra, and femoral heads following several techniques for radiation treatment of prostate cancer were evaluated applying the relative seriality and Lyman models. Methods: Model parameters from literature were used in this evaluation. The treatment techniques included external (standard fractionated, hypofractionated, and dose-escalated) three-dimensional conformal radiotherapy (3D-CRT), low-dose-rate (LDR) brachytherapy (I-125 seeds), and high-dose-rate (HDR) brachytherapy (Ir-192 source). Dose-volume histograms (DVHs) of the rectum, bladder, and urethra retrieved from corresponding treatment planning systems were converted to biological effective dose-based and equivalent dose-based DVHs, respectively, in order to account for differences in radiation treatment modality and fractionation schedule. Results: Results indicated that with hypofractionated 3D-CRT (20 fractions of 2.75 Gy/fraction delivered five times/week to total dose of 55 Gy), NTCP of the rectum, bladder, and urethra were less than those for standard fractionated 3D-CRT using a four-field technique (32 fractions of 2 Gy/fraction delivered five times/week to total dose of 64 Gy) and dose-escalated 3D-CRT. Rectal and bladder NTCPs (5.2% and 6.6%, respectively) following the dose-escalated four-field 3D-CRT (2 Gy/fraction to total dose of 74 Gy) were the highest among analyzed treatment techniques. The average NTCP for the rectum and urethra were 0.6% and 24.7% for LDR-BT and 0.5% and 11.2% for HDR-BT. Conclusions: Although brachytherapy techniques resulted in delivering larger equivalent doses to normal tissues, the corresponding NTCPs were lower than those of external beam techniques other than the urethra because of much smaller volumes irradiated to higher doses. Among analyzed normal tissues, the femoral heads were found to have the lowest probability of complications as most of their volume was irradiated to lower equivalent doses compared to other tissues.

  10. Semiparametric Bayesian classification with longitudinal markers

    PubMed Central

    De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter

    2013-01-01

    Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871

  11. The multi temporal/multi-model approach to predictive uncertainty assessment in real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio

    2017-08-01

    This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating the improvement on the single-temporal one. The study is framed in the context of probabilistic Bayesian decision-making that is appropriate to take rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, the probabilistic forecast identifies a predictive probability density function that represents a fundamental knowledge on future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead-time and when, more likely, it will occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecasted by two operative deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected on a time series of six years. MCP-MT improves over the original models' forecasts: the peak overestimation and the rising limb delayed forecast, characterizing MISDc and STAFOM-RCM respectively, are significantly mitigated, with a reduced mean error on peak stage from 45 to 5 cm and an increased coefficient of persistence from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.

  12. Representation of the contextual statistical model by hyperbolic amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrennikov, Andrei

    We continue the development of a so-called contextual statistical model (here context has the meaning of a complex of physical conditions). It is shown that, besides contexts producing the conventional trigonometric cos-interference, there exist contexts producing the hyperbolic cos-interference. Starting with the corresponding interference formula of total probability we represent such contexts by hyperbolic probabilistic amplitudes or in the abstract formalism by normalized vectors of a hyperbolic analogue of the Hilbert space. There is obtained a hyperbolic Born's rule. Incompatible observables are represented by noncommutative operators. This paper can be considered as the first step towards hyperbolic quantum probability. We also discuss possibilities of experimental verification of hyperbolic quantum mechanics: in physics of elementary particles, string theory as well as in experiments with nonphysical systems, e.g., in psychology, cognitive sciences, and economy.

  13. Representation of the contextual statistical model by hyperbolic amplitudes

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2005-06-01

    We continue the development of a so-called contextual statistical model (here context has the meaning of a complex of physical conditions). It is shown that, besides contexts producing the conventional trigonometric cos-interference, there exist contexts producing the hyperbolic cos-interference. Starting with the corresponding interference formula of total probability we represent such contexts by hyperbolic probabilistic amplitudes or in the abstract formalism by normalized vectors of a hyperbolic analogue of the Hilbert space. There is obtained a hyperbolic Born's rule. Incompatible observables are represented by noncommutative operators. This paper can be considered as the first step towards hyperbolic quantum probability. We also discuss possibilities of experimental verification of hyperbolic quantum mechanics: in physics of elementary particles, string theory as well as in experiments with nonphysical systems, e.g., in psychology, cognitive sciences, and economy.

  14. Mathematical Modelling for Patient Selection in Proton Therapy.

    PubMed

    Mee, T; Kirkby, N F; Kirkby, K J

    2018-05-01

    Proton beam therapy (PBT) is still relatively new in cancer treatment and the clinical evidence base is relatively sparse. Mathematical modelling offers assistance when selecting patients for PBT and predicting the demand for service. Discrete event simulation, normal tissue complication probability, quality-adjusted life-years and Markov Chain models are all mathematical and statistical modelling techniques currently used but none is dominant. As new evidence and outcome data become available from PBT, comprehensive models will emerge that are less dependent on the specific technologies of radiotherapy planning and delivery. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  15. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  16. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  17. Specificity and mechanism of action of alpha-helical membrane-active peptides interacting with model and biological membranes by single-molecule force spectroscopy.

    PubMed

    Sun, Shiyu; Zhao, Guangxu; Huang, Yibing; Cai, Mingjun; Shan, Yuping; Wang, Hongda; Chen, Yuxin

    2016-07-01

    In this study, to systematically investigate the targeting specificity of membrane-active peptides on different types of cell membranes, we evaluated the effects of peptides on different large unilamellar vesicles mimicking prokaryotic, normal eukaryotic, and cancer cell membranes by single-molecule force spectroscopy and spectrum technology. We revealed that cationic membrane-active peptides can exclusively target negatively charged prokaryotic and cancer cell model membranes rather than normal eukaryotic cell model membranes. Using Acholeplasma laidlawii, 3T3-L1, and HeLa cells to represent prokaryotic cells, normal eukaryotic cells, and cancer cells in atomic force microscopy experiments, respectively, we further studied that the single-molecule targeting interaction between peptides and biological membranes. Antimicrobial and anticancer activities of peptides exhibited strong correlations with the interaction probability determined by single-molecule force spectroscopy, which illustrates strong correlations of peptide biological activities and peptide hydrophobicity and charge. Peptide specificity significantly depends on the lipid compositions of different cell membranes, which validates the de novo design of peptide therapeutics against bacteria and cancers.

  18. Lorenz system in the thermodynamic modelling of leukaemia malignancy.

    PubMed

    Alexeev, Igor

    2017-05-01

    The core idea of the proposed thermodynamic modelling of malignancy in leukaemia is entropy arising within normal haematopoiesis. Mathematically its description is supposed to be similar to the Lorenz system of ordinary differential equations for simplified processes of heat flow in fluids. The hypothetical model provides a description of remission and relapse in leukaemia as two hierarchical and qualitatively different states of normal haematopoiesis with their own phase spaces. Phase space transition is possible through pitchfork bifurcation, which is considered the common symmetrical scenario for relapse, induced remission and the spontaneous remission of leukaemia. Cytopenia is regarded as an adaptive reaction of haematopoiesis to an increase in entropy caused by leukaemia clones. The following predictions are formulated: a) the percentage of leukaemia cells in marrow as a criterion of remission or relapse is not necessarily constant but is a variable value; b) the probability of remission depends upon normal haematopoiesis reaching bifurcation; c) the duration of remission depends upon the eradication of leukaemia cells through induction or consolidation therapies; d) excessively high doses of chemotherapy in consolidation may induce relapse. Copyright © 2017 Elsevier Ltd. All rights reserved.
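
    Since the hypothesis leans on the Lorenz system, a standard numerical integration of those equations with the classical parameters (sigma = 10, rho = 28, beta = 8/3) is sketched below; this is the generic Lorenz model, not the paper's haematopoiesis-specific parameterisation.

        import numpy as np
        from scipy.integrate import solve_ivp

        def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = state
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        # Integrate the classical Lorenz system from a nearby starting point.
        sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], max_step=0.01)
        print("final state:", sol.y[:, -1])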

  19. Predicting Grade 3 Acute Diarrhea During Radiation Therapy for Rectal Cancer Using a Cutoff-Dose Logistic Regression Normal Tissue Complication Probability Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, John M.; Soehn, Matthias; Yan Di

    Purpose: Understanding the dose-volume relationship of small bowel irradiation and severe acute diarrhea may help reduce the incidence of this side effect during adjuvant treatment for rectal cancer. Methods and Materials: Consecutive patients treated curatively for rectal cancer were reviewed, and the maximum grade of acute diarrhea was determined. The small bowel was outlined on the treatment planning CT scan, and a dose-volume histogram was calculated for the initial pelvic treatment (45 Gy). Logistic regression models were fitted for varying cutoff-dose levels from 5 to 45 Gy in 5-Gy increments. The model with the highest LogLikelihood was used to develop a cutoff-dose normal tissue complication probability (NTCP) model. Results: There were a total of 152 patients (48% preoperative, 47% postoperative, 5% other), predominantly treated prone (95%) with a three-field technique (94%) and a protracted venous infusion of 5-fluorouracil (78%). Acute Grade 3 diarrhea occurred in 21%. The largest LogLikelihood was found for the cutoff-dose logistic regression model with 15 Gy as the cutoff-dose, although the models for 20 Gy and 25 Gy had similar significance. According to this model, highly significant correlations (p <0.001) between small bowel volumes receiving at least 15 Gy and toxicity exist in the considered patient population. Similar findings applied to both the preoperatively (p = 0.001) and postoperatively irradiated groups (p = 0.001). Conclusion: The incidence of Grade 3 diarrhea was significantly correlated with the volume of small bowel receiving at least 15 Gy using a cutoff-dose NTCP model.

  20. Monte Carlo calculation of the maximum therapeutic gain of tumor antivascular alpha therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Chen-Yu; Oborn, Bradley M.; Guatelli, Susanna

    Purpose: Metastatic melanoma lesions experienced marked regression after systemic targeted alpha therapy in a phase 1 clinical trial. This unexpected response was ascribed to tumor antivascular alpha therapy (TAVAT), in which effective tumor regression is achieved by killing endothelial cells (ECs) in tumor capillaries and, thus, depriving cancer cells of nutrition and oxygen. The purpose of this paper is to quantitatively analyze the therapeutic efficacy and safety of TAVAT by building up the testing Monte Carlo microdosimetric models. Methods: Geant4 was adapted to simulate the spatial nonuniform distribution of the alpha emitter ²¹³Bi. The intraluminal model was designed to simulate the background dose to normal tissue capillary ECs from the nontargeted activity in the blood. The perivascular model calculates the EC dose from the activity bound to the perivascular cancer cells. The key parameters are the probability of an alpha particle traversing an EC nucleus, the energy deposition, the lineal energy transfer, and the specific energy. These results were then applied to interpret the clinical trial. Cell survival rate and therapeutic gain were determined. Results: The specific energy for an alpha particle hitting an EC nucleus in the intraluminal and perivascular models is 0.35 and 0.37 Gy, respectively. As the average probability of traversal in these models is 2.7% and 1.1%, the mean specific energy per decay drops to 1.0 cGy and 0.4 cGy, which demonstrates that the source distribution has a significant impact on the dose. Using the melanoma clinical trial activity of 25 mCi, the dose to tumor EC nucleus is found to be 3.2 Gy and to a normal capillary EC nucleus to be 1.8 cGy. These data give a maximum therapeutic gain of about 180 and validate the TAVAT concept. Conclusions: TAVAT can deliver a cytotoxic dose to tumor capillaries without being toxic to normal tissue capillaries.

  1. The re-incarnation, re-interpretation and re-demise of the transition probability model.

    PubMed

    Koch, A L

    1999-05-28

    There are two classes of models for the cell cycle that have both a deterministic and a stochastic part; they are the transition probability (TP) models and sloppy size control (SSC) models. The hallmarks of the basic TP model are two graphs: the alpha and beta plots. The former is the semi-logarithmic plot of the percentage of cell divisions yet to occur; this results in a horizontal line segment at 100% corresponding to the deterministic phase and a straight, sloping tail corresponding to the stochastic part. The beta plot concerns the differences of the age-at-division of sisters (the beta curve) and gives a straight line parallel to the tail of the alpha curve. For the SSC models the deterministic part is the time needed for the cell to accumulate a critical amount of some substance(s). The variable part differs in the various variants of the general model, but they do not give alpha and beta curves with linear tails as postulated by the TP model. This paper argues against TP and for an elaboration of the SSC type of model. The main argument against TP is that it assumes that the probability of the transition from the stochastic phase is time invariant even though it is certain that the cells are growing and metabolizing throughout the cell cycle; a fact that should make the transition probability variable. The SSC models presume that cell division is triggered by the cell's success in growing and not simply the result of elapsed time. The extended model proposed here to accommodate the predictions of the SSC to the straight-tailed parts of the alpha and beta plots depends on the existence of a few percent of the cells in a growing culture that are not growing normally; these are growing much more slowly or are temporarily quiescent. The bulk of the cells, however, grow nearly exponentially. Evidence for a slow-growing component comes from experimental analyses of population size distributions for a variety of cell types by the Collins-Richmond technique. The existence of these subpopulations is consistent with the new concept that there is a large class of rapidly reversible mutations occurring in many organisms and at many loci serving a large range of purposes to enable the cell to survive environmental challenges. These mutations yield special subpopulations of cells within a population. The reversible mutational changes, relevant to the elaboration of SSC models, produce slow-growing cells that are either very large or very small in size; these later revert to normal growth and division. The subpopulations, however, distort the population distribution in such a way as to fit better the exponential tails of the alpha and beta curves of the TP model.
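
    To make the alpha-plot geometry concrete, the toy simulation below draws age-at-division as a fixed deterministic period plus an exponentially distributed stochastic phase (a constant transition probability per unit time), which is exactly what produces a horizontal segment followed by a linear semi-log tail; all parameters are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy TP-model cell cycle: a fixed deterministic phase plus an exponentially
        # distributed stochastic phase (constant transition probability per unit time).
        t_det, rate, n_cells = 2.0, 1.5, 10_000
        division_age = t_det + rng.exponential(1.0 / rate, n_cells)

        ages = np.linspace(0.0, 6.0, 61)
        percent_undivided = np.array([(division_age > a).mean() * 100 for a in ages])
        # On a semi-log alpha plot this stays at 100% until t_det and then falls along
        # a straight line whose slope is set by the transition rate.
        print(list(zip(ages[::15], np.round(percent_undivided[::15], 1))))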

  2. Accuracy of Physical Examination, Ankle-Brachial Index, and Ultrasonography in the Diagnosis of Arterial Injury in Patients With Penetrating Extremity Trauma: A Systematic Review and Meta-analysis.

    PubMed

    deSouza, Ian S; Benabbas, Roshanak; McKee, Sean; Zangbar, Bardiya; Jain, Ashika; Paladino, Lorenzo; Boudourakis, Leon; Sinert, Richard

    2017-08-01

    Penetrating Extremity Trauma (PET) may result in arterial injury, a rare but limb- and life-threatening surgical emergency. Timely, accurate diagnosis is essential for potential intervention in order to prevent significant morbidity. Using a systematic review/meta-analytic approach, we determined the utility of physical examination, Ankle-Brachial Index (ABI), and Ultrasonography (US) in the diagnosis of arterial injury in emergency department (ED) patients who have sustained PET. We applied a test-treatment threshold model to determine which evaluations may obviate CT Angiography (CTA). We searched PubMed, Embase, and Scopus from inception to November 2016 for studies of ED patients with PET. We included studies on adult and pediatric subjects. We defined the reference standard to include CTA, catheter angiography, or surgical exploration. When low-risk patients did not undergo the reference standard, trials must have specified that patients were observed for at least 24 hours. We used the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS-2) to evaluate bias and applicability of the included studies. We calculated positive and negative likelihood ratios (LR+ and LR-) of physical examination ("hard signs" of vascular injury), US, and ABI. Using established CTA test characteristics (sensitivity = 96.2%, specificity = 99.2%) and applying the Pauker-Kassirer method, we developed a test-treatment threshold model (testing threshold = 0.14%, treatment threshold = 72.9%). We included eight studies (n = 2,161, arterial injury prevalence = 15.5%). Studies had variable quality with most at high risk for partial and double verification bias. Some studies investigated multiple index tests: physical examination (hard signs) in three studies (n = 1,170), ABI in five studies (n = 1,040), and US in four studies (n = 173). Due to high heterogeneity (I² > 75%) of the results, we could not calculate LR+ or LR- for hard signs or LR+ for ABI. The weighted prevalence of arterial injury for ABI was 14.3% and LR- was 0.59 (95% confidence interval [CI] = 0.48-0.71) resulting in a posttest probability of 9% for arterial injury. Ultrasonography had weighted prevalence of 18.9%, LR+ of 35.4 (95% CI = 8.3-151), and LR- of 0.24 (95% CI = 0.08-0.72); posttest probabilities for arterial injury were 89% and 5% after positive or negative US, respectively. The posttest probability of arterial injury with positive US (89%) exceeded the CTA treatment threshold (72.9%). The posttest probabilities of arterial injury with negative US (5%) and normal ABI (9%) exceeded the CTA testing threshold (0.14%). Normal examination (no hard or soft signs) with normal ABI in combination had LR- of 0.01 (95% CI = 0.0-0.10) resulting in an arterial injury posttest probability of 0%. In PET patients, positive US may obviate CTA. In patients with a normal examination (no hard or soft signs) and a normal ABI, arterial injury can be ruled out. However, a normal ABI or negative US cannot independently exclude arterial injury. Due to high study heterogeneity, we cannot make recommendations when hard signs are present or absent or when ABI is abnormal. In these situations, one should use clinical judgment to determine the need for further observation, CTA or catheter angiography, or surgical exploration. © 2017 by the Society for Academic Emergency Medicine.
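
    The posttest probabilities quoted above follow from the odds form of Bayes' theorem applied to the pooled likelihood ratios. A minimal Python sketch, using the prevalence and LR values reported in the abstract:

```python
# Sketch of the odds form of Bayes' theorem used to turn pooled likelihood
# ratios into posttest probabilities (prevalence and LR values are taken
# from the abstract; the threshold figures come from the Pauker-Kassirer
# test-treatment model).
def posttest_probability(pretest_p: float, lr: float) -> float:
    """Pretest odds * LR = posttest odds, then convert back to a probability."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Ultrasonography example: weighted prevalence 18.9%, LR+ = 35.4, LR- = 0.24.
print(posttest_probability(0.189, 35.4))   # ~0.89 after a positive US
print(posttest_probability(0.189, 0.24))   # ~0.05 after a negative US
```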

  3. Autonomous learning derived from experimental modeling of physical laws.

    PubMed

    Grabec, Igor

    2013-05-01

    This article deals with experimental description of physical laws by probability density function of measured data. The Gaussian mixture model specified by representative data and related probabilities is utilized for this purpose. The information cost function of the model is described in terms of information entropy by the sum of the estimation error and redundancy. A new method is proposed for searching the minimum of the cost function. The number of the resulting prototype data depends on the accuracy of measurement. Their adaptation resembles a self-organized, highly non-linear cooperation between neurons in an artificial NN. A prototype datum corresponds to the memorized content, while the related probability corresponds to the excitability of the neuron. The method does not include any free parameters except objectively determined accuracy of the measurement system and is therefore convenient for autonomous execution. Since representative data are generally less numerous than the measured ones, the method is applicable for a rather general and objective compression of overwhelming experimental data in automatic data-acquisition systems. Such compression is demonstrated on analytically determined random noise and measured traffic flow data. The flow over a day is described by a vector of 24 components. The set of 365 vectors measured over one year is compressed by autonomous learning to just 4 representative vectors and related probabilities. These vectors represent the flow in normal working days and weekends or holidays, while the related probabilities correspond to relative frequencies of these days. This example reveals that autonomous learning yields a new basis for interpretation of representative data and the optimal model structure. Copyright © 2012 Elsevier Ltd. All rights reserved.
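
    As a rough stand-in for the compression described above, the sketch below fits Gaussian mixture models of increasing size to synthetic daily traffic-flow vectors and keeps the one with the lowest BIC. The paper's own criterion is an information-entropy cost function with no free parameters; BIC, the synthetic data, and the component range are assumptions made here only for illustration.

```python
# Illustrative stand-in for the data compression described above: fit
# Gaussian mixtures of increasing size to daily flow-like vectors and keep
# the model with the lowest BIC (a substitute for the paper's entropy cost).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic stand-in for 365 daily flow vectors with 24 hourly components:
# a "working day" pattern and a flatter "weekend" pattern plus noise.
hours = np.arange(24)
workday = (100 + 80 * np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
               + 80 * np.exp(-0.5 * ((hours - 17) / 2.0) ** 2))
weekend = 60 + 20 * np.exp(-0.5 * ((hours - 14) / 4.0) ** 2)
days = np.vstack([workday + rng.normal(0, 10, 24) for _ in range(260)] +
                 [weekend + rng.normal(0, 10, 24) for _ in range(105)])

best = min((GaussianMixture(n_components=k, random_state=0).fit(days)
            for k in range(1, 8)),
           key=lambda gm: gm.bic(days))
print("selected components:", best.n_components)
print("component weights (relative day frequencies):", best.weights_.round(2))
```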

  4. Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.

    PubMed

    Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2015-10-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.

  5. Flight Investigation of the Stability and Control Characteristics of a 0.13-Scale Model of the Convair XFY-1 Vertically Rising Airplane During Constant-Altitude Transitions, TED No. NACA DE 368

    NASA Technical Reports Server (NTRS)

    Lovell, Powell M., Jr.; Kibry, Robert H.; Smith, Charles C., Jr.

    1953-01-01

    An investigation is being conducted to determine the dynamic stability and control characteristics of a 0.13-scale flying model of the Convair XFY-1 vertically rising airplane. This paper presents the results of flight tests to determine the stability and control characteristics of the model during constant-altitude slow transitions from hovering to normal unstalled forward flight. The tests indicated that the airplane can be flown through the transition range fairly easily, although some difficulty will probably be encountered in controlling the yawing motions at angles of attack between about 60° and 40°. An increase in the size of the vertical tail will not materially improve the controllability of the yawing motions in this range of angle of attack, but the use of a yaw damper will make the yawing motions easy to control throughout the entire transitional flight range. The tests also indicated that the airplane can probably be flown sideways satisfactorily at speeds up to approximately 33 knots (full scale) with the normal control system and up to approximately 37 knots (full scale) with both elevons and rudders rigged to move differentially for roll control. At sideways speeds above these values, the airplane will have a strong tendency to diverge uncontrollably in roll.

  6. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
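
    The central point, that the variance of the sample variance depends on the fourth central moment and not only on σ², can be illustrated with a small plug-in calculation (a simplification: the paper derives unbiased estimators based on h-statistics rather than the naive plug-in moments used here).

```python
# Why the normal assumption matters when weighting variance estimates: the
# variance of the sample variance depends on the fourth central moment mu4.
# The plug-in estimate below is only an illustration; the paper uses
# unbiased h-statistics instead.
import numpy as np

def var_of_sample_variance(x: np.ndarray) -> float:
    """Plug-in estimate of Var(s^2) = (1/n) * (mu4 - (n-3)/(n-1) * mu2^2)."""
    n = x.size
    centered = x - x.mean()
    mu2 = np.mean(centered ** 2)
    mu4 = np.mean(centered ** 4)
    return (mu4 - (n - 3.0) / (n - 1.0) * mu2 ** 2) / n

rng = np.random.default_rng(2)
normal = rng.normal(size=200)
# A skewed, binomial-like amplitude distribution (few release sites):
skewed = rng.binomial(n=3, p=0.2, size=200) * 1.0

for name, sample in [("normal", normal), ("skewed", skewed)]:
    mu2 = np.var(sample)
    gaussian_formula = 2.0 * mu2 ** 2 / (sample.size - 1)  # valid only if normal
    print(name, var_of_sample_variance(sample), gaussian_formula)
```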

  7. Dissociative chemisorption of methane on metal surfaces: Tests of dynamical assumptions using quantum models and ab initio molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, Bret, E-mail: jackson@chem.umass.edu; Nattino, Francesco; Kroes, Geert-Jan

    The dissociative chemisorption of methane on metal surfaces is of great practical and fundamental importance. Not only is it the rate-limiting step in the steam reforming of natural gas, the reaction exhibits interesting mode-selective behavior and a strong dependence on the temperature of the metal. We present a quantum model for this reaction on Ni(100) and Ni(111) surfaces based on the reaction path Hamiltonian. The dissociative sticking probabilities computed using this model agree well with available experimental data with regard to variation with incident energy, substrate temperature, and the vibrational state of the incident molecule. We significantly expand the vibrational basis set relative to earlier studies, which allows reaction probabilities to be calculated for doubly excited initial vibrational states, though it does not lead to appreciable changes in the reaction probabilities for singly excited initial states. Sudden models used to treat the center of mass motion parallel to the surface are compared with results from ab initio molecular dynamics and found to be reasonable. Similar comparisons for molecular rotation suggest that our rotationally adiabatic model is incorrect, and that sudden behavior is closer to reality. Such a model is proposed and tested. A model for predicting mode-selective behavior is tested, with mixed results, though we find it is consistent with experimental studies of normal vs. total (kinetic) energy scaling. Models for energy transfer into lattice vibrations are also examined.

  8. Radiobiological Determination of Dose Escalation and Normal Tissue Toxicity in Definitive Chemoradiation Therapy for Esophageal Cancer

    PubMed Central

    Warren, Samantha; Partridge, Mike; Carrington, Rhys; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A.

    2014-01-01

    Purpose This study investigated the trade-off in tumor coverage and organ-at-risk sparing when applying dose escalation for concurrent chemoradiation therapy (CRT) of mid-esophageal cancer, using radiobiological modeling to estimate local control and normal tissue toxicity. Methods and Materials Twenty-one patients with mid-esophageal cancer were selected from the SCOPE1 database (International Standard Randomised Controlled Trials number 47718479), with a mean planning target volume (PTV) of 327 cm3. A boost volume, PTV2 (GTV + 0.5 cm margin), was created. Radiobiological modeling of tumor control probability (TCP) estimated the dose required for a clinically significant (+20%) increase in local control as 62.5 Gy/25 fractions. A RapidArc (RA) plan with a simultaneously integrated boost (SIB) to PTV2 (RA62.5) was compared to a standard dose plan of 50 Gy/25 fractions (RA50). Dose-volume metrics and estimates of normal tissue complication probability (NTCP) for heart and lungs were compared. Results Clinically acceptable dose escalation was feasible for 16 of 21 patients, with significant gains (>18%) in tumor control from 38.2% (RA50) to 56.3% (RA62.5), and only a small increase in predicted toxicity: median heart NTCP 4.4% (RA50) versus 5.6% (RA62.5) P<.001 and median lung NTCP 6.5% (RA50) versus 7.5% (RA62.5) P<.001. Conclusions Dose escalation to the GTV to improve local control is possible when overlap between PTV and organ-at-risk (<8% heart volume and <2.5% lung volume overlap for this study) generates only negligible increase in lung or heart toxicity. These predictions from radiobiological modeling should be tested in future clinical trials. PMID:25304796

  9. Probability of regenerating a normal limb after bite injury in the Mexican axolotl (Ambystoma mexicanum)

    PubMed Central

    Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L.

    2014-01-01

    Abstract Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls (Ambystoma mexicanum) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary-housed males and group-housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury, and not abnormal development, probably explains these limb defects, because limbs regenerated with normal anatomy after rostral amputations were performed. We infer that only 43% of AGSC larvae will present four anatomically normal-looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury. PMID:25745564

  10. Critical spreading dynamics of parity conserving annihilating random walks with power-law branching

    NASA Astrophysics Data System (ADS)

    Laise, T.; dos Anjos, F. C.; Argolo, C.; Lyra, M. L.

    2018-09-01

    We investigate the critical spreading of the parity conserving annihilating random walks model with Lévy-like branching. The random walks are considered to perform normal diffusion with probability p on the sites of a one-dimensional lattice, annihilating in pairs by contact. With probability 1 - p, each particle can also produce two offspring, which are placed at a distance r from the original site following a power-law Lévy-like distribution P(r) ∝ 1/r^α. We perform numerical simulations starting from a single particle. A finite-time scaling analysis is employed to locate the critical diffusion probability pc below which a finite density of particles is developed in the long-time limit. Further, we estimate the spreading dynamical exponents related to the increase of the average number of particles at the critical point and its respective fluctuations. The critical exponents deviate from those of the counterpart model with short-range branching for small values of α. The numerical data suggest that continuously varying spreading exponents set in while the branching process still results in diffusive-like spreading.
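
    The Lévy-like branching step P(r) ∝ 1/r^α can be sampled by inverse-transform sampling of a Pareto variate. The sketch below is illustrative only: the cutoff r_min = 1 and the rounding to the nearest lattice site are assumptions, not details taken from the paper.

```python
# Inverse-transform sampling of a Levy-like branching distance r >= 1 with
# density proportional to r**(-alpha) (alpha > 1).  r_min = 1 and rounding
# to the nearest lattice site are illustrative assumptions.
import numpy as np

def branching_distance(alpha, size, rng):
    """Sample r >= 1 with P(r) ~ r**(-alpha) via the inverse CDF."""
    u = rng.random(size)
    r = u ** (-1.0 / (alpha - 1.0))     # continuous Pareto with index alpha - 1
    return np.maximum(1, np.rint(r)).astype(int)

rng = np.random.default_rng(3)
for alpha in (2.5, 3.5, 5.0):
    r = branching_distance(alpha, 100_000, rng)
    print(f"alpha={alpha}: mean jump {r.mean():.2f} sites, "
          f"P(jump > 10 sites) = {np.mean(r > 10):.4f}")
```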

  11. The Quest for Evidence for Proton Therapy: Model-Based Approach and Precision Medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widder, Joachim, E-mail: j.widder@umcg.nl; Schaaf, Arjen van der; Lambin, Philippe

    Purpose: Reducing dose to normal tissues is the advantage of protons versus photons. We aimed to describe a method for translating this reduction into a clinically relevant benefit. Methods and Materials: Dutch scientific and health care governance bodies have recently issued landmark reports regarding generation of relevant evidence for new technologies in health care including proton therapy. An approach based on normal tissue complication probability (NTCP) models has been adopted to select patients who are most likely to experience fewer (serious) adverse events achievable by state-of-the-art proton treatment. Results: By analogy with biologically targeted therapies, the technology needs to be tested in enriched cohorts of patients exhibiting the decisive predictive marker: difference in normal tissue dosimetric signatures between proton and photon treatment plans. Expected clinical benefit is then estimated by virtue of multifactorial NTCP models. In this sense, high-tech radiation therapy falls under precision medicine. As a consequence, randomizing nonenriched populations between photons and protons is predictably inefficient and likely to produce confusing results. Conclusions: Validating NTCP models in appropriately composed cohorts treated with protons should be the primary research agenda leading to urgently needed evidence for proton therapy.

  12. Normal tissue complication probability model parameter estimation for xerostomia in head and neck cancer patients based on scintigraphy and quality of life assessments

    PubMed Central

    2012-01-01

    Background With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman–Kutcher–Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Methods Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson’s chi-squared test, Nagelkerke’s R2, the area under the receiver operating characteristic curve, and the Hosmer–Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson’s chi-squared test was used to test the goodness of fit and association. Results Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in 50% probability of a complication (TD50) and the slope of the dose–response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with a NPV of 100%, using either the QoL or SEF dataset. Conclusions Our study shows the agreement between the NTCP parameter modeling based on SEF and QoL data, which gave a NPV of 100% with each dataset, and the QUANTEC guidelines, thus validating the cut-off values of 20 and 25 Gy. Based on these results, we believe that the QUANTEC 25/20-Gy spared-gland mean-dose guidelines are clinically useful for avoiding xerostomia in the HN cohort. PMID:23206972
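
    For reference, the LKB calculation used above reduces, for n = 1, to a probit function of the mean parotid dose. The sketch below plugs the SEF- and QoL-based parameter pairs from the abstract into that formula; the example dose-volume histogram is invented purely for illustration.

```python
# Sketch of the Lyman-Kutcher-Burman (LKB) NTCP calculation: with n = 1 the
# generalized EUD reduces to the mean dose.  TD50 and m below are the fitted
# values quoted in the abstract; the differential DVH is hypothetical.
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses, volumes, td50, m, n=1.0):
    """LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50))."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                           # fractional volumes
    geud = np.sum(v * np.asarray(doses, float) ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)

# Hypothetical differential DVH of a parotid gland (dose bins in Gy, volumes).
dose_bins = [5, 15, 25, 35, 45, 55]
vol_bins = [10, 15, 25, 25, 15, 10]

print(lkb_ntcp(dose_bins, vol_bins, td50=43.6, m=0.18))  # SEF-based parameters
print(lkb_ntcp(dose_bins, vol_bins, td50=44.1, m=0.11))  # QoL-based parameters
```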

  13. Normal tissue complication probability model parameter estimation for xerostomia in head and neck cancer patients based on scintigraphy and quality of life assessments.

    PubMed

    Lee, Tsair-Fwu; Chao, Pei-Ju; Wang, Hung-Yu; Hsu, Hsuan-Chih; Chang, PaoShu; Chen, Wen-Cheng

    2012-12-04

    With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman-Kutcher-Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson's chi-squared test, Nagelkerke's R2, the area under the receiver operating characteristic curve, and the Hosmer-Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson's chi-squared test was used to test the goodness of fit and association. Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in 50% probability of a complication (TD50) and the slope of the dose-response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with a NPV of 100%, using either the QoL or SEF dataset. Our study shows the agreement between the NTCP parameter modeling based on SEF and QoL data, which gave a NPV of 100% with each dataset, and the QUANTEC guidelines, thus validating the cut-off values of 20 and 25 Gy. Based on these results, we believe that the QUANTEC 25/20-Gy spared-gland mean-dose guidelines are clinically useful for avoiding xerostomia in the HN cohort.

  14. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

    A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of non-ideal flow systems.
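
    The comparison by mean absolute deviation (MAD) can be reproduced in outline as follows. The sketch uses a plain two-parameter gamma fit by moment matching on synthetic tracer data, which is a simplification of the paper's jointly combined four-parameter gamma RTD.

```python
# Comparing a gamma-shaped residence-time distribution (RTD) with a measured
# tracer curve by mean absolute deviation (MAD).  The synthetic tracer data
# and the two-parameter moment-matched fit are stand-ins for the paper's
# jointly combined four-parameter gamma RTD.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(4)
t = np.linspace(0.1, 30.0, 150)

# Synthetic normalized tracer RTD: a noisy gamma-like breakthrough curve.
observed = stats.gamma.pdf(t, a=3.0, scale=2.5) + rng.normal(0, 0.003, t.size)
observed = np.clip(observed, 0.0, None)
observed /= trapezoid(observed, t)            # normalize to unit area

# Two-parameter gamma RTD fitted by moment matching.
mean_t = trapezoid(t * observed, t)
var_t = trapezoid((t - mean_t) ** 2 * observed, t)
a_hat, scale_hat = mean_t ** 2 / var_t, var_t / mean_t
fitted = stats.gamma.pdf(t, a=a_hat, scale=scale_hat)

mad = np.mean(np.abs(fitted - observed))
print(f"fitted shape = {a_hat:.2f}, scale = {scale_hat:.2f}, MAD = {mad:.4f}")
```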

  15. Random-Walk Type Model with Fat Tails for Financial Markets

    NASA Astrophysics Data System (ADS)

    Matuttis, Hans-Geors

    Starting from the random-walk model, practices of financial markets are incorporated into the random walk so that fat-tailed distributions like those in the high-frequency data of the S&P 500 index are reproduced, even though the individual mechanisms are modeled with normally distributed data. The incorporation of local correlation narrows the distribution for "frequent" events, whereas global correlations due to technical analysis lead to fat tails. Delay of market transactions in the trading process shifts the fat-tail probabilities downwards. Such an inclusion of reactions to market fluctuations leads to mini-trends which are distributed with unit variance.

  16. Performance of synchronous optical receivers using atmospheric compensation techniques.

    PubMed

    Belmonte, Aniceto; Khan, Joseph

    2008-09-01

    We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.

  17. Activated adsorption of methane on clean and oxygen-modified Pt{111} and Pd{110}

    NASA Astrophysics Data System (ADS)

    Valden, M.; Pere, J.; Hirsimäki, M.; Suhonen, S.; Pessa, M.

    1997-04-01

    Activated adsorption of CH4 on clean and oxygen-modified Pt{111} and Pd{110} has been studied using molecular beam surface scattering. The absolute dissociation probability of CH4 was measured as a function of the incident normal energy (E) and the surface temperature (Ts). The results from clean Pt{111} and Pd{110} are consistent with a direct dissociation mechanism. The dissociative chemisorption dynamics of CH4 is addressed by using quantum mechanical and statistical models. The influence of adsorbed oxygen on the dissociative adsorption of CH4 on both Pt{111} and Pd{110} shows that the dissociation probability decreases linearly with increasing oxygen coverage.

  18. MASTER: a model to improve and standardize clinical breakpoints for antimicrobial susceptibility testing using forecast probabilities.

    PubMed

    Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael

    2017-09-01

    The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
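
    The categorization-error idea can be sketched as follows: if true zone diameters follow a two-component normal mixture and observed diameters add normally distributed methodological variation, the error rate is the mixture mass that lands on the wrong side of the breakpoint. All numerical values in this sketch (component means, SDs, weights, breakpoint) are invented and do not come from the paper.

```python
# Sketch of methodological categorization error: true zone diameters follow
# a two-component normal mixture, observed diameters add normal measurement
# variation, and the error rate is the probability mass that ends up on the
# wrong side of the clinical breakpoint.  All numbers are illustrative.
from scipy.stats import norm

# True-population mixture: resistant and susceptible sub-populations.
components = [                      # (weight, mean diameter mm, SD mm)
    (0.3, 12.0, 2.0),               # resistant
    (0.7, 25.0, 3.0),               # susceptible
]
method_sd = 1.2                     # methodological (measurement) SD, mm
breakpoint = 18.0                   # call susceptible if observed >= 18 mm

error = 0.0
for weight, mu, sd in components:
    obs_sd = (sd ** 2 + method_sd ** 2) ** 0.5
    if mu < breakpoint:             # truly resistant: error if observed >= bp
        error += weight * norm.sf(breakpoint, loc=mu, scale=obs_sd)
    else:                           # truly susceptible: error if observed < bp
        error += weight * norm.cdf(breakpoint, loc=mu, scale=obs_sd)

print(f"expected categorization error rate: {100 * error:.2f}%")
```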

  19. Evaluation of tranche in securitization and long-range Ising model

    NASA Astrophysics Data System (ADS)

    Kitsukawa, K.; Mori, S.; Hisakado, M.

    2006-08-01

    This econophysics work studies the long-range Ising model of a finite system with N spins, the exchange interaction J/N and the external field H as a model for a homogeneous credit portfolio of assets with default probability Pd and default correlation ρd. Based on the discussion of the (J,H) phase diagram, we develop a perturbative calculation method for the model and obtain explicit expressions for Pd, ρd and the normalization factor Z in terms of the model parameters N and J, H. The effect of the default correlation ρd on the probabilities P(Nd, ρd) for Nd defaults and on the cumulative distribution function D(i, ρd) is discussed. The latter represents the average loss rate of the “tranche” (layered structure) of securities (e.g. CDO), which are synthesized from a pool of many assets. We show that the expected loss rate of the subordinated tranche decreases with ρd and that of the senior tranche increases linearly, which are important in their pricing and ratings.

  20. 49 CFR 173.50 - Class 1-Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... insensitive that there is very little probability of initiation or of transition from burning to detonation under normal conditions of transport. 1 The probability of transition from burning to detonation is... contain only extremely insensitive detonating substances and which demonstrate a negligible probability of...

  1. Tangled nature model of evolutionary dynamics reconsidered: Structural and dynamical effects of trait inheritance

    NASA Astrophysics Data System (ADS)

    Andersen, Christian Walther; Sibani, Paolo

    2016-05-01

    Based on the stochastic dynamics of interacting agents which reproduce, mutate, and die, the tangled nature model (TNM) describes key emergent features of biological and cultural ecosystems' evolution. While trait inheritance is not included in many applications, i.e., the interactions of an agent and those of its mutated offspring are taken to be uncorrelated, in the family of TNMs introduced in this work correlations of varying strength are parametrized by a positive integer K . We first show that the interactions generated by our rule are nearly independent of K . Consequently, the structural and dynamical effects of trait inheritance can be studied independently of effects related to the form of the interactions. We then show that changing K strengthens the core structure of the ecology, leads to population abundance distributions better approximated by log-normal probability densities, and increases the probability that a species extant at time tw also survives at t >tw . Finally, survival probabilities of species are shown to decay as powers of the ratio t /tw , a so-called pure aging behavior usually seen in glassy systems of physical origin. We find a quantitative dynamical effect of trait inheritance, namely, that increasing the value of K numerically decreases the decay exponent of the species survival probability.

  2. Tangled nature model of evolutionary dynamics reconsidered: Structural and dynamical effects of trait inheritance.

    PubMed

    Andersen, Christian Walther; Sibani, Paolo

    2016-05-01

    Based on the stochastic dynamics of interacting agents which reproduce, mutate, and die, the tangled nature model (TNM) describes key emergent features of biological and cultural ecosystems' evolution. While trait inheritance is not included in many applications, i.e., the interactions of an agent and those of its mutated offspring are taken to be uncorrelated, in the family of TNMs introduced in this work correlations of varying strength are parametrized by a positive integer K. We first show that the interactions generated by our rule are nearly independent of K. Consequently, the structural and dynamical effects of trait inheritance can be studied independently of effects related to the form of the interactions. We then show that changing K strengthens the core structure of the ecology, leads to population abundance distributions better approximated by log-normal probability densities, and increases the probability that a species extant at time t_{w} also survives at t>t_{w}. Finally, survival probabilities of species are shown to decay as powers of the ratio t/t_{w}, a so-called pure aging behavior usually seen in glassy systems of physical origin. We find a quantitative dynamical effect of trait inheritance, namely, that increasing the value of K numerically decreases the decay exponent of the species survival probability.

  3. An evaluation of procedures to estimate monthly precipitation probabilities

    NASA Astrophysics Data System (ADS)

    Legates, David R.

    1991-01-01

    Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
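
    A transform-normal calculation of the kind evaluated above can be sketched with the standard Box-Cox transform (the paper uses a modified version, so this is only an approximation of the approach): fit a normal distribution in transformed space and read off exceedance probabilities.

```python
# Sketch of a transform-normal approach for monthly precipitation: apply a
# Box-Cox transform, fit a normal distribution in transformed space, and
# evaluate exceedance probabilities.  scipy's standard Box-Cox stands in for
# the paper's modified version; the synthetic data are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
monthly_totals = rng.gamma(shape=2.0, scale=40.0, size=100)  # mm, synthetic

transformed, lam = stats.boxcox(monthly_totals)     # lambda estimated by MLE
mu, sigma = transformed.mean(), transformed.std(ddof=1)

def exceedance_probability(threshold_mm: float) -> float:
    """P(monthly total > threshold) under the fitted transform-normal model."""
    z = (stats.boxcox(threshold_mm, lmbda=lam) - mu) / sigma
    return stats.norm.sf(z)

print(f"lambda = {lam:.2f}")
print(f"P(month > 150 mm) = {exceedance_probability(150.0):.3f}")
```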

  4. Soccer Matches as Experiments - How Often Does the 'Best' Team Win?

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald K.; Freeman, G. H.

    2009-01-01

    Models in which the number of goals scored by a team in a soccer match follows a Poisson distribution, or a closely related one, have been widely discussed. We here consider a soccer match as an experiment to assess which of two teams is superior and examine the probability that the outcome of the experiment (match) truly represents the relative abilities of the two teams. Given a final score it is possible, by using a Bayesian approach, to quantify the probability that it was or was not the case that the best team won. For typical scores, the probability of a misleading result is significant. Modifying the rules of the game to increase the typical number of goals scored would improve the situation, but a level of confidence that would normally be regarded as satisfactory could not be obtained unless the character of the game were radically changed.
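
    One simple way to make the Bayesian calculation concrete is a conjugate Gamma-Poisson sketch: each team's scoring rate gets a Gamma prior, the observed goal count updates it, and Monte Carlo sampling estimates the probability that the winner's underlying rate really is the larger one. The prior (mean about 1.3 goals per match) and this whole formulation are illustrative assumptions, not the authors' exact model.

```python
# Illustrative Bayesian sketch: given a final score, how probable is it that
# the winning side is actually the stronger team?  Poisson goal counts with
# conjugate Gamma priors on the scoring rates; probability estimated by
# Monte Carlo.  Prior values are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(6)
PRIOR_SHAPE, PRIOR_RATE = 2.6, 2.0   # prior mean 1.3 goals per match (assumed)

def prob_better_team_won(goals_winner: int, goals_loser: int, n=200_000):
    # Posterior for a Poisson rate with Gamma(shape, rate) prior and one match:
    # Gamma(shape + goals, rate + 1); numpy's gamma uses scale = 1 / rate.
    lam_w = rng.gamma(PRIOR_SHAPE + goals_winner, 1.0 / (PRIOR_RATE + 1.0), n)
    lam_l = rng.gamma(PRIOR_SHAPE + goals_loser, 1.0 / (PRIOR_RATE + 1.0), n)
    return np.mean(lam_w > lam_l)

for score in [(1, 0), (2, 1), (3, 0), (4, 1)]:
    print(score, round(prob_better_team_won(*score), 2))
```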

  5. Estimation of transition probabilities of credit ratings

    NASA Astrophysics Data System (ADS)

    Peng, Gan Chew; Hin, Pooi Ah

    2015-12-01

    The present research is based on the quarterly credit ratings of ten companies over 15 years taken from the database of the Taiwan Economic Journal. The components of the vector m_i = (m_i1, m_i2, ..., m_i10) denote the credit ratings of the ten companies in the i-th quarter. The vector m_(i+1) in the next quarter is modelled to be dependent on the vector m_i via a conditional distribution which is derived from a 20-dimensional power-normal mixture distribution. The transition probability P_kl(i, j) for getting m_(i+1),j = l given that m_i,j = k is then computed from the conditional distribution. It is found that the variation of the transition probability P_kl(i, j) as i varies gives an indication of the possible transition of the credit rating of the j-th company in the near future.
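
    For comparison, the simplest count-based (maximum likelihood) estimate of a transition matrix from a rating sequence looks like the sketch below; the three-state rating scale and the example history are invented, and the paper's power-normal mixture approach is considerably richer.

```python
# Count-based (maximum likelihood) estimate of a rating transition matrix
# from one quarterly rating sequence.  The three-state scale and the example
# history are invented; they only illustrate the object being estimated.
import numpy as np

ratings = ["A", "B", "C"]                     # hypothetical rating scale
index = {r: i for i, r in enumerate(ratings)}

# One company's quarterly ratings over three years (illustrative sequence).
history = ["A", "A", "B", "B", "B", "A", "B", "C", "B", "B", "A", "A"]

counts = np.zeros((len(ratings), len(ratings)))
for current, nxt in zip(history[:-1], history[1:]):
    counts[index[current], index[nxt]] += 1

row_sums = counts.sum(axis=1, keepdims=True)
transition = np.divide(counts, row_sums,
                       out=np.zeros_like(counts), where=row_sums > 0)

print(np.round(transition, 2))                # rows: P(next rating | current)
```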

  6. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  7. Are Soccer Matches Badly Designed Experiments?

    NASA Technical Reports Server (NTRS)

    Skinner, G. K.; Freeman, G. H.

    2008-01-01

    Models in which the number of goals scored by a team in a soccer match follows a Poisson distribution, or a closely related one, have been widely discussed. We here consider a soccer match as an experiment to assess which of two teams is superior and examine the probability that the outcome of the experiment (match) truly represents the relative abilities of the two teams. Given a final score it is possible, by using a Bayesian approach, to quantify the probability that it was or was not the case that 'the best team won'. For typical scores, the probability of a misleading result is significant. Modifying the rules of the game to increase the typical number of goals scored would improve the situation, but a level of confidence that would normally be regarded as satisfactory could not be obtained unless the character of the game were radically changed.

  8. An Empirical Comparison of Selected Two-Sample Hypothesis Testing Procedures Which Are Locally Most Powerful Under Certain Conditions.

    ERIC Educational Resources Information Center

    Hoover, H. D.; Plake, Barbara

    The relative power of the Mann-Whitney statistic, the t-statistic, the median test, a test based on exceedances (A,B), and two special cases of (A,B) the Tukey quick test and the revised Tukey quick test, was investigated via a Monte Carlo experiment. These procedures were compared across four population probability models: uniform, beta, normal,…
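
    A Monte Carlo power comparison of the kind described above can be sketched in a few lines; the normal location-shift scenario, sample size, and shift used here are illustrative choices, not those of the study.

```python
# Monte Carlo power comparison: simulate two samples under a location shift
# and count how often the Mann-Whitney test and the t-test reject at
# alpha = 0.05.  Scenario parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, shift, alpha, reps = 20, 0.6, 0.05, 2000
rej_mw = rej_t = 0

for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(shift, 1.0, n)
    if stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
        rej_mw += 1
    if stats.ttest_ind(x, y).pvalue < alpha:
        rej_t += 1

print(f"Mann-Whitney power: {rej_mw / reps:.2f}")
print(f"t-test power:       {rej_t / reps:.2f}")
```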

  9. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
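
    The normative-modeling idea can be illustrated with plain PCA on synthetic feature vectors: learn a subspace from healthy controls and flag a test sample whose reconstruction residual is large relative to the controls. This stands in for, and greatly simplifies, the iterative target-specific subspace sampling described above.

```python
# Minimal normative-modeling sketch: fit PCA to healthy-control feature
# vectors, then flag a test sample whose residual (distance from its PCA
# reconstruction) is large relative to the controls.  Synthetic data; plain
# PCA stands in for the paper's iterative subspace sampling.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
# Normative "healthy" data with a low-dimensional structure plus noise.
latent = rng.normal(size=(200, 5))
loadings = rng.normal(size=(5, 50))
controls = latent @ loadings + 0.5 * rng.normal(size=(200, 50))

test = rng.normal(size=5) @ loadings + 0.5 * rng.normal(size=50)
test[10:15] += 8.0                                # simulated "lesion" signature

pca = PCA(n_components=0.95).fit(controls)        # keep 95% of the variance

def residual_norm(x: np.ndarray) -> float:
    """Distance of a sample from its reconstruction in the normative subspace."""
    recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
    return float(np.linalg.norm(x - recon.ravel()))

control_residuals = np.array([residual_norm(c) for c in controls])
threshold = np.percentile(control_residuals, 99)
print(f"test residual {residual_norm(test):.1f} vs "
      f"99th-percentile threshold {threshold:.1f}")
```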

  10. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  11. Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Bonta, Dacian Viorel

    Modeling of volume effects for treatment toxicity is paramount for optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and the pathological basis for this model and its relationship to other models are detailed. A review of the radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The method of fit for the model parameters was maximum likelihood estimation. Within the framework of the maximum likelihood approach I estimated the parameter uncertainties for each complication prediction model. The quality-of-fit was determined using the Aikaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance for the fitted parameter values. Computer simulation was also used to estimate the population size that generates a specific uncertainty level (3%) in the value of predicted complication probability. The same method was used to estimate the size of the patient population needed for accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 (for two parameter models) and 500 patients (for three parameter models) are needed for accurate parameter fit. Correlation of complication occurrence in patients was also investigated. The results suggest that complication outcomes are correlated in a patient, although the correlation coefficient is rather small.

  12. Pricing foreign equity option under stochastic volatility tempered stable Lévy processes

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoli; Zhuang, Xintian

    2017-10-01

    Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper substitutes the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct stochastic volatility tempered stable Lévy process (TSSV) models. The TSSV model framework permits the infinite-activity jump behavior of return dynamics and the time-varying volatility consistently observed in financial markets, by subordinating the tempered stable process to the stochastic volatility process, capturing the leptokurtosis, fat-tailedness and asymmetry features of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High-frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of the foreign equity option.

  13. An immune reaction may be necessary for cancer development

    PubMed Central

    Prehn, Richmond T

    2006-01-01

    Background The hypothesis of immunosurveillance suggests that new neoplasms arise very frequently, but most are destroyed almost at their inception by an immune response. Its correctness has been debated for many years. In its support, it has been shown that the incidences of many tumor types, though apparently not all, tend to be increased in immunodeficient animals or humans, but this observation does not end the debate. Alternative model There is an alternative to the surveillance hypothesis; numerous studies have shown that the effect of an immune reaction on a tumor is biphasic. For each tumor, there is some quantitatively low level of immune reaction that, relative to no reaction, is facilitating, perhaps even necessary for the tumor's growth in vivo. The optimum level of this facilitating reaction may often be less than the level of immunity that the tumor might engender in a normal subject. Conclusion The failure of a tumor to grow as well in the normal as it does in the immunosuppressed host is probably not caused by a lack of tumor-cell killing in the suppressed host. Instead, the higher level of immune response in a normal animal, even if it does not rise to tumor-inhibitory levels, probably gives less positive support to tumor growth. This seems more than a semantic distinction. PMID:16457723

  14. Statistical hypothesis tests of some micrometeorological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SethuRaman, S.; Tichler, J.

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.21 were normal to begin with, and those with 0.21 ...

  15. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    NASA Astrophysics Data System (ADS)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
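
    The model-selection step can be sketched by fitting the four candidate distributions to a positive-valued pollutant series and ranking them; the synthetic PM10-like data and the use of AIC as a single criterion are simplifications of the paper's five-criteria, weight-of-ranks procedure.

```python
# Fit four candidate distributions to a positive-valued pollutant series and
# rank them by AIC.  Synthetic PM10-like data; AIC is a stand-in for the
# paper's combination of five goodness-of-fit criteria.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
pm10 = rng.gamma(shape=3.0, scale=18.0, size=365)     # synthetic daily PM10

candidates = {
    "log-normal":  stats.lognorm,
    "exponential": stats.expon,
    "gamma":       stats.gamma,
    "weibull":     stats.weibull_min,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(pm10, floc=0)                    # fix location at zero
    loglik = np.sum(dist.logpdf(pm10, *params))
    aic = 2 * (len(params) - 1) - 2 * loglik           # loc fixed, not free
    results.append((aic, name))

for aic, name in sorted(results):
    print(f"{name:12s} AIC = {aic:.1f}")
```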

  16. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it makes it possible to extrapolate the probability of propagation of rockfalls either from partial data or purely theoretically. The propagation can be assumed to be frictional, which permits the propagation to be described on average by a line of kinetic energy corresponding to the loss of energy along the path. But the loss of energy can also be assumed to be a multiplicative process or a purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are either based on theoretical considerations or obtained by fitting. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.

  17. Diffusing diffusivity: Rotational diffusion in two and three dimensions

    NASA Astrophysics Data System (ADS)

    Jain, Rohit; Sebastian, K. L.

    2017-06-01

    We consider the problem of calculating the probability distribution function (pdf) of angular displacement for rotational diffusion in a crowded, rearranging medium. We use the diffusing diffusivity model and, following our previous work on translational diffusion [R. Jain and K. L. Sebastian, J. Phys. Chem. B 120, 3988 (2016)], we show that the problem can be reduced to that of calculating the survival probability of a particle undergoing Brownian motion in the presence of a sink. We use the approach to calculate the pdf for the rotational motion in two and three dimensions. We also propose new dimensionless, time-dependent parameters, α_rot,2D and α_rot,3D, which can be used to analyze experimental/simulation data to find the extent of deviation from the normal behavior, i.e., constant diffusivity, and obtain explicit analytical expressions for them within our model.

  18. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    PubMed

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.

  19. EUCLID: an outcome analysis tool for high-dimensional clinical studies

    NASA Astrophysics Data System (ADS)

    Gayou, Olivier; Parda, David S.; Miften, Moyed

    2007-03-01

    Treatment management decisions in three-dimensional conformal radiation therapy (3DCRT) and intensity-modulated radiation therapy (IMRT) are usually made based on the dose distributions in the target and surrounding normal tissue. These decisions may include, for example, the choice of one treatment over another and the level of tumour dose escalation. Furthermore, biological predictors such as tumour control probability (TCP) and normal tissue complication probability (NTCP), whose parameters available in the literature are only population-based estimates, are often used to assess and compare plans. However, a number of other clinical, biological and physiological factors also affect the outcome of radiotherapy treatment and are often not considered in the treatment planning and evaluation process. A statistical outcome analysis tool, EUCLID, for direct use by radiation oncologists and medical physicists was developed. The tool builds a mathematical model to predict an outcome probability based on a large number of clinical, biological, physiological and dosimetric factors. EUCLID can first analyse a large set of patients, such as from a clinical trial, to derive regression correlation coefficients between these factors and a given outcome. It can then apply such a model to an individual patient at the time of treatment to derive the probability of that outcome, allowing the physician to individualize the treatment based on medical evidence that encompasses a wide range of factors. The software's flexibility allows the clinicians to explore several avenues to select the best predictors of a given outcome. Its link to record-and-verify systems and data spreadsheets allows for a rapid and practical data collection and manipulation. A wide range of statistical information about the study population, including demographics and correlations between different factors, is available. A large number of one- and two-dimensional plots, histograms and survival curves allow for an easy visual analysis of the population. Several visual and analytical methods are available to quantify the predictive power of the multivariate regression model. The EUCLID tool can be readily integrated with treatment planning and record-and-verify systems.

  20. EUCLID: an outcome analysis tool for high-dimensional clinical studies.

    PubMed

    Gayou, Olivier; Parda, David S; Miften, Moyed

    2007-03-21

    Treatment management decisions in three-dimensional conformal radiation therapy (3DCRT) and intensity-modulated radiation therapy (IMRT) are usually made based on the dose distributions in the target and surrounding normal tissue. These decisions may include, for example, the choice of one treatment over another and the level of tumour dose escalation. Furthermore, biological predictors such as tumour control probability (TCP) and normal tissue complication probability (NTCP), whose parameters available in the literature are only population-based estimates, are often used to assess and compare plans. However, a number of other clinical, biological and physiological factors also affect the outcome of radiotherapy treatment and are often not considered in the treatment planning and evaluation process. A statistical outcome analysis tool, EUCLID, for direct use by radiation oncologists and medical physicists was developed. The tool builds a mathematical model to predict an outcome probability based on a large number of clinical, biological, physiological and dosimetric factors. EUCLID can first analyse a large set of patients, such as from a clinical trial, to derive regression correlation coefficients between these factors and a given outcome. It can then apply such a model to an individual patient at the time of treatment to derive the probability of that outcome, allowing the physician to individualize the treatment based on medical evidence that encompasses a wide range of factors. The software's flexibility allows the clinicians to explore several avenues to select the best predictors of a given outcome. Its link to record-and-verify systems and data spreadsheets allows for a rapid and practical data collection and manipulation. A wide range of statistical information about the study population, including demographics and correlations between different factors, is available. A large number of one- and two-dimensional plots, histograms and survival curves allow for an easy visual analysis of the population. Several visual and analytical methods are available to quantify the predictive power of the multivariate regression model. The EUCLID tool can be readily integrated with treatment planning and record-and-verify systems.

  1. Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana Kelly; Song-Hua Shen; Gary DeMoss

    2010-06-01

    Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.

  2. Integration of Geophysical Methods By A Generalised Probability Tomography Approach

    NASA Astrophysics Data System (ADS)

    Mauriello, P.; Patella, D.

    In modern science, the propensity interpretative approach stands on the assumption that any physical system consists of two kinds of reality: actual and potential. Geophysical data systems, too, have potentialities that extend far beyond the few actual models normally attributed to them. Indeed, any geophysical data set is in itself quite inherently ambiguous. Classical deterministic inversion, including tomography, usually forces a measured data set to collapse into a few rather subjective models based on some available a priori information. Classical interpretation is thus an intrinsically limited approach requiring a very deep logical extension. We think that a way to highlight a system's full potentiality is to introduce probability as the leading paradigm in dealing with field data systems. Probability tomography has recently been introduced as a completely new approach to data interpretation. It was originally formulated for the self-potential method and has since been extended to geoelectric, natural source electromagnetic induction, gravity and magnetic methods. Following the same rationale, in this paper we generalize the probability tomography theory to a generic geophysical anomaly vector field, including the treatment of scalar fields as a particular case. This generalization then makes it possible to address, for the first time, the problem of integrating different methods through a conjoint probability tomography imaging procedure. The aim is to infer the existence of an unknown buried object through the analysis of an ad hoc occurrence probability function, blending the physical messages brought forth by a set of singularly observed anomalies.

  3. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. Hence, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other variability sources. To capture the stochasticity of the calcium influx into the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined on the basis of the Normal and Logistic distributions.
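
    The abstract does not give the exact form of the fitted distribution, so the sketch below simply illustrates one plausible reading: drawing calcium-influx samples from a two-component mixture of Normal and Logistic distributions. The mixture weight and all parameters are placeholder assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_calcium_influx(n, w=0.6, mu_n=1.0, sd_n=0.2, loc_l=1.2, scale_l=0.1):
        """Draw n samples from a Normal/Logistic mixture (illustrative parameters)."""
        from_normal = rng.random(n) < w
        return np.where(from_normal,
                        rng.normal(mu_n, sd_n, n),
                        rng.logistic(loc_l, scale_l, n))

    influx = sample_calcium_influx(10_000)
    print(influx.mean(), influx.std())
    ```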

  4. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location, providing a measure of the algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location.
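
    As a toy illustration of that comparison, the snippet below fits a normal distribution to simulated terminal-position estimates and evaluates the closed-form KL divergence between two univariate Gaussians; the filtering scheme and the densities used in the report itself are not reproduced, and all numbers are assumptions.

    ```python
    import numpy as np

    def kl_normal(mu0, s0, mu1, s1):
        """KL( N(mu0, s0^2) || N(mu1, s1^2) ), closed form for univariate Gaussians."""
        return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

    rng = np.random.default_rng(2)
    true_terminal = 50.0                                    # hypothetical terminal coordinate
    estimates = true_terminal + rng.normal(0.0, 3.0, 500)   # simulated filter outputs

    mu_hat, s_hat = estimates.mean(), estimates.std(ddof=1)
    print(kl_normal(mu_hat, s_hat, true_terminal, 3.0))
    ```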

  5. Malignant induction probability maps for radiotherapy using X-ray and proton beams.

    PubMed

    Timlin, C; Houston, M; Jones, B

    2011-12-01

    The aim of this study was to display malignant induction probability (MIP) maps alongside dose distribution maps for radiotherapy using X-ray and charged particles such as protons. Dose distributions for X-rays and protons are used in an interactive MATLAB® program (MathWorks, Natick, MA). The MIP is calculated using a published linear quadratic model, which incorporates fractionation effects, cell killing and cancer induction as a function of dose, as well as relative biological effect. Two virtual situations are modelled: (a) a tumour placed centrally in a cubic volume of normal tissue and (b) the same tumour placed closer to the skin surface. The MIP is calculated for a variety of treatment field options. The results show that, for protons, the MIP increases with field numbers. In such cases, proton MIP can be higher than that for X-rays. Protons produce the lowest MIPs for superficial targets because of the lack of exit dose. The addition of a dose bath to all normal tissues increases the MIP by up to an order of magnitude. This exploratory study shows that it is possible to achieve three-dimensional displays of carcinogenesis risk. The importance of treatment geometry, including the length and volume of tissue traversed by each beam, can all influence MIP. Reducing the volume of tissue irradiated is advantageous, as reducing the number of cells at risk reduces the total MIP. This finding lends further support to the use of treatment gantries as well as the use of simpler field arrangements for particle therapy provided normal tissue tolerances are respected.

  6. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
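
    A rough sketch of the normalization step and of a distance-based classifier on the normalized features, on simulated data. The complex Gaussian model and the exact NPC distance measure of the paper are replaced here by real-valued stand-ins with a diagonal-Gaussian distance, so this only illustrates the general idea of classifying with relative magnitudes.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def normalize(x):
        """Keep only relative magnitudes: divide each feature vector by its Euclidean norm."""
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    # Simulated (real-valued) polarimetric feature vectors for two terrain classes
    grass = normalize(rng.normal([3.0, 1.0, 0.5], 0.3, (200, 3)))
    trees = normalize(rng.normal([1.0, 2.5, 1.5], 0.3, (200, 3)))

    stats = {name: (x.mean(axis=0), x.var(axis=0) + 1e-9)
             for name, x in {"grass": grass, "trees": trees}.items()}

    def classify(v):
        """Assign the class minimizing a diagonal-Gaussian distance on the normalized vector."""
        def dist(name):
            mu, var = stats[name]
            return float(np.sum((v - mu) ** 2 / var + np.log(var)))
        return min(stats, key=dist)

    print(classify(normalize(np.array([[2.8, 1.1, 0.6]]))[0]))
    ```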

  7. A Compendium of Wind Statistics and Models for the NASA Space Shuttle and Other Aerospace Vehicle Programs

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Adelfang, S. I.

    1998-01-01

    The wind profile with all of its variations with respect to altitude has been, is now, and will continue to be important for aerospace vehicle design and operations. Wind profile databases and models are used for the vehicle ascent flight design for structural wind loading, flight control systems, performance analysis, and launch operations. This report presents the evolution of wind statistics and wind models from the empirical scalar wind profile model established for the Saturn Program through the development of the vector wind profile model used for the Space Shuttle design to the variations of this wind modeling concept for the X-33 program. Because wind is a vector quantity, the vector wind models use the rigorous mathematical probability properties of the multivariate normal probability distribution. When the vehicle ascent steering commands (ascent guidance) are wind biased to the wind profile measured on the day-of-launch, ascent structural wind loads are reduced and launch probability is increased. This wind load alleviation technique is recommended in the initial phase of vehicle development. The vehicle must fly through the largest load allowable versus altitude to achieve its mission. The Gumbel extreme value probability distribution is used to obtain the probability of exceeding (or not exceeding) the load allowable. The time conditional probability function is derived from the Gumbel bivariate extreme value distribution. This time conditional function is used for calculation of wind loads persistence increments using 3.5-hour Jimsphere wind pairs. These increments are used to protect the commit-to-launch decision. Other topics presented include the Space Shuttle load-response to smoothed wind profiles, a new gust model, and advancements in wind profile measuring systems. From the lessons learned and knowledge gained from past vehicle programs, the development of future launch vehicles can be accelerated. However, new vehicle programs by their very nature will require specialized support for new databases and analyses for wind, atmospheric parameters (pressure, temperature, and density versus altitude), and weather. It is for this reason that project managers are encouraged to collaborate with natural environment specialists early in the conceptual design phase. Such action will give the lead time necessary to meet the natural environment design and operational requirements, and thus, reduce development costs.
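
    A small sketch, assuming illustrative Gumbel parameters, of how the probability of exceeding a load allowable could be evaluated with the Gumbel extreme value distribution named in the abstract; the location, scale, and allowable values are placeholders, not program data.

    ```python
    from scipy.stats import gumbel_r

    # Hypothetical Gumbel fit of the largest in-flight wind load per profile
    loc, scale = 100.0, 12.0        # location and scale (illustrative units)
    load_allowable = 140.0

    p_exceed = gumbel_r.sf(load_allowable, loc=loc, scale=scale)  # P(load > allowable)
    print(f"probability of exceeding the load allowable: {p_exceed:.4f}")
    ```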

  8. Investigation of possibility of surface rupture derived from PFDHA and calculation of surface displacement based on dislocation

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Irikura, K.

    2013-12-01

    The probability of surface rupture is important for configuring the seismic source, such as area sources or fault models, in a seismic hazard evaluation. In Japan, Takemura (1998) estimated this probability from historical earthquake data, and Kagawa et al. (2004) evaluated it from numerical simulations of surface displacements. The estimated probability follows a sigmoid curve and increases between Mj (the local magnitude defined and calculated by the Japan Meteorological Agency) = 6.5 and Mj = 7.0. The probability of surface rupture is also used in probabilistic fault displacement hazard analysis (PFDHA). The probability is determined from a compiled earthquake catalog in which events are classified into two categories: with or without surface rupture. Logistic regression is then performed on the classified earthquake data. Youngs et al. (2003), Ross and Moss (2011) and Petersen et al. (2011) present logistic curves of the probability of surface rupture for normal, reverse and strike-slip faults, respectively. Takao et al. (2013) shows the logistic curve derived from Japanese earthquake data only. The Japanese probability curve increases sharply over a narrow magnitude range in comparison with the other curves. In this study, we estimated the probability of surface rupture by applying logistic analysis to surface displacements derived from a displacement calculation. A source fault was defined according to the procedure of Kagawa et al. (2004), which determines a seismic moment from a magnitude and estimates the asperity area and the amount of slip. Strike-slip and reverse faults were considered as source faults, and the method of Wang et al. (2003) was applied for the calculations. The surface displacements for the defined source faults were calculated while varying the depth of the fault. A threshold of 5 cm of surface displacement was used to evaluate whether a rupture reaches the surface. We then carried out logistic regression on the calculated displacements classified by this threshold. The estimated probability curve showed a trend similar to the result of Takao et al. (2013). The probability for reverse faults is larger than that for strike-slip faults. PFDHA results, on the other hand, show different trends: the probability for reverse faults at higher magnitudes is lower than that for strike-slip and normal faults. Ross and Moss (2011) suggested that the sediment and/or rock above the fault compresses, so that the displacement does not fully reach the surface. The numerical theory applied in this study cannot handle complex initial situations such as topography.
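
    A hedged sketch of the logistic-regression step on synthetic data: magnitudes are paired with a rupture/no-rupture label generated here by thresholding a simulated surface displacement at 5 cm, mirroring the classification described above. The displacement model and all parameter values are illustrative assumptions, not the study's values.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    mags = rng.uniform(5.5, 7.5, 300)
    # Simulated peak surface displacement (cm), growing with magnitude plus scatter
    disp = 10 ** (1.2 * (mags - 6.5)) + rng.normal(0, 2.0, mags.size)
    ruptured = (disp >= 5.0).astype(int)      # 5 cm threshold, as in the study

    model = LogisticRegression().fit(mags.reshape(-1, 1), ruptured)
    for m in (6.0, 6.5, 7.0):
        p = model.predict_proba([[m]])[0, 1]
        print(f"Mj {m:.1f}: P(surface rupture) = {p:.2f}")
    ```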

  9. Functional Data Analysis Applied to Modeling of Severe Acute Mucositis and Dysphagia Resulting From Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dean, Jamie A., E-mail: jamie.dean@icr.ac.uk; Wong, Kee H.; Gay, Hiram

    Purpose: Current normal tissue complication probability modeling using logistic regression suffers from bias and high uncertainty in the presence of highly correlated radiation therapy (RT) dose data. This hinders robust estimates of dose-response associations and, hence, optimal normal tissue–sparing strategies from being elucidated. Using functional data analysis (FDA) to reduce the dimensionality of the dose data could overcome this limitation. Methods and Materials: FDA was applied to modeling of severe acute mucositis and dysphagia resulting from head and neck RT. Functional partial least squares regression (FPLS) and functional principal component analysis were used for dimensionality reduction of the dose-volume histogram data. The reduced dose data were input into functional logistic regression models (functional partial least squares–logistic regression [FPLS-LR] and functional principal component–logistic regression [FPC-LR]) along with clinical data. This approach was compared with penalized logistic regression (PLR) in terms of predictive performance and the significance of treatment covariate–response associations, assessed using bootstrapping. Results: The area under the receiver operating characteristic curve for the PLR, FPC-LR, and FPLS-LR models was 0.65, 0.69, and 0.67, respectively, for mucositis (internal validation) and 0.81, 0.83, and 0.83, respectively, for dysphagia (external validation). The calibration slopes/intercepts for the PLR, FPC-LR, and FPLS-LR models were 1.6/−0.67, 0.45/0.47, and 0.40/0.49, respectively, for mucositis (internal validation) and 2.5/−0.96, 0.79/−0.04, and 0.79/0.00, respectively, for dysphagia (external validation). The bootstrapped odds ratios indicated significant associations between RT dose and severe toxicity in the mucositis and dysphagia FDA models. Cisplatin was significantly associated with severe dysphagia in the FDA models. None of the covariates was significantly associated with severe toxicity in the PLR models. Dose levels greater than approximately 1.0 Gy/fraction were most strongly associated with severe acute mucositis and dysphagia in the FDA models. Conclusions: FPLS and functional principal component analysis marginally improved predictive performance compared with PLR and provided robust dose-response associations. FDA is recommended for use in normal tissue complication probability modeling.

  10. Functional Data Analysis Applied to Modeling of Severe Acute Mucositis and Dysphagia Resulting From Head and Neck Radiation Therapy.

    PubMed

    Dean, Jamie A; Wong, Kee H; Gay, Hiram; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Oh, Jung Hun; Apte, Aditya; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Deasy, Joseph O; Nutting, Christopher M; Gulliford, Sarah L

    2016-11-15

    Current normal tissue complication probability modeling using logistic regression suffers from bias and high uncertainty in the presence of highly correlated radiation therapy (RT) dose data. This hinders robust estimates of dose-response associations and, hence, optimal normal tissue-sparing strategies from being elucidated. Using functional data analysis (FDA) to reduce the dimensionality of the dose data could overcome this limitation. FDA was applied to modeling of severe acute mucositis and dysphagia resulting from head and neck RT. Functional partial least squares regression (FPLS) and functional principal component analysis were used for dimensionality reduction of the dose-volume histogram data. The reduced dose data were input into functional logistic regression models (functional partial least squares-logistic regression [FPLS-LR] and functional principal component-logistic regression [FPC-LR]) along with clinical data. This approach was compared with penalized logistic regression (PLR) in terms of predictive performance and the significance of treatment covariate-response associations, assessed using bootstrapping. The area under the receiver operating characteristic curve for the PLR, FPC-LR, and FPLS-LR models was 0.65, 0.69, and 0.67, respectively, for mucositis (internal validation) and 0.81, 0.83, and 0.83, respectively, for dysphagia (external validation). The calibration slopes/intercepts for the PLR, FPC-LR, and FPLS-LR models were 1.6/-0.67, 0.45/0.47, and 0.40/0.49, respectively, for mucositis (internal validation) and 2.5/-0.96, 0.79/-0.04, and 0.79/0.00, respectively, for dysphagia (external validation). The bootstrapped odds ratios indicated significant associations between RT dose and severe toxicity in the mucositis and dysphagia FDA models. Cisplatin was significantly associated with severe dysphagia in the FDA models. None of the covariates was significantly associated with severe toxicity in the PLR models. Dose levels greater than approximately 1.0 Gy/fraction were most strongly associated with severe acute mucositis and dysphagia in the FDA models. FPLS and functional principal component analysis marginally improved predictive performance compared with PLR and provided robust dose-response associations. FDA is recommended for use in normal tissue complication probability modeling. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
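
    A simplified stand-in for the FPC-LR pipeline, on synthetic dose-volume histogram curves: ordinary PCA plays the role of functional principal component analysis on the discretized curves, and the scores plus one clinical covariate enter a logistic regression. The FPLS variant, smoothing penalties, and bootstrap assessment used in the paper are omitted, and the data-generating mechanism below is an assumption for illustration only.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n, n_bins = 200, 60
    dvh = np.sort(rng.random((n, n_bins)), axis=1)[:, ::-1]   # monotone DVH-like curves
    cisplatin = rng.integers(0, 2, n)                          # illustrative clinical factor
    logit = 3 * dvh[:, 30] + 0.8 * cisplatin - 2.5             # synthetic outcome mechanism
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    scores = PCA(n_components=3).fit_transform(dvh)            # "functional" PC scores
    X = np.column_stack([scores, cisplatin])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    ```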

  11. Quantitative Lymphoscintigraphy to Predict the Possibility of Lymphedema Development After Breast Cancer Surgery: Retrospective Clinical Study.

    PubMed

    Kim, Paul; Lee, Ju Kang; Lim, Oh Kyung; Park, Heung Kyu; Park, Ki Deok

    2017-12-01

    To predict the probability of lymphedema development in breast cancer patients in the early post-operation stage, we investigated the ability of quantitative lymphoscintigraphic assessment. This retrospective study included 201 patients without lymphedema after unilateral breast cancer surgery. Lymphoscintigraphy was performed between 4 and 8 weeks after surgery to evaluate the lymphatic system in the early postoperative stage. Quantitative lymphoscintigraphy was performed using four methods: ratio of radiopharmaceutical clearance rate of the affected to normal hand; ratio of radioactivity of the affected to normal hand; ratio of radiopharmaceutical uptake rate of the affected to normal axilla (RUA); and ratio of radioactivity of the affected to normal axilla (RRA). During a 1-year follow-up, patients with a circumferential interlimb difference of 2 cm at any measurement location and a 200-mL interlimb volume difference were diagnosed with lymphedema. We investigated the difference in quantitative lymphoscintigraphic assessment between the non-lymphedema and lymphedema groups. Quantitative lymphoscintigraphic assessment revealed that the RUA and RRA were significantly lower in the lymphedema group than in the non-lymphedema group. After adjusting the model for all significant variables (body mass index, N-stage, T-stage, type of surgery, and type of lymph node surgery), RRA was associated with lymphedema (odds ratio=0.14; 95% confidence interval, 0.04-0.46; p=0.001). In patients in the early postoperative stage after unilateral breast cancer surgery, quantitative lymphoscintigraphic assessment can be used to predict the probability of developing lymphedema.

  12. TH-A-BRF-02: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - Modeling Tumor Evolution for Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Lee, CG; Chan, TCY

    2014-06-15

    Purpose: To develop mathematical models of tumor geometry changes under radiotherapy that may support future adaptive paradigms. Methods: A total of 29 cervical patients were scanned using MRI, once for planning and weekly thereafter for treatment monitoring. Using the tumor volumes contoured by a radiologist, three mathematical models were investigated based on the assumption of a stochastic process of tumor evolution. The “weekly MRI” model predicts tumor geometry for the following week from the last two consecutive MRI scans, based on the voxel transition probability. The other two models use only the first pair of consecutive MRI scans, and the transition probabilities were estimated via tumor type classified from the entire data set. The classification is based on either measuring the tumor volume (the “weekly volume” model), or implementing an auxiliary “Markov chain” model. These models were compared to a constant volume approach that represents the current clinical practice, using various model parameters; e.g., the threshold probability β converts the probability map into a tumor shape (larger threshold implies smaller tumor). Model performance was measured using the volume conformity index (VCI), i.e., the union of the actual target and modeled target volume squared divided by the product of these two volumes. Results: The “weekly MRI” model outperforms the constant volume model by 26% on average, and by 103% for the worst 10% of cases in terms of VCI under a wide range of β. The “weekly volume” and “Markov chain” models outperform the constant volume model by 20% and 16% on average, respectively. They also perform better than the “weekly MRI” model when β is large. Conclusion: It has been demonstrated that mathematical models can be developed to predict tumor geometry changes for cervical cancer undergoing radiotherapy. The models can potentially support the adaptive radiotherapy paradigm by reducing normal tissue dose. This research was supported in part by the Ontario Consortium for Adaptive Interventions in Radiation Oncology (OCAIRO) funded by the Ontario Research Fund (ORF) and the MITACS Accelerate Internship Program.
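
    The volume conformity index is computed below exactly as worded in the abstract (union of actual and modeled volumes, squared, divided by the product of the two volumes) for two illustrative random binary masks; if the intended definition is instead based on the intersection, only the marked line changes.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    actual = rng.random((40, 40, 40)) < 0.10    # illustrative actual tumor mask
    modeled = rng.random((40, 40, 40)) < 0.10   # illustrative predicted tumor mask

    v_actual = actual.sum()
    v_modeled = modeled.sum()
    v_union = np.logical_or(actual, modeled).sum()   # swap for logical_and if intersection is meant

    vci = v_union**2 / (v_actual * v_modeled)
    print(f"VCI = {vci:.3f}")
    ```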

  13. Size distribution of submarine landslides along the U.S. Atlantic margin

    USGS Publications Warehouse

    Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.

    2009-01-01

    Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km² and 2410 km² and volumes between 0.002 km³ and 179 km³. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km³ may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km³), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
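
    As a sketch of the distribution comparison, synthetic landslide volumes are fit with a log-normal distribution via scipy and checked with a Kolmogorov-Smirnov test; the size-dependent detection modification described in the abstract is not reproduced, and the simulated volumes are only loosely patterned on the reported central value.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    volumes = rng.lognormal(mean=np.log(0.86), sigma=1.5, size=500)  # km^3, illustrative

    shape, loc, scale = stats.lognorm.fit(volumes, floc=0)
    ks = stats.kstest(volumes, "lognorm", args=(shape, loc, scale))
    print(f"fitted median volume ~ {scale:.2f} km^3, KS p-value = {ks.pvalue:.2f}")
    ```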

  14. Cumulative detection probabilities and range accuracy of a pulsed Geiger-mode avalanche photodiode laser ranging system

    NASA Astrophysics Data System (ADS)

    Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan

    2017-10-01

    Cumulative pulse detection with an appropriate number of accumulated pulses and an appropriate threshold can improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse accumulation process, the cumulative detection probabilities and the factors that influence them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting them are discussed. The results show that cumulative pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulse detection deteriorates quickly. Range accuracy and precision are further important parameters for evaluating detection performance; echo intensity and pulse width are their main influencing factors, and higher accuracy and precision are obtained with stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width and an echo intensity larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
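
    A sketch, under a standard Poisson photon-statistics assumption, of how per-pulse Geiger-mode trigger probabilities could be accumulated over N pulses with a k-out-of-N threshold; the exact time-bin model of the paper is not reproduced, and all numbers below are illustrative.

    ```python
    from math import exp
    from scipy.stats import binom

    n_signal, n_noise = 2.0, 0.05     # mean primary electrons per gate (illustrative)
    N, k = 20, 5                      # accumulated pulses and detection threshold

    p_det_pulse = 1 - exp(-(n_signal + n_noise))   # per-pulse trigger prob. in the signal bin
    p_fa_pulse = 1 - exp(-n_noise)                 # per-pulse trigger prob. from noise alone

    p_det_cum = binom.sf(k - 1, N, p_det_pulse)    # P(at least k triggers in N pulses)
    p_fa_cum = binom.sf(k - 1, N, p_fa_pulse)
    print(f"cumulative detection {p_det_cum:.3f}, cumulative false alarm {p_fa_cum:.2e}")
    ```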

  15. A model for hematopoietic death in man from irradiation of bone marrow during radioimmunotherapy.

    PubMed

    Scott, B R; Dillehay, L E

    1990-11-01

    There are numerous institutions worldwide performing clinical trials of radioimmunotherapy (RIT) for cancer. For RIT, an exponentially decaying radionuclide is attached by using a chelating agent to a specific monoclonal or polyclonal tumour antibody (e.g. antiferritin IgG). The major limitation to RIT is toxicity to normal tissue in organs other than the one containing the tumour (e.g. bone marrow). The focus of this manuscript is on modelling the risk (or probability) of hematopoietic death in man for exponentially decaying patterns of high-energy beta irradiation (e.g. 90Y) of bone marrow by radioimmunoglobulin injected into the blood. The analytical solutions presented are only applicable to protocols for which significant uptake of radioactivity by the bone marrow does not occur, and only for high energy beta emitters. However, the generic equation used to obtain the analytical solutions is applicable to any continuous pattern of high energy beta irradiation. A model called the "normalized dose model" was used to generate calculated values for the LD50 as a function of the effective half-time for the radioimmunoglobulin in the blood. A less complicated empirical model was used to describe the calculated values. This model is presumed to be valid for effective half-times in blood of up to about 20 days. For longer effective half-times, the LD50 can be estimated using the normalized-dose model presented. In this manuscript, we also provide a modified Weibull model that allows estimation of the risk of hematopoietic death for single or multiple injections (in one cycle) of radioimmunoglobulin, for patients with normal susceptibility to irradiation and for patients with heightened susceptibility. With the modified Weibull model, the risk of hematopoietic death depends on the level of medical treatment provided to mitigate radiation injuries.

  16. Modelling volatility recurrence intervals in the Chinese commodity futures market

    NASA Astrophysics Data System (ADS)

    Zhou, Weijie; Wang, Zhengxin; Guo, Haiming

    2016-09-01

    The law of extreme event occurrence attracts much research. The volatility recurrence intervals of Chinese commodity futures market prices are studied: the results show that the probability distributions of the scaled volatility recurrence intervals have a uniform scaling curve for different thresholds q. So we can deduce the probability distribution of extreme events from normal events. The tail of a scaling curve can be well fitted by a Weibull form, which is significance-tested by KS measures. Both short-term and long-term memories are present in the recurrence intervals with different thresholds q, which denotes that the recurrence intervals can be predicted. In addition, similar to volatility, volatility recurrence intervals also have clustering features. Through Monte Carlo simulation, we artificially synthesise ARMA, GARCH-class sequences similar to the original data, and find out the reason behind the clustering. The larger the parameter d of the FIGARCH model, the stronger the clustering effect is. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration model (FIACD) to analyse the recurrence interval characteristics. The results indicated that the FIACD model may provide a method to analyse volatility recurrence intervals.

  17. Bayesian inference for the genetic control of water deficit tolerance in spring wheat by stochastic search variable selection.

    PubMed

    Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi

    2018-06-02

    Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy to develop tolerant cultivars and populations. In this study, generation means analysis to identify the genetic effects controlling grain yield inheritance in water deficit and normal conditions was considered as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best-fitted models using different generations obtained from two crosses under two water regimes in two growing seasons. The SSVS is used to evaluate the effect of each variable on the dependent variable via posterior variable inclusion probabilities. The model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by the main effects (additive and non-additive effects) and epistatic effects. The results demonstrate that breeding methods such as recurrent selection followed by the pedigree method, as well as hybrid production, can be useful to improve grain yield.

  18. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    PubMed

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

    The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This is complicated to answer due to many issues, one of which is the potential presence of correlation between injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models by analyzing the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependencies and non-normality in residual terms. The models are estimated using 2013 Virginia police reported two-vehicle head-on collision data, where exactly one driver is at-fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas, in 4% of the cases, the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While not-at-fault vehicle speed affects injury severity of at-fault driver, the effect is smaller than the effect of at-fault vehicle speed on at-fault injury outcome. Contrarily, and importantly, the effect of at-fault vehicle speed on injury severity of not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on injury outcome of not-at-fault driver. Compared to traditional ordered probability models, the study provides evidence that copula based bivariate models can provide more reliable estimates and richer insights. Practical implications of the results are discussed. Published by Elsevier Ltd.

  19. Predicting the probability of abnormal stimulated growth hormone response in children after radiotherapy for brain tumors.

    PubMed

    Hua, Chiaho; Wu, Shengjie; Chemaitilly, Wassim; Lukose, Renin C; Merchant, Thomas E

    2012-11-15

    To develop a mathematical model utilizing more readily available measures than stimulation tests that identifies brain tumor survivors with high likelihood of abnormal growth hormone secretion after radiotherapy (RT), to avoid late recognition and a consequent delay in growth hormone replacement therapy. We analyzed 191 prospectively collected post-RT evaluations of peak growth hormone level (arginine tolerance/levodopa stimulation test), serum insulin-like growth factor 1 (IGF-1), IGF-binding protein 3, height, weight, growth velocity, and body mass index in 106 children and adolescents treated for ependymoma (n=72), low-grade glioma (n=28) or craniopharyngioma (n=6), who had normal growth hormone levels before RT. A normal level in this study was defined as a peak growth hormone response to the stimulation test ≥7 ng/mL. Independent predictor variables identified by multivariate logistic regression with high statistical significance (p<0.0001) included IGF-1 z score, weight z score, and hypothalamic dose. The developed predictive model demonstrated a strong discriminatory power with an area under the receiver operating characteristic curve of 0.883. At a potential cutoff probability of 0.3, the sensitivity was 80% and the specificity 78%. Without unpleasant and expensive frequent stimulation tests, our model provides a quantitative approach to closely follow the growth hormone secretory capacity of brain tumor survivors. It allows identification of high-risk children for subsequent confirmatory tests and in-depth workup for diagnosis of growth hormone deficiency. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Multivariable normal tissue complication probability model-based treatment plan optimization for grade 2-4 dysphagia and tube feeding dependence in head and neck radiotherapy.

    PubMed

    Kierkels, Roel G J; Wopken, Kim; Visser, Ruurd; Korevaar, Erik W; van der Schaaf, Arjen; Bijl, Hendrik P; Langendijk, Johannes A

    2016-12-01

    Radiotherapy of the head and neck is challenged by the relatively large number of organs-at-risk close to the tumor. Biologically-oriented objective functions (OF) could optimally distribute the dose among the organs-at-risk. We aimed to explore OFs based on multivariable normal tissue complication probability (NTCP) models for grade 2-4 dysphagia (DYS) and tube feeding dependence (TFD). One hundred head and neck cancer patients were studied. In addition to the clinical plan, two more plans (an OF_DYS- and an OF_TFD-plan) were optimized per patient. The NTCP models included up to four dose-volume parameters and other non-dosimetric factors. A fully automatic plan optimization framework was used to optimize the OF_NTCP-based plans. All OF_NTCP-based plans were reviewed and classified as clinically acceptable. On average, the Δdose and ΔNTCP were small comparing the OF_DYS-plan, OF_TFD-plan, and clinical plan. For 5% of patients, NTCP_TFD was reduced by >5% using OF_TFD-based planning compared to the OF_DYS-plans. Plan optimization using NTCP_DYS- and NTCP_TFD-based objective functions resulted in clinically acceptable plans. For patients with considerable risk factors of TFD, the OF_TFD steered the optimizer to dose distributions which directly led to slightly lower predicted NTCP_TFD values as compared to the other studied plans. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    PubMed

    Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang

    2018-08-01

    One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: after a crash risk evaluation model outputs the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point on that posterior probability that separates potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold; only a few studies have chosen thresholds, using subjective methods, and then only when discussing the predictive performance of their models. Subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate the crash risk. Cross-entropy, between-class variance and other criteria were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method is well suited to automatically identifying thresholds in crash prediction, by minimizing the cross entropy between the original dataset, with its continuous crash probabilities, and the binarized dataset obtained after applying the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
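
    A minimal sketch of the minimum-cross-entropy idea: for each candidate threshold, the continuous crash probabilities are split at the threshold and a Li-style cross entropy between the values and their class means is computed, and the minimizing threshold is kept. The probabilities below are synthetic, and the mixed logit model that would produce them is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    # Synthetic posterior crash probabilities: mostly low-risk with a small high-risk tail
    probs = np.concatenate([rng.beta(2, 20, 5000), rng.beta(8, 4, 300)])

    def cross_entropy(p, t):
        """Cross entropy between the continuous probabilities and their binarized
        version (each side replaced by its class mean) for threshold t."""
        lo, hi = p[p < t], p[p >= t]
        if lo.size == 0 or hi.size == 0:
            return np.inf
        return np.sum(lo * np.log(lo / lo.mean())) + np.sum(hi * np.log(hi / hi.mean()))

    candidates = np.linspace(0.05, 0.95, 181)
    best = min(candidates, key=lambda t: cross_entropy(probs, t))
    print(f"optimal threshold by minimum cross entropy: {best:.3f}")
    ```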

  2. Probable autosomal recessive Marfan syndrome.

    PubMed Central

    Fried, K; Krakowsky, D

    1977-01-01

    A probable autosomal recessive mode of inheritance is described in a family with two affected sisters. The sisters showed the typical picture of Marfan syndrome and were of normal intelligence. Both parents and all four grandparents were personally examined and found to be normal. Homocystinuria was ruled out on repeated examinations. This family suggests genetic heterogeneity in Marfan syndrome and that in some rare families the mode of inheritance may be autosomal recessive. PMID:592353

  3. q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.; Tian, Li

    2013-10-01

    We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1

  4. The 2011 M = 9.0 Tohoku oki earthquake more than doubled the probability of large shocks beneath Tokyo

    USGS Publications Warehouse

    Toda, Shinji; Stein, Ross S.

    2013-01-01

    The Kanto seismic corridor surrounding Tokyo has hosted four to five M ≥ 7 earthquakes in the past 400 years. Immediately after the Tohoku earthquake, the seismicity rate in the corridor jumped 10-fold, while the rate of normal focal mechanisms dropped in half. The seismicity rate decayed for 6–12 months, after which it steadied at three times the pre-Tohoku rate. The seismicity rate jump and decay to a new rate, as well as the focal mechanism change, can be explained by the static stress imparted by the Tohoku rupture and postseismic creep to Kanto faults. We therefore fit the seismicity observations to a rate/state Coulomb model, which we use to forecast the time-dependent probability of large earthquakes in the Kanto seismic corridor. We estimate a 17% probability of a M ≥ 7.0 shock over the 5 year prospective period 11 March 2013 to 10 March 2018, two-and-a-half times the probability had the Tohoku earthquake not struck.
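
    As a back-of-the-envelope illustration only (the paper's forecast comes from a time-dependent rate/state model, not this calculation), a 17% probability of at least one shock in 5 years corresponds, under a homogeneous Poisson assumption, to an annual rate of about -ln(1 - 0.17)/5 per year:

    ```python
    import math

    p5, years = 0.17, 5.0
    rate = -math.log(1 - p5) / years        # homogeneous-Poisson annual rate
    print(f"annual rate ~ {rate:.3f}/yr, mean recurrence ~ {1/rate:.0f} yr")
    ```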

  5. A Probabilistic Framework for Peptide and Protein Quantification from Data-Dependent and Data-Independent LC-MS Proteomics Experiments

    PubMed Central

    Richardson, Keith; Denny, Richard; Hughes, Chris; Skilling, John; Sikora, Jacek; Dadlez, Michał; Manteca, Angel; Jung, Hye Ryung; Jensen, Ole Nørregaard; Redeker, Virginie; Melki, Ronald; Langridge, James I.; Vissers, Johannes P.C.

    2013-01-01

    A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data uncertainties via Poisson statistics modified by a noise contribution that is determined automatically during an initial normalization stage. Protein quantification relies on assignments of component peptides to the acquired data. These assignments are generally of variable reliability and may not be present across all of the experiments comprising an analysis. It is also possible for a peptide to be identified to more than one protein in a given mixture. For these reasons the algorithm accepts a prior probability of peptide assignment for each intensity measurement. The model is constructed in such a way that outliers of any type can be automatically reweighted. Two discrete normalization methods can be employed. The first method is based on a user-defined subset of peptides, while the second method relies on the presence of a dominant background of endogenous peptides for which the concentration is assumed to be unaffected. Normalization is performed using the same computational and statistical procedures employed by the main quantification algorithm. The performance of the algorithm will be illustrated on example data sets, and its utility demonstrated for typical proteomics applications. The quantification algorithm supports relative protein quantification based on precursor and product ion intensities acquired by means of data-dependent methods, originating from all common isotopically-labeled approaches, as well as label-free ion intensity-based data-independent methods. PMID:22871168

  6. Diagnosing a Strong-Fault Model by Conflict and Consistency

    PubMed Central

    Zhou, Gan; Feng, Wenquan

    2018-01-01

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Current diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS is then employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain—the heat control unit of a spacecraft—where the proposed methods are significantly better than best-first and conflict-directed A* search methods. PMID:29596302

  7. Modified cuspal relationships of mandibular molar teeth in children with Down's syndrome

    PubMed Central

    PERETZ, BENJAMIN; SHAPIRA, JOSEPH; FARBSTEIN, HANNA; ARIELI, ELIAHU; SMITH, PATRICIA

    1998-01-01

    A total of 50 permanent mandibular 1st molars of 26 children with Down's syndrome (DS) were examined from dental casts and 59 permanent mandibular 1st molars of normal children were examined from 33 individuals. The following measurements were performed on both right and left molars (teeth 46 and 36 respectively): (a) the intercusp distances (mb-db, mb-d, mb-dl, db-ml, db-d, db-dl, db-ml, d-dl, d-ml, dl-ml); (b) the db-mb-ml, mb-db-ml, mb-ml-db, d-mb-dl, mb-d-dl, mb-dl-d angles; (c) the area of the pentagon formed by connecting the cusp tips. All intercusp distances were significantly smaller in the DS group. Stepwise logistic regression, applied to all the intercusp distances, was used to design a multivariate probability model for DS and normals. A model based on 2 distances only, mb-dl and mb-db, proved sufficient to discriminate between the teeth of DS and the normal population. The model for tooth 36, for example, was a logistic function of these two distances (formula not reproduced here). A similar model for tooth 46 was also created, as well as a model which incorporated both teeth. With respect to the angles, significant differences between DS and normals were found in 3 out of the 6 angles which were measured: the d-mb-dl angle was smaller than in normals, the mb-d-dl angle was higher, and the mb-dl-d angle was smaller. The dl cusp was located closer to the centre of the tooth. The change in size occurs at an early stage, while the change in shape occurs in a later stage of tooth formation in the DS population. PMID:10029186

  8. Assessment and visualization of uncertainty for countrywide soil organic matter map of Hungary using local entropy

    NASA Astrophysics Data System (ADS)

    Szatmári, Gábor; Pásztor, László

    2016-04-01

    Uncertainty is a general term expressing our imperfect knowledge in describing an environmental process, of which we are aware (Bárdossy and Fodor, 2004). Sampling, laboratory measurements, models and so on are all subject to uncertainty. Effective quantification and visualization of uncertainty are indispensable to stakeholders (e.g. policy makers, society). Soil-related features and their spatial models should be a particular target of uncertainty assessment because the resulting inferences are further used in modelling and decision making. The aim of our present study was to assess and effectively visualize the local uncertainty of the countrywide soil organic matter (SOM) spatial distribution model of Hungary using geostatistical tools and concepts. The Hungarian Soil Information and Monitoring System's SOM data (approximately 1,200 observations) and environment-related, spatially exhaustive secondary information (i.e. digital elevation model, climatic maps, MODIS satellite images and geological map) were used to model the countrywide SOM spatial distribution by regression kriging. It would be common to use the calculated estimation (or kriging) variance as a measure of uncertainty; however, the normality and homoscedasticity hypotheses had to be rejected according to our preliminary analysis of the data. Therefore, a normal score transformation and a sequential stochastic simulation approach were introduced to model and assess the local uncertainty. Five hundred equally probable realizations (i.e. stochastic images) were generated. This number of stochastic images is sufficient to provide a model of uncertainty at each location, which is a complete description of uncertainty in geostatistics (Deutsch and Journel, 1998). Furthermore, these models can be applied, for example, to contour the probability of any event; such maps can be regarded as goal-oriented digital soil maps and are of interest for agricultural management and decision making as well. A standardized measure of the local entropy was used to visualize uncertainty, where entropy values close to 1 correspond to high uncertainty, whilst values close to 0 correspond to low uncertainty. The advantage of using local entropy in this context is that it combines probabilities from multiple members into a single number for each location of the model. In conclusion, it is straightforward to use a sequential stochastic simulation approach for the assessment of uncertainty when normality and homoscedasticity are violated. The visualization of uncertainty using the local entropy is effective and communicative to stakeholders because it represents the uncertainty through a single number within a [0, 1] scale. References: Bárdossy, Gy. & Fodor, J., 2004. Evaluation of Uncertainties and Risks in Geology. Springer-Verlag, Berlin Heidelberg. Deutsch, C.V. & Journel, A.G., 1998. GSLIB: geostatistical software library and user's guide. Oxford University Press, New York. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
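
    A condensed sketch of the entropy-based visualization: from a stack of equally probable realizations, per-location class probabilities are estimated and converted to a standardized (0 to 1) local entropy. The simulation itself (normal-score transform plus sequential Gaussian simulation conditioned on the covariates named above) is replaced here by random class labels, and the grid size and number of classes are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n_real, ny, nx = 500, 50, 50
    # Stand-in for 500 simulated SOM maps, discretized into 3 classes (low/medium/high)
    realizations = rng.integers(0, 3, size=(n_real, ny, nx))
    n_classes = 3

    # Per-location class probabilities across the realizations
    probs = np.stack([(realizations == k).mean(axis=0) for k in range(n_classes)])

    # Standardized local (Shannon) entropy: 0 = no uncertainty, 1 = maximal uncertainty
    plogp = np.where(probs > 0, probs * np.log(probs, where=probs > 0), 0.0)
    local_entropy = -plogp.sum(axis=0) / np.log(n_classes)
    print(local_entropy.min(), local_entropy.max())
    ```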

  9. The stochastic distribution of available coefficient of friction for human locomotion of five different floor surfaces.

    PubMed

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2014-05-01

    The maximum coefficient of friction that can be supported at the shoe and floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip could be estimated using a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions had a slightly better match with the normal and log-normal distributions than with the Weibull in only three out of 15 cases with a statistical significance. The results are far more complex than what had heretofore been published and different scenarios could emerge. Since the ACOF is compared with the RCOF for the estimate of slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons based on their skewness and kurtosis values without a statistical significance. No representation could be found in three cases out of 15. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
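
    A sketch of the goodness-of-fit comparison on simulated friction measurements: normal, log-normal, and Weibull distributions are fitted and each is checked with the Kolmogorov-Smirnov test. The simulated values are placeholders; the actual ACOF measurements from the five floor surfaces would be needed to reproduce the study's conclusions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    acof = rng.normal(0.45, 0.06, 100).clip(0.05)   # 100 simulated friction measurements

    fits = {
        "norm": stats.norm.fit(acof),
        "lognorm": stats.lognorm.fit(acof, floc=0),
        "weibull_min": stats.weibull_min.fit(acof, floc=0),
    }
    for name, params in fits.items():
        ks = stats.kstest(acof, name, args=params)
        print(f"{name:12s} KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    ```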

  10. Stress accumulation in the Marmara Sea estimated through ground-motion simulations from dynamic rupture scenarios

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Douglas, John; Ulrich, Thomas

    2017-03-01

    We compare ground motions simulated from dynamic rupture scenarios, for the seismic gap along the North Anatolian Fault under the Marmara Sea (Turkey), to estimates from empirical ground motion prediction equations (GMPEs). Ground motions are simulated using a finite difference method and a 3-D model of the local crustal structure. They are analyzed at more than a thousand locations in terms of horizontal peak ground velocity. Characteristics of probable earthquake scenarios are strongly dependent on the hypothesized level of accumulated stress, expressed in terms of a normalized stress parameter T. With respect to the GMPEs, it is found that simulations for many scenarios systematically overestimate the ground motions at all distances. Simulations for only some scenarios, corresponding to moderate stress accumulation, match the estimates from the GMPEs. The difference between the simulations and the GMPEs is used to quantify the relative probabilities of each scenario and, therefore, to revise the probability of the stress field. A magnitude Mw 7+ rupture at a moderate prestress level (0.6 < T ≤ 0.7) is statistically more probable, as previously assumed in the logic tree of the probabilistic assessment of rupture scenarios. This approach of revising the mechanical hypothesis by means of comparison to an empirical statistical model (e.g., a GMPE) is useful not only for practical seismic hazard assessments but also for understanding crustal dynamics.

  11. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial model provided a better description of the probability distribution of counts for seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common phenomena in insect count data. If not properly modelled, these properties can invalidate the normal distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
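    A minimal sketch of the model-selection step, assuming intercept-only count models and synthetic zero-heavy data; the statsmodels classes used here stand in for the authors' fitting procedure, and the zero-inflated negative binomial variant is omitted for brevity.

```python
import numpy as np
from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Synthetic zero-heavy insect counts (assumed data, ~50% structural zeros).
rng = np.random.default_rng(2)
counts = np.where(rng.random(200) < 0.5, 0, rng.poisson(4.0, 200))
X = np.ones((len(counts), 1))          # intercept-only model for illustration

models = {
    "Poisson":               Poisson(counts, X),
    "Negative binomial":     NegativeBinomial(counts, X),
    "Zero-inflated Poisson": ZeroInflatedPoisson(counts, X),
}

for name, model in models.items():
    res = model.fit(disp=False)
    print(f"{name:22s}  AIC = {res.aic:8.1f}   BIC = {res.bic:8.1f}")
```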

  12. Single-cell-based computer simulation of the oxygen-dependent tumour response to irradiation

    NASA Astrophysics Data System (ADS)

    Harting, Christine; Peschke, Peter; Borkenstein, Klaus; Karger, Christian P.

    2007-08-01

    Optimization of treatment plans in radiotherapy requires the knowledge of tumour control probability (TCP) and normal tissue complication probability (NTCP). Mathematical models may help to obtain quantitative estimates of TCP and NTCP. A single-cell-based computer simulation model is presented, which simulates tumour growth and radiation response on the basis of the response of the constituting cells. The model contains oxic, hypoxic and necrotic tumour cells as well as capillary cells which are considered as sources of a radial oxygen profile. Survival of tumour cells is calculated by the linear quadratic model including the modified response due to the local oxygen concentration. The model additionally includes cell proliferation, hypoxia-induced angiogenesis, apoptosis and resorption of inactivated tumour cells. By selecting different degrees of angiogenesis, the model allows the simulation of oxic as well as hypoxic tumours having distinctly different oxygen distributions. The simulation model showed that poorly oxygenated tumours exhibit an increased radiation tolerance. Inter-tumoural variation of radiosensitivity flattens the dose response curve. This effect is enhanced by proliferation between fractions. Intra-tumoural radiosensitivity variation does not play a significant role. The model may contribute to the mechanistic understanding of the influence of biological tumour parameters on TCP. It can in principle be validated in radiation experiments with experimental tumours.
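    The oxygen-modified cell survival at the core of such simulations can be sketched with the linear-quadratic model and an Alper-Howard-Flanders-type oxygen enhancement ratio; all parameter values below (alpha, beta, K, maximum OER) are assumed for illustration and are not taken from the study.

```python
import numpy as np

def lq_survival(dose, alpha, beta, pO2, K=3.0, oer_max=3.0):
    """Surviving fraction from the linear-quadratic model with an
    oxygen-dependent dose-modifying factor (Alper-Howard-Flanders form).
    alpha [1/Gy] and beta [1/Gy^2] refer to fully oxic cells; K [mmHg] and
    oer_max are assumed oxygenation parameters."""
    oer = (oer_max * pO2 + K) / (pO2 + K)     # 1 (anoxic) ... oer_max (well oxygenated)
    d_eff = dose * oer / oer_max              # effective dose relative to oxic cells
    return np.exp(-(alpha * d_eff + beta * d_eff ** 2))

# example: a 2 Gy fraction delivered to oxic, hypoxic and nearly anoxic cells
for p in (60.0, 2.5, 0.5):
    sf = lq_survival(dose=2.0, alpha=0.3, beta=0.03, pO2=p)
    print(f"pO2 = {p:5.1f} mmHg -> surviving fraction = {sf:.3f}")
```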

  13. Boundary Layer Effect on Behavior of Discrete Models.

    PubMed

    Eliáš, Jan

    2017-02-10

    The paper studies systems of rigid bodies with randomly generated geometry interconnected by normal and tangential bonds. The stiffness of these bonds determines the macroscopic elastic modulus, while the macroscopic Poisson's ratio of the system is determined solely by the normal/tangential stiffness ratio. Discrete models with no directional bias have the same probability of element orientation for any direction and therefore the same mechanical properties, in a statistical sense, at any point and in any direction. However, the layers of elements in the vicinity of the boundary exhibit biased orientation, preferring elements parallel with the boundary. As a consequence, when strain occurs in this direction, the boundary layer becomes stiffer than the interior for normal/tangential stiffness ratios larger than one, and vice versa. Nonlinear constitutive laws are typically such that the straining of an element in shear results in higher strength and ductility than straining in tension. Since the boundary layer tends, due to the bias in the elemental orientation, to involve more tension than shear at the contacts, it also becomes weaker and less ductile. The paper documents these observations and compares them to the results of theoretical analysis.

  14. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA.

    PubMed

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-06-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is selected using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.

  15. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial and may not even have an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among other applications.

  16. Statistical analysis of experimental data for mathematical modeling of physical processes in the atmosphere

    NASA Astrophysics Data System (ADS)

    Karpushin, P. A.; Popov, Yu B.; Popova, A. I.; Popova, K. Yu; Krasnenko, N. P.; Lavrinenko, A. V.

    2017-11-01

    In this paper, the probabilities of faultless operation of aerologic stations are analyzed, the hypothesis of normality of the empirical data required for using the Kalman filter algorithms is tested, and the spatial correlation functions of distributions of meteorological parameters are determined. The results of a statistical analysis of twice-daily (0000 and 1200 GMT) radiosonde observations of the temperature and wind velocity components at preset altitude ranges in the troposphere during 2001-2016 are presented. These data can be used in mathematical modeling of physical processes in the atmosphere.

  17. A Prior for Neural Networks utilizing Enclosing Spheres for Normalization

    NASA Astrophysics Data System (ADS)

    v. Toussaint, U.; Gori, S.; Dose, V.

    2004-11-01

    Neural networks are famous for their advantageous flexibility for problems where there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause over-fitting and can hamper the generalization properties of neural networks. Many approaches to regularizing neural networks have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior in accordance with Bayesian probability theory for a class of feedforward networks. Optimal networks are determined by Bayesian model comparison, verifying the applicability of this approach.

  18. Determination of a Testing Threshold for Lumbar Puncture in the Diagnosis of Subarachnoid Hemorrhage after a Negative Head Computed Tomography: A Decision Analysis.

    PubMed

    Taylor, Richard Andrew; Singh Gill, Harman; Marcolini, Evie G; Meyers, H Pendell; Faust, Jeremy Samuel; Newman, David H

    2016-10-01

    The objective was to determine the testing threshold for lumbar puncture (LP) in the evaluation of aneurysmal subarachnoid hemorrhage (SAH) after a negative head computed tomography (CT). As a secondary aim we sought to identify clinical variables that have the greatest impact on this threshold. A decision analytic model was developed to estimate the testing threshold for patients with normal neurologic findings being evaluated for SAH after a negative CT of the head. The testing threshold was calculated as the pretest probability of disease at which the two strategies (LP or no LP) are balanced in terms of quality-adjusted life-years. Two-way and probabilistic sensitivity analyses (PSAs) were performed. For the base-case scenario the testing threshold for performing an LP after negative head CT was 4.3%. Results for the two-way sensitivity analyses demonstrated that the testing threshold ranged from 1.9% to 15.6%, dominated by the uncertainty in the probability of death from initial missed SAH. In the PSA the mean testing threshold was 4.3% (95% confidence interval = 1.4% to 9.3%). Other significant variables in the model included the probability of aneurysmal versus nonaneurysmal SAH after negative head CT, the probability of long-term morbidity from initial missed SAH, and the probability of renal failure from contrast-induced nephropathy. Our decision analysis results suggest a testing threshold for LP after negative CT of approximately 4.3%, with a range of 1.4% to 9.3% on robust PSA. In light of these data, and considering the low probability of aneurysmal SAH after a negative CT, classical teaching and current guidelines addressing testing for SAH should be revisited. © 2016 by the Society for Academic Emergency Medicine.
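    The idea of a testing threshold, the pretest probability at which the two strategies yield equal expected quality-adjusted life-years, can be sketched as below; the QALY values, LP sensitivity and harm term are illustrative assumptions and deliberately much simpler than the study's decision tree.

```python
from scipy.optimize import brentq

# Illustrative inputs (not the study's): expected quality-adjusted life-years
# for each collapsed branch of the decision tree.
Q_WELL        = 20.00   # no SAH, no further harm
Q_MISSED_SAH  =  9.00   # initially missed aneurysmal SAH
Q_TREATED_SAH = 18.50   # SAH detected by LP and treated
LP_HARM       = -0.30   # expected QALY loss from LP plus downstream testing (assumed)
SENS_LP       =  0.95   # sensitivity of LP for SAH after a negative CT (assumed)

def expected_qalys(p_sah, do_lp):
    """Expected QALYs as a function of the pretest probability of SAH."""
    if do_lp:
        return (p_sah * SENS_LP * Q_TREATED_SAH
                + p_sah * (1 - SENS_LP) * Q_MISSED_SAH
                + (1 - p_sah) * Q_WELL
                + LP_HARM)
    return p_sah * Q_MISSED_SAH + (1 - p_sah) * Q_WELL

# testing threshold: pretest probability at which both strategies are equivalent
threshold = brentq(lambda p: expected_qalys(p, True) - expected_qalys(p, False),
                   1e-6, 0.5)
print(f"testing threshold ~ {100 * threshold:.1f}%")
```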

  19. Study on probability distribution of prices in electricity market: A case study of Zhejiang Province, China

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    Studying the probability density function and distribution function of electricity prices helps power suppliers and purchasers assess their operations more accurately, and helps the regulator monitor periods that deviate from the normal distribution. Based on the assumption of normally distributed load and the non-linear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with the electricity price data of the Zhejiang market. The results show that electricity prices are approximately normally distributed only when the supply-demand relationship is loose, whereas otherwise the prices deviate from the normal distribution and exhibit a strong right-skewness. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
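    The qualitative mechanism, a normally distributed load mapped through a convex supply curve into a right-skewed price distribution, can be sketched with a short Monte Carlo experiment; the load statistics and the exponential bid stack below are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Monte Carlo sketch: a normally distributed load pushed through a convex
# (non-linear) aggregate supply curve yields a right-skewed price distribution.
rng = np.random.default_rng(3)
load = rng.normal(loc=30.0, scale=4.0, size=200_000)   # system load in GW (assumed)

def supply_curve(q):
    """Illustrative convex bid stack: marginal price rises faster at high load."""
    return 20.0 * np.exp(0.08 * q)                     # price in $/MWh (assumed)

price = supply_curve(load)
print("skewness of load :", round(stats.skew(load), 3))    # ~0, as for a normal variable
print("skewness of price:", round(stats.skew(price), 3))   # clearly positive (right skew)
```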

  20. Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge.

    PubMed

    Lau, Jey Han; Clark, Alexander; Lappin, Shalom

    2017-07-01

    The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property, has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgments are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic. Copyright © 2016 Cognitive Science Society, Inc.
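    One common example of such a normalization is a SLOR-style measure, which divides the difference between the model log-probability and a unigram (lexical-frequency) log-probability by sentence length; the sketch below uses invented numbers and is not claimed to be the exact measure used by the authors.

```python
def slor(logprob_sentence, unigram_logprobs, n_tokens):
    """SLOR-style acceptability measure: the sentence log-probability minus the
    summed unigram (lexical frequency) log-probabilities, divided by length."""
    return (logprob_sentence - sum(unigram_logprobs)) / n_tokens

# toy numbers (assumed): a short, frequent-word sentence vs. a longer, rarer one
print(slor(-12.0, [-3.0, -4.0, -2.5], 3))                 # higher -> more acceptable
print(slor(-35.0, [-6.0, -7.5, -5.0, -8.0, -6.5], 5))
```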

  1. Load-Based Lower Neck Injury Criteria for Females from Rear Impact from Cadaver Experiments.

    PubMed

    Yoganandan, Narayan; Pintar, Frank A; Banerjee, Anjishnu

    2017-05-01

    The objectives of this study were to derive lower neck injury metrics/criteria and injury risk curves for the force, moment, and interaction criterion in rear impacts for females. Biomechanical data were obtained from previous intact and isolated post mortem human subjects and head-neck complexes subjected to posteroanterior accelerative loading. Censored data were used in the survival analysis model. The primary shear force, sagittal bending moment, and interaction (lower neck injury criterion, LNic) metrics were significant predictors of injury. The optimal distribution (Weibull, log-normal, or log-logistic) was selected using the Akaike information criterion, according to the latest ISO recommendations for deriving risk curves. The Kolmogorov-Smirnov test was used to quantify the robustness of the assumed parametric model. The intercepts for the interaction index were extracted from the primary risk curves. Normalized confidence interval sizes (NCIS) were reported at discrete probability levels, along with the risk curves and 95% confidence intervals. A mean force of 214 N, a moment of 54 Nm, and an LNic of 0.89 were associated with a five percent probability of injury. The NCIS for these metrics were 0.90, 0.95, and 0.85. These preliminary results can be used as a first step in the definition of lower neck injury criteria for women under posteroanterior accelerative loading in crashworthiness evaluations.

  2. Differential diagnosis of normal pressure hydrocephalus by MRI mean diffusivity histogram analysis.

    PubMed

    Ivkovic, M; Liu, B; Ahmed, F; Moore, D; Huang, C; Raj, A; Kovanlikaya, I; Heier, L; Relkin, N

    2013-01-01

    Accurate diagnosis of normal pressure hydrocephalus is challenging because the clinical symptoms and radiographic appearance of NPH often overlap those of other conditions, including age-related neurodegenerative disorders such as Alzheimer and Parkinson diseases. We hypothesized that radiologic differences between NPH and AD/PD can be characterized by a robust and objective MR imaging DTI technique that does not require intersubject image registration or operator-defined regions of interest, thus avoiding many pitfalls common in DTI methods. We collected 3T DTI data from 15 patients with probable NPH and 25 controls with AD, PD, or dementia with Lewy bodies. We developed a parametric model for the shape of intracranial mean diffusivity histograms that separates brain and ventricular components from a third component composed mostly of partial volume voxels. To accurately fit the shape of the third component, we constructed a parametric function named the generalized Voss-Dyke function. We then examined the use of the fitting parameters for the differential diagnosis of NPH from AD, PD, and DLB. Using parameters for the MD histogram shape, we distinguished clinically probable NPH from the 3 other disorders with 86% sensitivity and 96% specificity. The technique yielded 86% sensitivity and 88% specificity when differentiating NPH from AD only. An adequate parametric model for the shape of intracranial MD histograms can distinguish NPH from AD, PD, or DLB with high sensitivity and specificity.

  3. Study of sea-surface slope distribution and its effect on radar backscatter based on Global Precipitation Measurement Ku-band precipitation radar measurements

    NASA Astrophysics Data System (ADS)

    Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin

    2018-01-01

    The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from the moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of the wind speed. Then, the performance of the modified Liu distribution (i.e., the Liu distribution using the obtained peakedness coefficient estimate), the Gaussian distribution, and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution show larger differences when the total or slick-filtered, rather than the radar-filtered, probability density is included in the distribution. The best-performing distribution changes with incidence angle and with wind speed.

  4. Avoidance of voiding cystourethrography in infants younger than 3 months with Escherichia coli urinary tract infection and normal renal ultrasound.

    PubMed

    Pauchard, Jean-Yves; Chehade, Hassib; Kies, Chafika Zohra; Girardin, Eric; Cachat, Francois; Gehri, Mario

    2017-09-01

    Urinary tract infection (UTI) represents the most common bacterial infection in infants, and its prevalence increases with the presence of high-grade vesicoureteral reflux (VUR). However, voiding cystourethrography (VCUG) is invasive, and its indication in infants <3 months is not yet defined. This study aims to investigate, in infants aged 0-3 months, whether the presence of Escherichia coli versus non-E. coli bacteria and/or a normal or abnormal renal ultrasound (US) could justify avoiding VCUG. One hundred and twenty-two infants with a first febrile UTI were enrolled. High-grade VUR was defined by the presence of VUR grade ≥III. The presence of high-grade VUR was recorded using VCUG, and correlated with the presence of E. coli/non-E. coli UTI and with the presence of normal/abnormal renal US. Bayes' theorem was used to calculate pretest and post-test probabilities. The probability of high-grade VUR was 3% in the presence of urinary E. coli infection. Adding a normal renal US finding decreased this probability to 1%. However, in the presence of non-E. coli bacteria, the probability of high-grade VUR was 26%, and adding an abnormal US finding further increased this probability to 55%. In infants aged 0-3 months with a first febrile UTI, the presence of E. coli and normal renal US findings allows VCUG to be safely avoided. Performing VCUG only in infants with UTI secondary to non-E. coli bacteria and/or abnormal US would avoid many unnecessary invasive procedures and limit radiation exposure, with a very low risk (<1%) of missing a high-grade VUR. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
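    The pretest/post-test update used in such an analysis can be sketched with Bayes' theorem in odds form; the ultrasound sensitivity and specificity below are assumed values for illustration, not figures reported by the study.

```python
def posttest_probability(pretest, sensitivity, specificity, result_positive):
    """Update the probability of high-grade VUR using Bayes' theorem (odds form)."""
    if result_positive:                       # e.g. abnormal renal ultrasound
        lr = sensitivity / (1 - specificity)
    else:                                     # e.g. normal renal ultrasound
        lr = (1 - sensitivity) / specificity
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# illustrative numbers: 3% pretest probability with an E. coli UTI, then a
# normal ultrasound (assumed sensitivity/specificity) lowers it further
p = posttest_probability(0.03, sensitivity=0.70, specificity=0.75,
                         result_positive=False)
print(f"post-test probability ~ {100 * p:.1f}%")
```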

  5. The evolution of tectonic features on Ganymede

    NASA Technical Reports Server (NTRS)

    Squyres, S. W.

    1982-01-01

    The bands of bright resurfaced terrain on Ganymede are probably broad grabens formed by global expansion and filled with deposits of ice. Grooves within the bands are thought to be extensional features formed during the same episode of expansion. The crust of Ganymede is modeled as a viscoelastic material subjected to extensional strain. With sufficiently high strain rates and stresses, deep normal faulting will occur, creating broad grabens that may then be filled. Continuing deformation at high strain rates and stresses will cause propagation of deep faults up into the flood deposits and normal faulting at the surface, while lower strain rates and stresses will cause formation of open extension fractures or, if the crustal strength is very low, grabens at the surface. The spacing between adjacent fractures may reflect the geothermal gradient at the time of deformation. Surface topography resulting from fracturing and normal faulting will decay with time as a result of viscous relaxation and mass-wasting.

  6. Dose and detectability for a cone-beam C-arm CT system revisited

    PubMed Central

    Ganguly, Arundhuti; Yoon, Sungwon; Fahrig, Rebecca

    2010-01-01

    Purpose: The authors had previously published measurements of the detectability of disk-shaped contrast objects in images obtained from a C-arm CT system. A simple approach based on Rose's criterion was used to scale the data, assuming that the threshold for the smallest diameter detected should be inversely proportional to (dose)^(1/2). A more detailed analysis based on recent theoretical modeling of C-arm CT images is presented in this work. Methods: The signal and noise propagations in a C-arm based CT system have been formulated by other authors using cascaded systems analysis. They established a relationship between detectability and the noise equivalent quanta. Based on this model, the authors obtained a relation between x-ray dose and the diameter of the smallest disks detected. A closed form solution was established by assuming no rebinning and no resampling of data, with low additive noise and using a ramp filter. For the case when no such assumptions were made, a numerically calculated solution using previously reported imaging and reconstruction parameters was obtained. The detection probabilities for a range of dose and kVp values had been measured previously. These probabilities were normalized to a single dose of 56.6 mGy using the Rose-criterion-based relation to obtain a universal curve. Normalizations based on the new numerically calculated relationship were compared to the measured results. Results: The theoretical and numerical calculations have similar results and predict the detected diameter size to be inversely proportional to (dose)^(1/3) and (dose)^(1/2.8), respectively. The normalized experimental curves and the associated universal plot using the new relation were not significantly different from those obtained using the Rose-criterion-based normalization. Conclusions: From numerical simulations, the authors found that the diameter of detected disks depends inversely on the cube root of the dose. For observer studies for disks larger than 4 mm, the cube root as well as square root relations appear to give similar results when used for normalization. PMID:20527560

  7. The significance of the choice of radiobiological (NTCP) models in treatment plan objective functions.

    PubMed

    Miller, J; Fuller, M; Vinod, S; Suchowerska, N; Holloway, L

    2009-06-01

    A clinician's discrimination between radiation therapy treatment plans is traditionally a subjective process, based on experience and existing protocols. A more objective and quantitative approach to distinguish between treatment plans is to use radiobiological or dosimetric objective functions, based on radiobiological or dosimetric models. The efficacy of these models is not well understood, nor is the correlation between the plan rankings obtained from the models and those from the traditional subjective approach. One such radiobiological model is the Normal Tissue Complication Probability (NTCP). Dosimetric models, or indicators, are more widely accepted in clinical practice. In this study, three radiobiological models, the Lyman NTCP, critical volume NTCP and relative seriality NTCP, and three dosimetric models, the mean lung dose (MLD) and the lung volumes irradiated at 10 Gy (V10) and 20 Gy (V20), were used to rank a series of treatment plans using harm to normal (lung) tissue as the objective criterion. None of the models considered in this study showed consistent correlation with the radiation oncologists' plan ranking. If radiobiological or dosimetric models are to be used in objective functions for lung treatments, it is recommended, based on this study, that the Lyman NTCP model be used because it provides the most consistency with traditional clinician ranking.
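    For reference, a minimal sketch of the Lyman-Kutcher-Burman NTCP calculation from a differential DVH is given below; the DVH and the lung parameter values (TD50, m, n) are stand-ins in the style of published fits, not values from this study.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    doses: bin doses (Gy); volumes: organ volume per bin; td50, m, n: LKB parameters."""
    volumes = np.asarray(volumes, dtype=float)
    volumes = volumes / volumes.sum()
    geud = np.sum(volumes * np.asarray(doses) ** (1.0 / n)) ** n   # generalized EUD
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)

# illustrative lung DVH and published-style parameter values (assumed here)
doses = np.arange(2.5, 62.5, 5.0)
volumes = np.exp(-doses / 15.0)
print(f"NTCP ~ {100 * lkb_ntcp(doses, volumes, td50=24.5, m=0.18, n=0.87):.1f}%")
```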

  8. An assessment on the use of bivariate, multivariate and soft computing techniques for collapse susceptibility in GIS environ

    NASA Astrophysics Data System (ADS)

    Yilmaz, Işik; Marschalko, Marian; Bednarik, Martin

    2013-04-01

    The paper presented herein compares and discusses the use of bivariate, multivariate and soft computing techniques for collapse susceptibility modelling. Conditional probability (CP), logistic regression (LR) and artificial neural network (ANN) models, representing the bivariate, multivariate and soft computing techniques respectively, were used in GIS-based collapse susceptibility mapping in an area of the Sivas basin (Turkey). Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the models, and they were then compared by means of their validation results. Although the Area Under Curve (AUC) values obtained from all three models showed that the map obtained from the soft computing (ANN) model was somewhat more accurate than the other models, the accuracies of all three models can be considered relatively similar. The results also showed that conditional probability is an essential method in the preparation of collapse susceptibility maps and is highly compatible with GIS operating features.

  9. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the largest concentration levels, which correspond to low stretching factors.

  10. A Skill Score of Trajectory Model Evaluation Using Reinitialized Series of Normalized Cumulative Lagrangian Separation

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Weisberg, R. H.

    2017-12-01

    The Lagrangian separation distance between the endpoints of simulated and observed drifter trajectories is often used to assess the performance of numerical particle trajectory models. However, the separation distance fails to indicate relative model performance in weak and strong current regions, such as a continental shelf and its adjacent deep ocean. A skill score is proposed based on the cumulative Lagrangian separation distances normalized by the associated cumulative trajectory lengths. The new metric correctly indicates the relative performance of the Global HYCOM in simulating the strong currents of the Gulf of Mexico Loop Current and the weaker currents of the West Florida Shelf in the eastern Gulf of Mexico. In contrast, the Lagrangian separation distance alone gives a misleading result. Also, the observed drifter position series can be used to reinitialize the trajectory model and evaluate its performance along the observed trajectory, not just at the drifter end position. The proposed dimensionless skill score is particularly useful when the number of drifter trajectories is limited and neither a conventional Eulerian-based velocity nor a Lagrangian-based probability density function can be estimated.
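    A sketch of one plausible implementation of the proposed skill score is given below; it follows the verbal definition in the abstract (cumulative separation normalized by cumulative observed trajectory length, with a tolerance threshold n), and details such as the exact normalization may differ from the authors' formula.

```python
import numpy as np

def skill_score(model_pos, obs_pos, n_tolerance=1.0):
    """Skill score from cumulative separation distances normalized by the
    cumulative length of the observed trajectory (one reading of the metric
    described in the abstract)."""
    model_pos = np.asarray(model_pos, dtype=float)
    obs_pos = np.asarray(obs_pos, dtype=float)
    sep = np.linalg.norm(model_pos - obs_pos, axis=1)[1:]       # d_i, i = 1..N
    seg = np.linalg.norm(np.diff(obs_pos, axis=0), axis=1)      # observed segment lengths
    cum_len = np.cumsum(seg)                                    # l_i, length up to step i
    c = np.sum(sep) / np.sum(cum_len)                           # normalized cumulative separation
    return max(0.0, 1.0 - c / n_tolerance)                      # 1 = perfect, 0 = no skill

# toy example: a simulated trajectory drifting sideways from the observed one
obs = np.column_stack([np.arange(6.0), np.zeros(6)])            # 1 km steps along x
sim = obs + np.column_stack([np.zeros(6), 0.2 * np.arange(6.0)])
print(f"skill score = {skill_score(sim, obs):.2f}")             # ~0.8 for this drift
```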

  11. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  12. Investigation of the relation between the return periods of major drought characteristics using copula functions

    NASA Astrophysics Data System (ADS)

    Hüsami Afşar, Mehdi; Unal Şorman, Ali; Tugrul Yilmaz, Mustafa

    2016-04-01

    Different drought characteristics (e.g. duration, average severity, and average areal extent) often have a monotonic relation, in which an increase in the magnitude of one is typically accompanied by a similar increase in the magnitude of another. Hence it is viable to establish a relationship between different drought characteristics with the goal of predicting one from the others. Copula functions, which relate different variables through their joint and conditional cumulative probability distributions, are often used to statistically model drought characteristics. In this study, bivariate and trivariate joint probabilities of these characteristics are obtained for Ankara (Turkey) between 1960 and 2013. Copula-based return period estimation for drought duration, average severity, and average areal extent shows that joint probabilities of these characteristics can be estimated satisfactorily. Among the different copula families investigated in this study, the elliptical family (i.e. the normal and Student's t copula functions) resulted in the lowest root mean square error. This study was supported by TUBITAK fund #114Y676.
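    A minimal sketch of a copula-based joint exceedance (and hence return period) calculation is shown below, using a Gaussian copula with assumed marginals and correlation; none of the numbers are taken from the Ankara dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
duration = stats.gamma(a=2.0, scale=2.0)      # drought duration in months (assumed)
severity = stats.lognorm(s=0.6, scale=1.5)    # average severity (assumed marginal)
rho = 0.7                                     # Gaussian-copula correlation (assumed)

def joint_exceedance(d, s, n_mc=200_000):
    """P(duration > d and severity > s) under a Gaussian copula, via Monte Carlo."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_mc)
    u = stats.norm.cdf(z)                     # dependent uniforms on [0, 1]^2
    return np.mean((duration.ppf(u[:, 0]) > d) & (severity.ppf(u[:, 1]) > s))

p = joint_exceedance(8.0, 3.0)
print(f"joint exceedance probability = {p:.4f}, joint return period ~ {1.0 / p:.0f} events")
```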

  13. Primary Epstein-Barr virus infection and probable parvovirus B19 reactivation resulting in fulminant hepatitis and fulfilling five of eight criteria for hemophagocytic lymphohistiocytosis.

    PubMed

    Karrasch, Matthias; Felber, Jörg; Keller, Peter M; Kletta, Christine; Egerer, Renate; Bohnert, Jürgen; Hermann, Beate; Pfister, Wolfgang; Theis, Bernhard; Petersen, Iver; Stallmach, Andreas; Baier, Michael

    2014-11-01

    A case of primary Epstein-Barr virus (EBV) infection/parvovirus B19 reactivation fulfilling five of eight criteria for hemophagocytic lymphohistiocytosis (HLH) is presented. Despite two coinciding viral infections, massive splenomegaly, and fulminant hepatitis, the patient had a good clinical outcome, probably due to an early onset form of HLH with normal leukocyte count, normal natural killer (NK) cell function, and a lack of hemophagocytosis.

  14. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity and to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy a relatively larger rupture area than shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying a Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctly non-Gaussian slip distributions. To further characterize the spatial correlations of slip heterogeneity, we analyze the power spectral decay of slip by applying the 2-D von Karman auto-correlation function (parameterized by the Hurst exponent, H, and correlation lengths along strike and down-slip). The Hurst exponent is scale invariant, H = 0.83 (± 0.12), while the correlation lengths scale with source dimensions (seismic moment), thus implying characteristic physical scales of earthquake ruptures. Our self-consistent scaling relationships allow us to constrain the generation of slip-heterogeneity scenarios for physics-based ground-motion and tsunami simulations.
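    The slip-amplitude model can be sketched with scipy's truncated exponential distribution; the maximum slip, scale parameter and grid size below are assumptions for illustration, and the spatial (von Karman) correlation structure is deliberately omitted.

```python
import numpy as np
from scipy import stats

s_max = 8.0    # maximum slip in metres (assumed)
scale = 2.0    # exponential scale in metres; in practice tied to the average slip

# truncated exponential slip model with support on [0, s_max]
slip = stats.truncexpon(b=s_max / scale, scale=scale)
print("mean slip   :", round(slip.mean(), 2), "m")
print("P(slip > 5m):", round(slip.sf(5.0), 3))

# draw a slip realization on a 2-D fault grid (spatial correlation omitted here)
rng = np.random.default_rng(5)
patch_slip = slip.rvs(size=(20, 40), random_state=rng)
print("realization mean:", round(patch_slip.mean(), 2), "m")
```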

  15. Why the chameleon has spiral-shaped muscle fibres in its tongue

    PubMed Central

    Leeuwen, J. L. van

    1997-01-01

    The intralingual accelerator muscle is the primary actuator for the remarkable ballistic tongue projection of the chameleon. At rest, this muscle envelops the elongated entoglossal process, a cylindrically shaped bone with a tapering distal end. During tongue projection, the accelerator muscle elongates and slides forward along the entoglossal process until the entire muscle extends beyond the distal end of the process. The accelerator muscle fibres are arranged in transverse planes (small deviations are possible), and form (hitherto unexplained) spiral-shaped arcs from the peripheral to the internal boundary. To initiate tongue projection, the muscle fibres probably generate a high intramuscular pressure. The resulting negative pressure gradient (from base to tip) causes the muscle to elongate and to accelerate forward. Effective forward sliding is made possible by a lubricant and a relatively low normal stress exerted on the proximal cylindrical part of the entoglossal process. A relatively high normal stress is, however, probably required for an effective acceleration of muscle tissue over the tapered end of the process. For optimal performance, the fast extension movement should occur without significant (energy-absorbing) torsional motion of the tongue. In addition, the tongue extension movement is aided by a close packing of the muscle fibres (required for a high power density) and a uniform strain and work output in every cross-section of the muscle. A quantitative model of the accelerator muscle was developed that predicts internal muscle fibre arrangements based on the functional requirements above and the physical principle of mechanical stability. The curved shapes and orientations of the muscle fibres typically found in the accelerator muscle were accurately predicted by the model. Furthermore, the model predicts that the reduction of the entoglossal radius towards the tip (and thus the internal radius of the muscle) tends to increase the normal stress on the entoglossal bone.

  16. Radiobiological Determination of Dose Escalation and Normal Tissue Toxicity in Definitive Chemoradiation Therapy for Esophageal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Samantha, E-mail: Samantha.warren@oncology.ox.ac.uk; Partridge, Mike; Carrington, Rhys

    2014-10-01

    Purpose: This study investigated the trade-off in tumor coverage and organ-at-risk sparing when applying dose escalation for concurrent chemoradiation therapy (CRT) of mid-esophageal cancer, using radiobiological modeling to estimate local control and normal tissue toxicity. Methods and Materials: Twenty-one patients with mid-esophageal cancer were selected from the SCOPE1 database (International Standard Randomised Controlled Trials number 47718479), with a mean planning target volume (PTV) of 327 cm³. A boost volume, PTV2 (GTV + 0.5 cm margin), was created. Radiobiological modeling of tumor control probability (TCP) estimated the dose required for a clinically significant (+20%) increase in local control as 62.5 Gy/25 fractions. A RapidArc (RA) plan with a simultaneously integrated boost (SIB) to PTV2 (RA62.5) was compared to a standard dose plan of 50 Gy/25 fractions (RA50). Dose-volume metrics and estimates of normal tissue complication probability (NTCP) for heart and lungs were compared. Results: Clinically acceptable dose escalation was feasible for 16 of 21 patients, with significant gains (>18%) in tumor control from 38.2% (RA50) to 56.3% (RA62.5), and only a small increase in predicted toxicity: median heart NTCP 4.4% (RA50) versus 5.6% (RA62.5), P<.001, and median lung NTCP 6.5% (RA50) versus 7.5% (RA62.5), P<.001. Conclusions: Dose escalation to the GTV to improve local control is possible when overlap between PTV and organ-at-risk (<8% heart volume and <2.5% lung volume overlap for this study) generates only negligible increase in lung or heart toxicity. These predictions from radiobiological modeling should be tested in future clinical trials.

  17. Radiobiological concepts for treatment planning of schemes that combines external beam radiotherapy and systemic targeted radiotherapy

    NASA Astrophysics Data System (ADS)

    Fabián Calderón Marín, Carlos; González González, Joaquín Jorge; Laguardia, Rodolfo Alfonso

    2017-09-01

    The combination of external beam radiotherapy and systemic targeted radiotherapy (CIERT) could be a reliable alternative for patients with multiple lesions or those for whom treatment planning may be difficult because of organ-at-risk (OAR) constraints. Radiobiological models should have the capacity to predict the biological response to irradiation while accounting for the differences in the temporal pattern of dose delivery between the two modalities. Two CIERT scenarios were studied: a sequential combination, in which one modality is delivered after the other, and a concurrent combination, in which both modalities run simultaneously. Expressions are provided for calculation of the dose-response magnitudes Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). General results on radiobiological modeling using the linear-quadratic (LQ) model are also discussed. Inter-subject variation of radiosensitivity and the volume irradiation effect in CIERT are studied. OAR doses should be kept under control during the planning of concurrent CIERT treatments as the administered activity is increased. The formulation presented here may be used for biological evaluation of prescriptions and biological treatment planning of CIERT schemes in clinical situations.

  18. A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction

    NASA Technical Reports Server (NTRS)

    Smith, Samantha A.; DelGenio, Anthony D.

    1998-01-01

    A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but as cloud fraction decreased they changed to exponential distributions, which peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It requires only basic meteorological parameters, the depth of the saturated layer, and the standard deviation of cloud depth as input.
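    A toy version of the central assumption, that Gaussian cloud-depth variability alone generates the in-cloud ice histogram and the cloud fraction, might look as follows; the depth statistics and the depth-to-mixing-ratio mapping are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
mean_depth, sigma_depth = 1.2, 0.6                 # km, assumed layer statistics
depth = rng.normal(mean_depth, sigma_depth, 100_000)
depth = np.clip(depth, 0.0, None)                  # negative depths mean "no cloud"

cloud_fraction = np.mean(depth > 0.0)
ice_mixing_ratio = 0.02 * depth ** 1.5             # assumed monotonic depth -> q_ice mapping
in_cloud = ice_mixing_ratio[depth > 0.0]

hist, _ = np.histogram(in_cloud, bins=20)
print(f"cloud fraction ~ {cloud_fraction:.2f}")
print("in-cloud histogram counts:", hist[:5], "...")   # near-Gaussian when overcast,
                                                       # quasi-exponential when broken
```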

  19. Usefulness of the novel risk estimation software, Heart Risk View, for the prediction of cardiac events in patients with normal myocardial perfusion SPECT.

    PubMed

    Sakatani, Tomohiko; Shimoo, Satoshi; Takamatsu, Kazuaki; Kyodo, Atsushi; Tsuji, Yumika; Mera, Kayoko; Koide, Masahiro; Isodono, Koji; Tsubakimoto, Yoshinori; Matsuo, Akiko; Inoue, Keiji; Fujita, Hiroshi

    2016-12-01

    Myocardial perfusion single-photon emission-computed tomography (SPECT) can predict cardiac events in patients with coronary artery disease with high accuracy; however, pseudo-negative cases sometimes occur. Heart Risk View, which is based on the prospective cohort study (J-ACCESS), is software for evaluating cardiac event probability. We examined whether Heart Risk View was useful for evaluating cardiac risk in patients with normal myocardial perfusion SPECT (MPS). We studied 3461 consecutive patients who underwent MPS to detect myocardial ischemia; those who had normal MPS were enrolled in this study (n = 698). We calculated the cardiac event probability with Heart Risk View and followed the patients for 3.8 ± 2.4 years. The cardiac events were defined as cardiac death, non-fatal myocardial infarction, and heart failure requiring hospitalization. During the follow-up period, 21 patients (3.0 %) had cardiac events. The event probability calculated by Heart Risk View was higher in the event group (5.5 ± 2.6 vs. 2.9 ± 2.6 %, p < 0.001). According to the receiver-operating characteristic curve, the cut-off point of the event probability for predicting cardiac events was 3.4 % (sensitivity 0.76, specificity 0.72, and AUC 0.85). Kaplan-Meier curves and the log-rank test revealed a higher event rate in the high-event-probability group (p < 0.001). Although myocardial perfusion SPECT is useful for the prediction of cardiac events, risk estimation by Heart Risk View adds more prognostic information, especially in patients with normal MPS.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, L; Soldner, A; Kirk, M

    Purpose: The beam range uncertainty presents a special challenge for proton therapy. Novel technologies currently under development offer strategies to reduce the range uncertainty [1,2]. This work quantifies the potential advantages that could be realized by such a reduction for dosimetrically challenging chordomas at the base of skull. Therapeutic improvement was assessed by evaluating tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP). Methods: Treatment plans were made for a modulated-scanned proton delivery technique using the Eclipse treatment planning system. The prescription dose was 7920 cGy to the CTV. Three different range uncertainty scenarios were considered: 5 mm (3.5% of the beam range + 1 mm, representing current clinical practice, "Curr"), 2 mm (1.3%), and 1 mm (0.7%). For each of 4 patients, 3 different PTVs were defined via uniform expansion of the CTV by the value of the range uncertainty. Tumor control probability (TCP) and normal tissue complication probabilities (NTCPs) for organs-at-risk (OARs) were calculated using the Lyman-Kutcher-Burman [3] formalism and published model parameters (Terahara et al. [4]; the QUANTEC report, Int. J. Radiat. Oncol. Biol. Phys. 76(3 Suppl.):S10; Burman et al., Int. J. Radiat. Oncol. Biol. Phys. 21:123). Our plan optimization strategy was to achieve PTV coverage close to prescription while maintaining OAR NTCP values at or better than those of the Curr plan. Results: The average TCP values for the 5, 2, and 1 mm range uncertainty scenarios are 51%, 55% and 65%. The improvement in TCP for patients was between 4 and 30%, depending primarily on the proximity of the GTV to OARs. The average NTCPs for the brainstem and cord were about 4% and 1%, respectively, for all target margins. Conclusion: For base of skull chordomas, reduced target margins can substantially increase the TCP without increasing the NTCP. This work demonstrates the potential significance of a reduction in the range uncertainty for proton beams.

  1. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Defraene, Gilles, E-mail: gilles.defraene@uzleuven.be; Van den Bergh, Laura; Al-Mamgani, Abrahim

    2012-03-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman (LKB) and Relative Seriality (RS)) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models rectal bleeding fits had the highest AUC (0.77) where it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions: Comparable prediction models were obtained with LKB, RS, and logistic NTCP models. Including clinical factors improved the predictive power of all models significantly.

  2. Anharmonic vibrations around a triaxial nuclear deformation "frozen" to γ = 30°

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buganu, Petrica, E-mail: buganu@theory.nipne.ro; Budaca, Radu

    2015-12-07

    The Davydov-Chaban Hamiltonian with a sextic oscillator potential for the variable β and γ fixed to 30° is exactly solved for the ground and β bands and approximately for the γ band. The model is called Z(4)-Sextic in connection with the already established Z(4) solution. The energy spectra, normalized to the energy of the first excited state, and several B(E2) transition probabilities, normalized to the B(E2) transition from the first excited state to the ground state, depend on a single parameter α. By varying α within a sufficiently large interval, a shape phase transition from an approximately spherical shape to a deformed one is evidenced.

  3. Modeling the clinical and economic implications of obesity using microsimulation.

    PubMed

    Su, W; Huang, J; Chen, F; Iacobucci, W; Mocarski, M; Dall, T M; Perreault, L

    2015-01-01

    The obesity epidemic has raised considerable public health concerns, but there are few validated longitudinal simulation models examining the human and economic cost of obesity. This paper describes a microsimulation model as a comprehensive tool to understand the relationship between body weight, health, and economic outcomes. Patient health and economic outcomes were simulated annually over 10 years using a Markov-based microsimulation model. The obese population examined is nationally representative of obese adults in the US from the 2005-2012 National Health and Nutrition Examination Surveys, while a matched normal weight population was constructed to have similar demographics as the obese population during the same period. Prediction equations for onset of obesity-related comorbidities, medical expenditures, economic outcomes, mortality, and quality-of-life came from published trials and studies supplemented with original research. Model validation followed International Society for Pharmacoeconomics and Outcomes Research practice guidelines. Among surviving adults, relative to a matched normal weight population, obese adults averaged $3900 higher medical expenditures in the initial year, growing to $4600 higher expenditures in year 10. Obese adults had higher initial prevalence and higher simulated onset of comorbidities as they aged. Over 10 years, excess medical expenditures attributed to obesity averaged $4280 annually, ranging from $2820 for obese category I to $5100 for obese category II and $8710 for obese category III. Each excess kilogram of weight contributed $140 in higher annual costs, on average, ranging from $136 (obese I) to $152 (obese III). Poor health associated with obesity increased work absenteeism and mortality, and lowered employment probability, personal income, and quality-of-life. This validated model helps illustrate why obese adults have higher medical and indirect costs relative to normal weight adults, and shows that medical costs for obese adults rise more rapidly with aging relative to normal weight adults.

  4. Universal Inverse Power-Law Distribution for Fractal Fluctuations in Dynamical Systems: Applications for Predictability of Inter-Annual Variability of Indian and USA Region Rainfall

    NASA Astrophysics Data System (ADS)

    Selvam, A. M.

    2017-01-01

    Dynamical systems in nature exhibit self-similar fractal space-time fluctuations on all scales indicating long-range correlations and, therefore, the statistical normal distribution with implicit assumption of independence, fixed mean and standard deviation cannot be used for description and quantification of fractal data sets. The author has developed a general systems theory based on classical statistical physics for fractal fluctuations which predicts the following. (1) The fractal fluctuations signify an underlying eddy continuum, the larger eddies being the integrated mean of enclosed smaller-scale fluctuations. (2) The probability distribution of eddy amplitudes and the variance (square of eddy amplitude) spectrum of fractal fluctuations follow the universal Boltzmann inverse power law expressed as a function of the golden mean. (3) Fractal fluctuations are signatures of quantum-like chaos since the additive amplitudes of eddies when squared represent probability densities analogous to the sub-atomic dynamics of quantum systems such as the photon or electron. (4) The model-predicted distribution is very close to the statistical normal distribution for moderate events within two standard deviations from the mean but exhibits a fat long tail that is associated with hazardous extreme events. Continuous periodogram power spectral analyses of available GHCN annual total rainfall time series for the period 1900-2008 for Indian and USA stations show that the power spectra and the corresponding probability distributions follow the model-predicted universal inverse power law form, signifying an eddy continuum structure underlying the observed inter-annual variability of rainfall. On a global scale, man-made greenhouse-gas-related atmospheric warming would result in intensification of natural climate variability, seen immediately in high-frequency fluctuations such as the QBO and ENSO and at even shorter timescales. Model concepts and results of analyses are discussed with reference to possible prediction of climate change. Model concepts, if correct, unambiguously rule out linear trends in climate. Climate change will only be manifested as an increase or decrease in the natural variability. However, more stringent tests of model concepts and predictions are required before applications to such an important issue as climate change. Observations and simulations with climate models show that precipitation extremes intensify in response to a warming climate (O'Gorman in Curr Clim Change Rep 1:49-59, 2015).

  5. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method is needed to predict the range of ionospheric parameters probabilistically; this problem is addressed in this paper. The time series of the F2-layer critical frequency, foF2(t), were subjected to statistical processing. For the obtained samples {δ foF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances: at sufficiently low probability levels there are arbitrarily large deviations from the normal-process model. Therefore, an attempt is made to describe the statistical samples {δ foF2} with a Poisson-based model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is converted into a nonholomorphic probability-density function with excess and asymmetry. The statistical distributions of the samples {δ foF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description of the variations {δ foF2} and for probabilistic estimates of their range during heliogeophysical disturbances.
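
    The comparison of an empirical sample with a non-Gaussian model distribution via the Kolmogorov criterion can be sketched as below. The normal-plus-jump surrogate used here is purely illustrative and is not the paper's derived distribution function; the point is the use of a custom model CDF in the test.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Surrogate "deviation" sample: a standard normal background plus an occasional positive jump.
    lam = 0.3                                    # jump intensity
    p_jump = 1.0 - np.exp(-lam)
    has_jump = rng.random(300) < p_jump
    sample = np.where(has_jump, rng.normal(1.5, np.sqrt(1.0 + 0.5**2), 300), rng.normal(0.0, 1.0, 300))

    def model_cdf(x):
        """CDF of the same normal-plus-jump mixture, playing the role of the theoretical distribution."""
        return (1.0 - p_jump) * stats.norm.cdf(x, 0.0, 1.0) + p_jump * stats.norm.cdf(x, 1.5, np.sqrt(1.0 + 0.5**2))

    d_stat, p_value = stats.kstest(sample, model_cdf)
    print(f"Kolmogorov statistic = {d_stat:.3f}, coincidence probability P = {p_value:.2f}")
    ```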

  6. Probability distribution of financial returns in a model of multiplicative Brownian motion with stochastic diffusion coefficient

    NASA Astrophysics Data System (ADS)

    Silva, Antonio

    2005-03-01

    It is well-known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]
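
    A rough Euler-Maruyama simulation of Heston-type dynamics illustrates the interpolation between heavy-tailed short-lag returns and near-Gaussian long-lag returns. The parameter values are illustrative assumptions, not the fitted values from the cited papers.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    # Euler-Maruyama simulation of the Heston model: dS/S = mu dt + sqrt(v) dW1,
    # dv = kappa (theta - v) dt + xi sqrt(v) dW2, corr(dW1, dW2) = rho.
    rng = np.random.default_rng(1)
    mu, kappa, theta, xi, rho = 0.0, 2.0, 0.04, 0.3, -0.5
    dt, n_steps, n_paths = 1 / 252, 252, 20_000

    v = np.full(n_paths, theta)
    log_s = np.zeros(n_paths)
    short_lag = None
    for step in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        log_s += (mu - 0.5 * v) * dt + np.sqrt(v * dt) * z1
        v = np.abs(v + kappa * (theta - v) * dt + xi * np.sqrt(v * dt) * z2)  # keep variance non-negative
        if step == 4:
            short_lag = log_s.copy()  # 5-day log returns

    # Short-lag returns are leptokurtic (tent-shaped in log-density); one-year returns are near Gaussian.
    print("excess kurtosis, 5-day :", kurtosis(short_lag))
    print("excess kurtosis, 1-year:", kurtosis(log_s))
    ```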

  7. Bayesian analysis and classification of two Enzyme-Linked Immunosorbent Assay (ELISA) tests without a gold standard

    PubMed Central

    Zhang, Jingyang; Chaloner, Kathryn; McLinden, James H.; Stapleton, Jack T.

    2013-01-01

    Reconciling two quantitative ELISA tests for an antibody to an RNA virus, in a situation without a gold standard and where false negatives may occur, is the motivation for this work. False negatives occur when access of the antibody to the binding site is blocked. Based on the mechanism of the assay, a mixture of four bivariate normal distributions is proposed with the mixture probabilities depending on a two-stage latent variable model including the prevalence of the antibody in the population and the probabilities of blocking on each test. There is prior information on the prevalence of the antibody, and also on the probability of false negatives, and so a Bayesian analysis is used. The dependence between the two tests is modeled to be consistent with the biological mechanism. Bayesian decision theory is utilized for classification. The proposed method is applied to the motivating data set to classify the data into two groups: those with and those without the antibody. Simulation studies describe the properties of the estimation and the classification. Sensitivity to the choice of the prior distribution is also addressed by simulation. The same model with two levels of latent variables is applicable in other testing procedures such as quantitative polymerase chain reaction tests where false negatives occur when there is a mutation in the primer sequence. PMID:23592433
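
    A sketch of how the two-stage latent-variable structure yields a four-component bivariate-normal mixture, under the simplifying assumption that antibody carriers blocked on both tests are indistinguishable from antibody-negative subjects. All numerical values (prevalence, blocking probabilities, means, covariance) are hypothetical placeholders, not the paper's estimates.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    p_prev, b1, b2 = 0.6, 0.1, 0.15   # prevalence, blocking probability on test 1 / test 2

    weights = np.array([
        (1 - p_prev) + p_prev * b1 * b2,      # negative on both tests
        p_prev * (1 - b1) * (1 - b2),         # reactive on both tests
        p_prev * b1 * (1 - b2),               # blocked on test 1 only
        p_prev * (1 - b1) * b2,               # blocked on test 2 only
    ])
    means = [np.array(m) for m in ([0.2, 0.2], [2.5, 2.4], [0.3, 2.3], [2.4, 0.3])]
    cov = np.array([[0.10, 0.03], [0.03, 0.10]])  # shared covariance, kept simple for the sketch

    def mixture_density(y):
        """Mixture density of a pair of (transformed) ELISA readings y = (y1, y2)."""
        return sum(w * multivariate_normal(mean=m, cov=cov).pdf(y) for w, m in zip(weights, means))

    print(mixture_density([2.4, 2.5]))
    ```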

  8. Alternative configurations of Quantile Regression for estimating predictive uncertainty in water level forecasts for the Upper Severn River: a comparison

    NASA Astrophysics Data System (ADS)

    Lopez, Patricia; Verkade, Jan; Weerts, Albrecht; Solomatine, Dimitri

    2014-05-01

    Hydrological forecasting is subject to many sources of uncertainty, including those originating in initial state, boundary conditions, model structure and model parameters. Although uncertainty can be reduced, it can never be fully eliminated. Statistical post-processing techniques constitute an often-used approach to estimate the hydrological predictive uncertainty, where a model of forecast error is built using a historical record of past forecasts and observations. The present study focuses on the use of the Quantile Regression (QR) technique as a hydrological post-processor. It estimates the predictive distribution of water levels using deterministic water level forecasts as predictors. This work aims to thoroughly verify uncertainty estimates using the implementation of QR that was applied in an operational setting in the UK National Flood Forecasting System, and to inter-compare forecast quality and skill in various, differing configurations of QR. These configurations are (i) 'classical' QR, (ii) QR constrained by a requirement that quantiles do not cross, (iii) QR derived on time series that have been transformed into the Normal domain (Normal Quantile Transformation - NQT), and (iv) a piecewise linear derivation of QR models. The QR configurations are applied to fourteen hydrological stations on the Upper Severn River with different catchment characteristics. Results of each QR configuration are conditionally verified for progressively higher flood levels, in terms of commonly used verification metrics and skill scores. These include Brier's probability score (BS), the continuous ranked probability score (CRPS) and corresponding skill scores as well as the Relative Operating Characteristic score (ROCS). Reliability diagrams are also presented and analysed. The results indicate that none of the four Quantile Regression configurations clearly outperforms the others.
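
    The 'classical' QR configuration (i) amounts to fitting one quantile regression per probability level with the deterministic forecast as predictor. A minimal sketch on synthetic data, using statsmodels, is given below; the variable names and numbers are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # One quantile-regression model per probability level, with the deterministic forecast as predictor.
    rng = np.random.default_rng(3)
    forecast = rng.uniform(0.5, 5.0, 500)                       # deterministic water level forecast (m)
    observed = forecast + rng.normal(0.0, 0.15 * forecast)      # synthetic verifying observations (m)

    X = sm.add_constant(forecast)
    quantiles = [0.05, 0.25, 0.50, 0.75, 0.95]
    fits = {q: sm.QuantReg(observed, X).fit(q=q) for q in quantiles}

    x_new = sm.add_constant(np.array([3.2]), has_constant="add")  # a new forecast of 3.2 m
    predictive = {q: float(fits[q].predict(x_new)[0]) for q in quantiles}
    print(predictive)                                             # estimated predictive quantiles
    ```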

  9. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
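
    The chance-constraint idea can be illustrated with a Monte Carlo check of an organ-at-risk constraint in the linear-quadratic model. The schedule, parameter distributions, and BED limit below are illustrative assumptions, not the paper's formulation or its optimization procedure.

    ```python
    import numpy as np

    # Monte Carlo check of a chance constraint on OAR biologically effective dose (BED) in the
    # linear-quadratic model, BED = n * d * (1 + d / (alpha/beta)).
    rng = np.random.default_rng(4)
    n_fractions, dose_per_fraction = 30, 2.0            # candidate schedule
    sparing_factor = rng.normal(0.85, 0.05, 100_000)    # uncertain OAR sparing factor
    ab_oar = rng.normal(3.0, 0.5, 100_000)              # uncertain alpha/beta of the OAR (Gy)

    d_oar = sparing_factor * dose_per_fraction
    bed_oar = n_fractions * d_oar * (1.0 + d_oar / ab_oar)
    bed_limit = 100.0                                   # allowable OAR BED threshold (Gy)

    prob_feasible = np.mean(bed_oar <= bed_limit)
    print(f"P(OAR constraint satisfied) = {prob_feasible:.3f}")   # require e.g. >= 0.95
    ```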

  10. Demand for private health insurance: how important is the quality gap?

    PubMed

    Costa, Joan; García, Jaume

    2003-07-01

    Perceived quality of private and public health care, income and insurance premium are among the determinants of demand for private health insurance (PHI). In the context of a model in which individuals are expected utility maximizers, the non-purchasing choice can result in consuming either public health care or private health care with the full cost paid out-of-pocket. This paper empirically analyses the effect of the determinants of the demand for PHI on the probability of purchasing PHI by estimating a pseudo-structural model to deal with missing data and endogeneity issues. Our findings support the hypothesis that the demand for PHI is indeed driven by the quality gap between private and public health care. As expected, PHI is a normal good, and a rise in the insurance premium reduces the probability of purchasing PHI, albeit with price elasticities smaller than one in absolute value for different groups of individuals. Copyright 2002 John Wiley & Sons, Ltd.

  11. Statistical analysis of PM₁₀ concentrations at different locations in Malaysia.

    PubMed

    Sansuddin, Nurulilyana; Ramli, Nor Azam; Yahaya, Ahmad Shukri; Yusof, Noor Faizah Fitri Md; Ghazali, Nurul Adyani; Madhoun, Wesam Ahmed Al

    2011-09-01

    Malaysia has experienced several haze events since the 1980s as a consequence of the transboundary movement of air pollutants emitted from forest fires and open burning activities. Hazy episodes can also result from local activities and are then categorized as "localized haze". General probability distributions (i.e., gamma and log-normal) were chosen to analyze the PM₁₀ concentration data at two different types of locations in Malaysia: industrial (Johor Bahru and Nilai) and residential (Kota Kinabalu and Kuantan). These areas were chosen based on their frequently high PM₁₀ concentration readings. The best models representing the areas were chosen based on their performance indicator values. The best-fitting distributions provided the probability of exceedance and the return period for both the actual and predicted concentrations, based on the threshold limit given by the Malaysian Ambient Air Quality Guidelines (a 24-h average of 150 μg/m³) for PM₁₀ concentrations. Short-term (14-day) predictions of PM₁₀ exceedances were obtained using an autoregressive model.
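
    A sketch of the distribution-fitting step: fit gamma and log-normal distributions to daily PM₁₀ data and derive the exceedance probability and return period for the 150 μg/m³ guideline. The data here are synthetic stand-ins for the monitoring records.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    pm10 = rng.lognormal(mean=np.log(55), sigma=0.45, size=365)   # one year of synthetic 24-h means (ug/m3)

    fits = {
        "gamma":     stats.gamma(*stats.gamma.fit(pm10, floc=0)),
        "lognormal": stats.lognorm(*stats.lognorm.fit(pm10, floc=0)),
    }
    threshold = 150.0
    for name, dist in fits.items():
        p_exceed = dist.sf(threshold)                  # P(PM10 > 150)
        return_period_days = np.inf if p_exceed == 0 else 1.0 / p_exceed
        print(f"{name:10s}  P(exceed) = {p_exceed:.4f}  return period ~ {return_period_days:.0f} days")
    ```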

  12. On estimation of linear transformation models with nested case–control sampling

    PubMed Central

    Liu, Mengling

    2011-01-01

    Nested case–control (NCC) sampling is widely used in large epidemiological cohort studies for its cost effectiveness, but its data analysis primarily relies on the Cox proportional hazards model. In this paper, we consider a family of linear transformation models for analyzing NCC data and propose an inverse selection probability weighted estimating equation method for inference. Consistency and asymptotic normality of our estimators for regression coefficients are established. We show that the asymptotic variance has a closed analytic form and can be easily estimated. Numerical studies are conducted to support the theory and an application to the Wilms’ Tumor Study is also given to illustrate the methodology. PMID:21912975

  13. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction to probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied to statistical analysis.

  14. Clinicians' perceptions of the value of ventilation-perfusion scans.

    PubMed

    Siegel, Alan; Holtzman, Stephen R; Bettmann, Michael A; Black, William C

    2004-07-01

    The goal of this investigation was to understand clinicians' perceptions of the probability of pulmonary embolism as a function of V/Q scan results of normal, low, intermediate, and high probability. A questionnaire was developed and distributed to 429 clinicians at a single academic medical center. The response rate was 44% (188 of 429). The questions included level of training, specialty, probability of PE given 1 of the 4 V/Q scan results, and estimations of the charges for V/Q scanning and pulmonary angiography, and estimations of the risks of pulmonary angiography. The medians and ranges for the probability of pulmonary embolism given a normal, low, intermediate, and high probability V/Q scan result were 2.5% (0-30), 12.5% (0.5-52.5), 41.25% (5-75), and 85% (5-100), respectively. Eleven percent (21 of 188) of the respondents listed the probability of PE in patients with a low probability V/Q scan as being 5% or less, and 33% (62 of 188) listed the probability of PE given an intermediate probability scan as 50% or greater. The majority correctly identified the rate of serious complications of pulmonary arteriography, but many respondents underestimated the charge for V/Q scans and pulmonary arteriography. A substantial minority of clinicians do not understand the probability of pulmonary embolism in patients with low and intermediate probability ventilation-perfusion scans. More quantitative reporting of results is recommended. This could be particularly important because VQ scans are used less frequently but are still needed in certain clinical situations.

  15. Predicting the Probability of Abnormal Stimulated Growth Hormone Response in Children After Radiotherapy for Brain Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua Chiaho, E-mail: Chia-Ho.Hua@stjude.org; Wu Shengjie; Chemaitilly, Wassim

    Purpose: To develop a mathematical model utilizing more readily available measures than stimulation tests that identifies brain tumor survivors with high likelihood of abnormal growth hormone secretion after radiotherapy (RT), to avoid late recognition and a consequent delay in growth hormone replacement therapy. Methods and Materials: We analyzed 191 prospectively collected post-RT evaluations of peak growth hormone level (arginine tolerance/levodopa stimulation test), serum insulin-like growth factor 1 (IGF-1), IGF-binding protein 3, height, weight, growth velocity, and body mass index in 106 children and adolescents treated for ependymoma (n = 72), low-grade glioma (n = 28) or craniopharyngioma (n = 6), who had normal growth hormone levels before RT. Normal level in this study was defined as a peak growth hormone response to the stimulation test of ≥7 ng/mL. Results: Independent predictor variables identified by multivariate logistic regression with high statistical significance (p < 0.0001) included IGF-1 z score, weight z score, and hypothalamic dose. The developed predictive model demonstrated a strong discriminatory power with an area under the receiver operating characteristic curve of 0.883. At a potential cutoff probability of 0.3, the sensitivity was 80% and the specificity 78%. Conclusions: Without unpleasant and expensive frequent stimulation tests, our model provides a quantitative approach to closely follow the growth hormone secretory capacity of brain tumor survivors. It allows identification of high-risk children for subsequent confirmatory tests and in-depth workup for diagnosis of growth hormone deficiency.
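
    A hedged sketch of the modelling approach, multivariable logistic regression with an ROC assessment and a probability cutoff, on simulated stand-in data; the coefficients, dose range, and outcome rates below are not those of the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(6)
    n = 191
    X = np.column_stack([
        rng.normal(0, 1, n),        # IGF-1 z-score
        rng.normal(0, 1, n),        # weight z-score
        rng.uniform(10, 60, n),     # hypothalamic dose (Gy)
    ])
    logit = -2.0 - 0.8 * X[:, 0] + 0.4 * X[:, 1] + 0.06 * X[:, 2]     # invented generating model
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))                  # 1 = abnormal GH response

    model = LogisticRegression(max_iter=1000).fit(X, y)
    prob = model.predict_proba(X)[:, 1]
    print("AUC on the simulated data:", round(roc_auc_score(y, prob), 3))
    print("flagged for confirmatory testing:", int((prob >= 0.3).sum()), "of", n)
    ```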

  16. Growth mixture modelling in families of the Framingham Heart Study

    PubMed Central

    2009-01-01

    Growth mixture modelling, a less explored method in genetic research, addresses unobserved heterogeneity in population samples. We applied this technique to longitudinal data of the Framingham Heart Study. We examined systolic blood pressure (BP) measures in 1060 males from 692 families and detected three subclasses, which varied significantly in their developmental trajectories over time. The first class consisted of 60 high-risk individuals with elevated BP early in life and a steep increase over time. The second group of 131 individuals displayed normal BP at first, but showed a significant increase over time and reached high BP values late in their lifetime. The largest group of 869 individuals could be considered a normative group with normal BP on all exams. To identify genetic modulators for this phenotype, we tested 2,340 single-nucleotide polymorphisms on chromosome 8 for association with the class membership probabilities of our model. The probability of being in Class 1 was significantly associated with a very rare variant (rs1445404) present in only four individuals from four different families located in the coding region of the gene EYA1 (eyes absent homolog 1 in Drosophila) (p = 1.39 × 10⁻¹³). Mutations in EYA1 are known to cause branchio-oto-renal syndrome, as well as isolated renal malformations. Renal malformations could cause high BP early in life. This result awaits replication; however, it suggests that analyzing genetic data stratified for high-risk subgroups defined by a unique development over time could be useful for the detection of rare mutations in common multi-factorial diseases.

  17. Protons in head-and-neck cancer: bridging the gap of evidence.

    PubMed

    Ramaekers, Bram L T; Grutters, Janneke P C; Pijls-Johannesma, Madelon; Lambin, Philippe; Joore, Manuela A; Langendijk, Johannes A

    2013-04-01

    To use Normal Tissue Complication Probability (NTCP) models and comparative planning studies to explore the (cost-)effectiveness of swallowing sparing intensity modulated proton radiotherapy (IMPT) compared with swallowing sparing intensity modulated radiotherapy with photons (IMRT) in head and neck cancer (HNC). A Markov model was constructed to examine and compare the costs and quality-adjusted life years (QALYs) of the following strategies: (1) IMPT for all patients; (2) IMRT for all patients; and (3) IMPT if efficient. The assumption of equal survival for IMPT and IMRT in the base case analysis was relaxed in a sensitivity analysis. Intensity modulated proton radiation therapy and IMRT for all patients yielded 6.620 and 6.520 QALYs and cost €50,989 and €41,038, respectively. Intensity modulated proton radiation therapy if efficient yielded 6.563 QALYs and cost €43,650. The incremental cost-effectiveness ratio of IMPT if efficient versus IMRT for all patients was €60,278 per QALY gained. In the sensitivity analysis, IMRT was more effective (0.967 QALYs) and less expensive (€8218) and thus dominated IMPT for all patients. Cost-effectiveness analysis based on normal tissue complication probability models and planning studies proved feasible and informative and enables the analysis of individualized strategies. The increased effectiveness of IMPT does not seem to outweigh the higher costs for all head-and-neck cancer patients. However, when assuming equal survival among both modalities, there seems to be value in identifying those patients for whom IMPT is cost-effective. Copyright © 2013 Elsevier Inc. All rights reserved.
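
    The incremental cost-effectiveness ratio quoted above can be checked directly from the reported costs and QALYs; the small discrepancy from €60,278 reflects rounding of the quoted figures.

    ```python
    # Worked ICER check for "IMPT if efficient" versus "IMRT for all", using the rounded
    # figures quoted in the abstract.
    cost_imrt_all, qaly_imrt_all = 41_038.0, 6.520        # IMRT for all patients
    cost_impt_sel, qaly_impt_sel = 43_650.0, 6.563        # IMPT if efficient

    icer = (cost_impt_sel - cost_imrt_all) / (qaly_impt_sel - qaly_imrt_all)
    print(f"ICER ~ EUR {icer:,.0f} per QALY gained")      # ~ EUR 60,700 per QALY
    ```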

  18. Statistics of Optical Coherence Tomography Data From Human Retina

    PubMed Central

    de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo

    2010-01-01

    Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model the distribution of intensities in OCT pseudoimages well. Moreover, we show a small but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits both the stretched exponential distribution of intensities and their spatial correlation well. In normal retinas, fit parameters of this model are relatively constant along retinal layers but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733
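
    Fitting a stretched-exponential density to intensity data requires a custom normalization, since it is not a standard named distribution in most libraries. Below is a maximum-likelihood sketch on synthetic positive "intensities"; the data and starting values are placeholders, not the paper's B-scan measurements.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    # MLE fit of p(x) = exp(-(x/lam)**beta) / (lam * Gamma(1 + 1/beta)), x >= 0.
    rng = np.random.default_rng(7)
    intensity = rng.weibull(0.8, 5000) * 40.0   # synthetic positive "intensities"

    def neg_log_lik(params):
        log_lam, log_beta = params
        lam, beta = np.exp(log_lam), np.exp(log_beta)
        log_norm = np.log(lam) + gammaln(1.0 + 1.0 / beta)
        return np.sum((intensity / lam) ** beta) + intensity.size * log_norm

    res = minimize(neg_log_lik, x0=[np.log(intensity.mean()), 0.0], method="Nelder-Mead")
    lam_hat, beta_hat = np.exp(res.x)
    print(f"lambda = {lam_hat:.2f}, beta = {beta_hat:.2f}")   # beta < 1 indicates a heavy-tailed fit
    ```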

  19. Ditching Investigation of a 1/11-Scale Model of the Chance Vought F7U-3 Airplane, TED NO. NACA DE 360

    NASA Technical Reports Server (NTRS)

    Fisher, Lloyd J.; Windham, John O.

    1955-01-01

    An investigation was made of a 1/11-scale dynamically similar model of the Chance Vought F7U-3 airplane to study its behavior when ditched. The model was landed in calm water at the Langley tank no. 2 monorail. Various landing attitudes, speeds, and configurations were investigated. The behavior of the model was determined from visual observations, acceleration records, and motion-picture records of the ditchings. Data are presented in tabular form, sequence photographs, time-history acceleration curves, and plots of attitude change against time after contact. From the results of the investigation, it was concluded that the airplane should be ditched at the lowest speed and highest attitude consistent with adequate control. The aft part of the fuselage and the main landing-gear doors will probably be damaged. In a calm-water ditching under these conditions the airplane will probably skip slightly and then porpoise for the remainder of the run. Maximum longitudinal decelerations will be about 3 1/2g and maximum normal accelerations will be about 7g in a landing run of about 500 feet.

  20. Dynamical-statistical seasonal prediction for western North Pacific typhoons based on APCC multi-models

    NASA Astrophysics Data System (ADS)

    Kim, Ok-Yeon; Kim, Hye-Mi; Lee, Myong-In; Min, Young-Mi

    2017-01-01

    This study aims at predicting the seasonal number of typhoons (TY) over the western North Pacific with an Asia-Pacific Climate Center (APCC) multi-model ensemble (MME)-based dynamical-statistical hybrid model. The hybrid model uses the statistical relationship between the number of TY during the typhoon season (July-October) and the large-scale key predictors forecasted by the APCC MME for the same season. The cross-validation result from the MME hybrid model demonstrates high prediction skill, with a correlation of 0.67 between the hindcasts and observations for 1982-2008. The cross-validation from the hybrid model with the individual models participating in the MME indicates that there is no single model which consistently outperforms the other models in predicting typhoon number. Although the forecast skill of the MME is not always the highest compared to that of each individual model, the MME skill shows higher average correlations and a smaller variance of correlations. Given the large set of ensemble members from the multiple models, a relative operating characteristic score reveals an 82 % (above-normal) and 78 % (below-normal) improvement for the probabilistic prediction of the number of TY. This implies an 82 % (78 %) probability that the forecasts successfully discriminate above-normal (below-normal) years from other years. The hybrid model's forecasts for the past 7 years (2002-2008) are more skillful than those from the Tropical Storm Risk consortium. Using the large set of ensemble members from the multiple models, the APCC MME could provide useful deterministic and probabilistic seasonal typhoon forecasts to end-users, in particular the residents of tropical cyclone-prone areas in the Asia-Pacific region.

  1. Tornado risks and design windspeeds for the Oak Ridge Plant Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1975-08-01

    The effects of tornadoes and other extreme winds should be considered in establishing design criteria for structures to resist wind loads. Design standards that are incorporated in building codes do not normally include the effects of tornadoes in their wind load criteria. Some tornado risk models ignore the presence of nontornadic extreme winds. The purpose of this study is to determine the probability of tornadic and straight winds exceeding a threshold value in the geographical region surrounding the Oak Ridge, Tennessee plant site.

  2. Technology-Enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…

  3. Crop calendars for the US, USSR, and Canada in support of the early warning project

    NASA Technical Reports Server (NTRS)

    Hodges, T.; Sestak, M. L.; Trenchard, M. H. (Principal Investigator)

    1980-01-01

    New crop calendars are produced for U.S. regions where several years of periodic growth stage observations are available on a CRD basis. Preexisting crop calendars from the LACIE are also collected as are U.S. crop calendars currently being created for the Foreign Commodities Production Forecast project. For the U.S.S.R. and Canada, no new crop calendars are created because no new data are available. Instead, LACIE crop calendars are compared against simulated normal daily temperatures and against the Robertson wheat and Williams barley phenology models run on the simulated normal temperatures. Severe inconsistencies are noted and discussed. For the U.S.S.R., spring and fall planting dates can probably be estimated accurately from satellite or meteorological data. For the starter model problem, the Feyerherm spring wheat model is recommended for spring planted small grains, and the results of an analysis are presented. For fall planted small grains, use of normal planting dates supplemented by spectral observation of an early stage is recommended. The importance of nonmeteorological factors as they pertain to meteorological factors in determining fall planting is discussed. Crop calendar data available at the Johnson Space Center for the U.S., U.S.S.R., Canada, and other countries are inventoried.

  4. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    PubMed

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
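
    The SVD route can be illustrated with the closely related SNP-BLUP (ridge) estimator, where the decomposition of the genotype matrix gives marker effects in closed form. This is a starting-point sketch on simulated genotypes, not the paper's BayesC approximation itself; all dimensions and variance parameters are illustrative.

    ```python
    import numpy as np

    # SNP-BLUP via SVD: b_hat = V diag(d / (d^2 + lambda)) U^T y, equivalent to ridge regression.
    rng = np.random.default_rng(8)
    n_ind, n_snp = 500, 5000
    Z = rng.binomial(2, 0.3, size=(n_ind, n_snp)).astype(float)
    Z -= Z.mean(axis=0)                                  # center genotypes
    true_b = np.zeros(n_snp)
    true_b[rng.choice(n_snp, 50, replace=False)] = rng.normal(0, 0.3, 50)
    y = Z @ true_b + rng.normal(0, 1.0, n_ind)

    U, d, Vt = np.linalg.svd(Z, full_matrices=False)     # computed once, reusable across analyses
    lam = 200.0                                          # ridge parameter = sigma_e^2 / sigma_b^2
    b_hat = Vt.T @ (d / (d**2 + lam) * (U.T @ y))

    print("correlation(true, estimated effects):", round(np.corrcoef(true_b, b_hat)[0, 1], 3))
    ```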

  5. Probability of Regenerating a Normal Limb After Bite Injury in the Mexican Axolotl (Ambystoma mexicanum).

    PubMed

    Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L; Voss, S Randal

    2014-06-01

    Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls ( Ambystoma mexicanum ) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary-housed males and group-housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury likely explains these limb defects, and not abnormal development, because limbs with normal anatomy regenerated after performing rostral amputations. We infer that only 43% of AGSC larvae will present four anatomically normal looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury.

  6. Zero-state Markov switching count-data models: an empirical assessment.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2010-01-01

    In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers that are generated by some counting process, for example, a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications, one fact is undeniable: in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state) whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.

  7. Closed-form solutions of performability. [in computer systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    It is noted that if computing system performance is degradable then system evaluation must deal simultaneously with aspects of both performance and reliability. One approach is the evaluation of a system's performability which, relative to a specified performance variable Y, generally requires solution of the probability distribution function of Y. The feasibility of closed-form solutions of performability when Y is continuous are examined. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. Employing an approximate decomposition of the model, it is shown that a closed-form solution can indeed be obtained.

  8. On the progenitors of Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Livio, Mario; Mazzali, Paolo

    2018-03-01

    We review all the models proposed for the progenitor systems of Type Ia supernovae and discuss the strengths and weaknesses of each scenario when confronted with observations. We show that all scenarios encounter at least a few serious difficulties, if taken to represent a comprehensive model for the progenitors of all Type Ia supernovae (SNe Ia). Consequently, we tentatively conclude that there is probably more than one channel leading to SNe Ia. While the single-degenerate scenario (in which a single white dwarf accretes mass from a normal stellar companion) has been studied in some detail, the other scenarios will need a similar level of scrutiny before any firm conclusions can be drawn.

  9. The ferroan-anorthositic suite and the extent of primordial lunar melting

    NASA Technical Reports Server (NTRS)

    Warren, Paul H.; Kallemeyn, Gregory W.

    1992-01-01

    The Apollo highlands rock collection includes more than 100 'pristine' fragments that survived the intense meteoritic bombardment of the ancient lunar crust with unmixed, endogenously igneous compositions. The geochemical anomaly manifested by the 'ferroan-anorthositic suite' (FAS) appears to reflect a geochemical, and probably also a genetic, bimodality among the ancient lunar cumulates. Early models that purported to account for this bimodality as a product of a single magma have been discredited. The model of the present paper implies that the Mg-suite rocks formed by a comparatively normal variety of basaltic fractional crystallization (FC) shortly after the era of magma ocean (MO) crystallization and FAS genesis.

  10. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model.

    PubMed

    Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit

    2009-01-01

    Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.

  11. Deep Learning Role in Early Diagnosis of Prostate Cancer

    PubMed Central

    Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman

    2018-01-01

    The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and extracted features from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs 3 major processing steps. First, prostate delineation using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, estimation and normalization of diffusion parameters, which are the apparent diffusion coefficients of the delineated prostate volumes at different b values followed by refinement of those apparent diffusion coefficients using a generalized Gaussian Markov random field model. Then, construction of the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen–based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient–cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), which indicate the promising results of the presented computer-aided diagnostic system. PMID:29804518

  12. Effect of Endocrown Restorations with Different CAD/CAM Materials: 3D Finite Element and Weibull Analyses

    PubMed Central

    Ulusoy, Nuran

    2017-01-01

    The aim of this study was to evaluate the effects of two endocrown designs and computer aided design/manufacturing (CAD/CAM) materials on stress distribution and failure probability of restorations applied to severely damaged endodontically treated maxillary first premolar tooth (MFP). Two types of designs without and with 3 mm intraradicular extensions, endocrown (E) and modified endocrown (ME), were modeled on a 3D Finite element (FE) model of the MFP. Vitablocks Mark II (VMII), Vita Enamic (VE), and Lava Ultimate (LU) CAD/CAM materials were used for each type of design. von Mises and maximum principle values were evaluated and the Weibull function was incorporated with FE analysis to calculate the long term failure probability. Regarding the stresses that occurred in enamel, for each group of material, ME restoration design transmitted less stress than endocrown. During normal occlusal function, the overall failure probability was minimum for ME with VMII. ME restoration design with VE was the best restorative option for premolar teeth with extensive loss of coronal structure under high occlusal loads. Therefore, ME design could be a favorable treatment option for MFPs with missing palatal cusp. Among the CAD/CAM materials tested, VMII and VE were found to be more tooth-friendly than LU. PMID:29119108
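
    The Weibull step combines an FE-predicted stress with a two-parameter Weibull strength model. A minimal sketch with placeholder material parameters follows; the stress, characteristic strength, and modulus are illustrative, not the values used in the study.

    ```python
    import numpy as np

    # Weibull failure probability: P_f = 1 - exp(-(sigma / sigma_0) ** m).
    def weibull_failure_probability(sigma, sigma_0, m):
        return 1.0 - np.exp(-np.power(sigma / sigma_0, m))

    peak_stress = 95.0        # MPa, e.g. maximum principal stress from the FE model
    sigma_0, m = 160.0, 8.0   # characteristic strength (MPa) and Weibull modulus of the CAD/CAM block
    print(f"long-term failure probability ~ {weibull_failure_probability(peak_stress, sigma_0, m):.3f}")
    ```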

  13. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers

    PubMed Central

    Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne

    2012-01-01

    AIMS To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration–time curve (AUC) targeted dosage and individualize therapy. METHODS The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation–estimation method. RESULTS The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h⁻¹ (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h⁻¹ (RSE 16.9%) and absorption rate constant 0.758 h⁻¹ (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake, allowed predicting individual AUC0–t. CONCLUSIONS The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC0–t was developed from the final model and can be used routinely to optimize individual dosing. PMID:21988586
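
    A sketch of the two-compartment model with first-order absorption, parameterized with the typical values quoted above. The dose and the AUC window are illustrative, and between-subject variability and the MAP Bayesian step are omitted.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Typical values from the abstract: CL = 13.4 L/h, Vc = 4.94 L, Vp = 8.12 L, Q = 1.25 L/h, ka = 0.758 1/h.
    CL, Vc, Vp, Q, ka = 13.4, 4.94, 8.12, 1.25, 0.758
    dose_mg = 120.0                                       # illustrative oral dose

    def rhs(t, y):
        depot, central, peripheral = y                    # amounts (mg) in each compartment
        return [
            -ka * depot,
            ka * depot - (CL / Vc) * central - (Q / Vc) * central + (Q / Vp) * peripheral,
            (Q / Vc) * central - (Q / Vp) * peripheral,
        ]

    t = np.linspace(0.0, 12.0, 241)
    sol = solve_ivp(rhs, (0.0, 12.0), [dose_mg, 0.0, 0.0], t_eval=t, rtol=1e-8)
    conc = sol.y[1] / Vc                                  # plasma concentration (mg/L)
    auc_0_12 = np.sum((conc[1:] + conc[:-1]) / 2.0 * np.diff(t))   # trapezoidal AUC over 0-12 h
    print(f"predicted AUC(0-12 h) ~ {auc_0_12:.1f} mg*h/L")
    ```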

  14. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  15. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  16. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
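
    The generic idea behind these records, fit a density to residuals observed during normal operation and then run a sequential hypothesis test on incoming residuals, can be sketched as below. This is an illustration of a standard sequential probability ratio test on simulated data, not the patented procedure.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)

    # Training phase: characterize residuals observed during normal asset operation.
    training_residuals = rng.normal(0.0, 1.0, 5000)
    mu0, sigma = training_residuals.mean(), training_residuals.std()
    mu1 = mu0 + 2.0 * sigma                      # alternative hypothesis: a 2-sigma mean shift
    A, B = np.log(99.0), np.log(1.0 / 99.0)      # Wald thresholds for roughly 1% error rates

    # Surveillance phase: accumulate the log-likelihood ratio over incoming residuals.
    llr = 0.0
    for x in rng.normal(1.8, 1.0, 200):          # residuals from a (simulated) degraded asset
        llr += stats.norm.logpdf(x, mu1, sigma) - stats.norm.logpdf(x, mu0, sigma)
        if llr >= A:
            print("alarm: residuals are more consistent with degraded operation")
            break
        if llr <= B:
            llr = 0.0                            # accept normal operation and restart the test
    ```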

  17. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  18. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    PubMed Central

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-01-01

    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on a Lidar point cloud is presented in this paper. The whole area of research is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at the different height levels are then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses on the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at the different height levels can be derived. PMID:27879916

  19. Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames

    NASA Astrophysics Data System (ADS)

    Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz

    2017-11-01

    The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference Δϕ in the respective points. The joint probability density function (jPDF) of these two parameters P(Δϕ, l) is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds numbers. This feature of DE statistics was exploited in a Boltzmann-type evolution-equation-based model for the probability density function (PDF) of the distance between the extremal points P(l) in isotropic turbulence. Later, this model was extended for the jPDF P(Δϕ, l) and then adapted for use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated and an extended model for non-premixed jet flames is introduced, which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project "Milestone".

  20. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA*

    PubMed Central

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-01-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474

  1. Use of weather data and remote sensing to predict the geographic and seasonal distribution of Phlebotomus papatasi in southwest Asia.

    PubMed

    Cross, E R; Newcomb, W W; Tucker, C J

    1996-05-01

    Sandfly fever and leishmaniasis were major causes of infectious disease morbidity among military personnel deployed to the Middle East during World War II. Recently, leishmaniasis has been reported in the United Nations Multinational Forces and Observers in the Sinai. Despite these indications of endemicity, no cases of sandfly fever and only 31 cases of leishmaniasis have been identified among U.S. veterans of the Persian Gulf War. The distribution in the Persian Gulf of the vector, Phlebotomus papatasi, is thought to be highly dependent on environmental conditions, especially temperature and relative humidity. A computer model was developed using the occurrence of P. papatasi as the dependent variable and weather data as the independent variables. The results of this model indicated that the greatest sand fly activity and thus the highest risk of sandfly fever and leishmania infections occurred during the spring/summer months before U.S. troops were deployed to the Persian Gulf. Because the weather model produced probability of occurrence information for locations of the weather stations only, normalized difference vegetation index (NDVI) levels from remotely sensed Advanced Very High Resolution Radiometer satellites were determined for each weather station. From the results of the frequency of NDVI levels by probability of occurrence, the range of NDVI levels for presence of the vector was determined. The computer then identified all pixels within the NDVI range indicated and produced a computer-generated map of the probable distribution of P. papatasi. The resulting map expanded the analysis to areas where there were no weather stations and from which no information was reported in the literature, identifying these areas as having either a high or low probability of vector occurrence.

  2. Boundary Layer Effect on Behavior of Discrete Models

    PubMed Central

    Eliáš, Jan

    2017-01-01

    The paper studies systems of rigid bodies with randomly generated geometry interconnected by normal and tangential bonds. The stiffness of these bonds determines the macroscopic elastic modulus while the macroscopic Poisson’s ratio of the system is determined solely by the normal/tangential stiffness ratio. Discrete models with no directional bias have the same probability of element orientation for any direction and therefore the same mechanical properties in a statistical sense at any point and direction. However, the layers of elements in the vicinity of the boundary exhibit biased orientation, preferring elements parallel with the boundary. As a consequence, when strain occurs in this direction, the boundary layer becomes stiffer than the interior for the normal/tangential stiffness ratio larger than one, and vice versa. Nonlinear constitutive laws are typically such that the straining of an element in shear results in higher strength and ductility than straining in tension. Since the boundary layer tends, due to the bias in the elemental orientation, to involve more tension than shear at the contacts, it also becomes weaker and less ductile. The paper documents these observations and compares them to the results of theoretical analysis. PMID:28772517

  3. Human factors of flight-deck checklists: The normal checklist

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Wiener, Earl L.

    1991-01-01

    Although the aircraft checklist has long been regarded as the foundation of pilot standardization and cockpit safety, it has escaped the scrutiny of the human factors profession. The improper use, or the non-use, of the normal checklist by flight crews is often cited as the probable cause of, or at least a contributing factor to, aircraft accidents. An attempt is made to analyze the normal checklist, its functions, format, design, length, usage, and the limitations of the humans who must interact with it. The development of the checklist from the certification of a new model to its delivery and use by the customer is discussed. The government (particularly the FAA Principal Operations Inspector), the manufacturer's philosophy, the airline's culture, and the end user, the pilot, all influence the ultimate design and usage of this device. The effects of airline mergers and acquisitions on checklist usage and design are noted. In addition, the interaction between production pressures and checklist usage and checklist management is addressed. Finally, a list of design guidelines for normal checklists is provided.

  4. Impact of Chemotherapy on Normal Tissue Complication Probability Models of Acute Hematologic Toxicity in Patients Receiving Pelvic Intensity Modulated Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazan, Jose G.; Luxton, Gary; Kozak, Margaret M.

    Purpose: To determine how chemotherapy agents affect radiation dose parameters that correlate with acute hematologic toxicity (HT) in patients treated with pelvic intensity modulated radiation therapy (P-IMRT) and concurrent chemotherapy. Methods and Materials: We assessed HT in 141 patients who received P-IMRT for anal, gynecologic, rectal, or prostate cancers, 95 of whom received concurrent chemotherapy. Patients were separated into 4 groups: mitomycin (MMC) + 5-fluorouracil (5FU, 37 of 141), platinum ± 5FU (Cis, 32 of 141), 5FU (26 of 141), and P-IMRT alone (46 of 141). The pelvic bone was contoured as a surrogate for pelvic bone marrow (PBM) and divided into subsites: ilium, lower pelvis, and lumbosacral spine (LSS). The volumes of each region receiving 5-40 Gy were calculated. The endpoint for HT was grade ≥3 (HT3+) leukopenia, neutropenia or thrombocytopenia. Normal tissue complication probability was calculated using the Lyman-Kutcher-Burman model. Logistic regression was used to analyze association between HT3+ and dosimetric parameters. Results: Twenty-six patients experienced HT3+: 10 of 37 (27%) MMC, 14 of 32 (44%) Cis, 2 of 26 (8%) 5FU, and 0 of 46 P-IMRT. PBM dosimetric parameters were correlated with HT3+ in the MMC group but not in the Cis group. LSS dosimetric parameters were well correlated with HT3+ in both the MMC and Cis groups. Constrained optimization (0
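    The Lyman-Kutcher-Burman calculation referenced above reduces a dose-volume histogram to a generalized equivalent uniform dose and maps it to a complication probability through a probit function. A minimal Python sketch follows; the TD50, m, and n values and the two-bin DVH are illustrative placeholders, not the parameters fitted in this study.

```python
# Hedged sketch of the Lyman-Kutcher-Burman NTCP model (not the study's code).
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_bins_gy, volume_fractions, td50=40.0, m=0.3, n=1.0):
    """Return NTCP from a differential DVH using the LKB model."""
    v = np.asarray(volume_fractions, dtype=float)
    v = v / v.sum()                               # normalize volume fractions
    d = np.asarray(dose_bins_gy, dtype=float)
    eud = (np.sum(v * d ** (1.0 / n))) ** n       # generalized equivalent uniform dose
    t = (eud - td50) / (m * td50)
    return norm.cdf(t)

# Example: a crude two-bin DVH for a bone-marrow subsite (illustrative numbers)
print(lkb_ntcp([10.0, 35.0], [0.6, 0.4]))
```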

  5. Optimal Symmetric Multimodal Templates and Concatenated Random Forests for Supervised Brain Tumor Segmentation (Simplified) with ANTsR.

    PubMed

    Tustison, Nicholas J; Shrinidhi, K L; Wintermark, Max; Durst, Christopher R; Kandel, Benjamin M; Gee, James C; Grossman, Murray C; Avants, Brian B

    2015-04-01

    Segmenting and quantifying gliomas from MRI is an important task for diagnosis, planning intervention, and for tracking tumor changes over time. However, this task is complicated by the lack of prior knowledge concerning tumor location, spatial extent, shape, possible displacement of normal tissue, and intensity signature. To accommodate such complications, we introduce a framework for supervised segmentation based on multiple modality intensity, geometry, and asymmetry feature sets. These features drive a supervised whole-brain and tumor segmentation approach based on random forest-derived probabilities. The asymmetry-related features (based on optimal symmetric multimodal templates) demonstrate excellent discriminative properties within this framework. We also gain performance by generating probability maps from random forest models and using these maps for a refining Markov random field regularized probabilistic segmentation. This strategy allows us to interface the supervised learning capabilities of the random forest model with regularized probabilistic segmentation using the recently developed ANTsR package--a comprehensive statistical and visualization interface between the popular Advanced Normalization Tools (ANTs) and the R statistical project. The reported algorithmic framework was the top-performing entry in the MICCAI 2013 Multimodal Brain Tumor Segmentation challenge. The challenge data were widely varying, consisting of four-modality MRI of both high-grade and low-grade glioma tumors from five different institutions. Average Dice overlap measures for the final algorithmic assessment were 0.87, 0.78, and 0.74 for "complete", "core", and "enhanced" tumor components, respectively.
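    The random-forest probability maps described above can be illustrated with a generic classifier; the following fragment is an assumed, simplified stand-in for the ANTsR workflow (features, labels, and data are invented for illustration) and shows only how per-voxel class probabilities would be produced before MRF regularization.

```python
# Simplified sketch: voxelwise class-probability maps from a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))        # e.g. intensity/geometry/asymmetry features (toy)
y_train = rng.integers(0, 3, size=500)     # 0 = background, 1 = core, 2 = enhancing (toy labels)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

X_voxels = rng.normal(size=(1000, 4))      # features for voxels of a new image
prob_maps = rf.predict_proba(X_voxels)     # one probability column per tissue class
# These maps would then seed an MRF-regularized refinement step.
print(prob_maps.shape)
```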

  6. Measurements of scalar released from point sources in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.

    2017-04-01

    Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of long tails of probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982 J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for source in the intermittent region of TBL particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations and these are found to exhibit exponential distribution.
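    For reference, the reflected Gaussian profile of Fackrell and Robins against which the mean concentration profiles are compared can be written down directly; in the sketch below the plume spread sigma_z and the source strength are illustrative values, not fitted quantities from the experiment.

```python
# Sketch of the reflected Gaussian mean-concentration profile for a wall-bounded plume.
import numpy as np

def reflected_gaussian(z, s_z, sigma_z, c0=1.0):
    """Mean concentration versus wall-normal height z for a source at height s_z."""
    return c0 * (np.exp(-(z - s_z) ** 2 / (2 * sigma_z ** 2))
                 + np.exp(-(z + s_z) ** 2 / (2 * sigma_z ** 2)))   # image source enforces zero wall flux

z = np.linspace(0.0, 1.0, 11)   # heights normalized by the boundary layer thickness
print(reflected_gaussian(z, s_z=0.3, sigma_z=0.1))
```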

  7. Buried landmine detection using multivariate normal clustering

    NASA Astrophysics Data System (ADS)

    Duston, Brian M.

    2001-10-01

    A Bayesian classification algorithm is presented for discriminating buried land mines from buried and surface clutter in Ground Penetrating Radar (GPR) signals. This algorithm is based on multivariate normal (MVN) clustering, where feature vectors are used to identify populations (clusters) of mines and clutter objects. The features are extracted from two-dimensional images created from ground penetrating radar scans. MVN clustering is used to determine the number of clusters in the data and to create probability density models for target and clutter populations, producing the MVN clustering classifier (MVNCC). The Bayesian Information Criteria (BIC) is used to evaluate each model to determine the number of clusters in the data. An extension of the MVNCC allows the model to adapt to local clutter distributions by treating each of the MVN cluster components as a Poisson process and adaptively estimating the intensity parameters. The algorithm is developed using data collected by the Mine Hunter/Killer Close-In Detector (MH/K CID) at prepared mine lanes. The Mine Hunter/Killer is a prototype mine detecting and neutralizing vehicle developed for the U.S. Army to clear roads of anti-tank mines.
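    The core of the MVNCC, fitting multivariate normal mixtures and letting BIC select the number of clusters, can be illustrated with a generic mixture-model library; the sketch below uses simulated feature vectors and is not the MH/K CID implementation.

```python
# Hedged sketch: MVN clustering with BIC-based selection of the cluster count.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
features = np.vstack([rng.normal(0, 1, (100, 3)),    # clutter-like feature vectors (toy)
                      rng.normal(4, 1, (60, 3))])    # mine-like feature vectors (toy)

models = [GaussianMixture(n_components=k, random_state=0).fit(features) for k in range(1, 6)]
bics = [m.bic(features) for m in models]
best = models[int(np.argmin(bics))]                  # lowest BIC picks the number of clusters
print("clusters chosen:", best.n_components)
```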

  8. Constraining the Source of the Mw 8.1 Chiapas, Mexico Earthquake of 8 September 2017 Using Teleseismic and Tsunami Observations

    NASA Astrophysics Data System (ADS)

    Heidarzadeh, Mohammad; Ishibe, Takeo; Harada, Tomoya

    2018-04-01

    The September 2017 Chiapas (Mexico) normal-faulting intraplate earthquake (Mw 8.1) occurred within the Tehuantepec seismic gap offshore Mexico. We constrained the finite-fault slip model of this great earthquake using teleseismic and tsunami observations. First, teleseismic body-wave inversions were conducted for both steep (NP-1) and low-angle (NP-2) nodal planes for rupture velocities (Vr) of 1.5-4.0 km/s. Teleseismic inversion guided us to NP-1 as the actual fault plane, but was not conclusive about the best Vr. Tsunami simulations also confirmed that NP-1 is favored over NP-2 and indicated Vr = 2.5 km/s as the best source model. Our model has maximum and average slips of 13.1 and 3.7 m, respectively, over a 130 km × 80 km fault plane. Coulomb stress transfer analysis revealed that the probability of a future large thrust interplate earthquake offshore of the Tehuantepec seismic gap increased following the 2017 Chiapas normal-faulting intraplate earthquake.

  9. Escape problem under stochastic volatility: The Heston model

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Perelló, Josep

    2008-11-01

    We solve the escape problem for the Heston random diffusion model from a finite interval of span L . We obtain exact expressions for the survival probability (which amounts to solving the complete escape problem) as well as for the mean exit time. We also average the volatility in order to work out the problem for the return alone regardless of volatility. We consider these results in terms of the dimensionless normal level of volatility—a ratio of the three parameters that appear in the Heston model—and analyze their form in several asymptotic limits. Thus, for instance, we show that the mean exit time grows quadratically with large spans while for small spans the growth is systematically slower, depending on the value of the normal level. We compare our results with those of the Wiener process and show that the assumption of stochastic volatility, in an apparently paradoxical way, increases survival and prolongs the escape time. We finally observe that the model is able to describe the main exit-time statistics of the Dow-Jones daily index.
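    The escape problem solved analytically in the paper can be checked numerically: the sketch below simulates the Heston return/volatility pair and records the first time the return leaves an interval of span L. All parameter values are illustrative, and the simple Euler scheme is an assumption, not the authors' method.

```python
# Monte Carlo sketch of the mean exit time from an interval under the Heston model.
import numpy as np

rng = np.random.default_rng(2)
kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.5   # assumed Heston parameters
L, dt, n_paths = 0.2, 1e-3, 500                # interval span, time step, sample size

exit_times = np.empty(n_paths)
for i in range(n_paths):
    x, v, t = 0.0, theta, 0.0                  # return starts at the interval centre
    while abs(x) < L / 2:
        z1, z2 = rng.normal(size=2)
        dw1 = np.sqrt(dt) * z1
        dw2 = np.sqrt(dt) * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)
        x += np.sqrt(max(v, 0.0)) * dw1        # zero-mean return (drift ignored)
        v += kappa * (theta - v) * dt + xi * np.sqrt(max(v, 0.0)) * dw2
        t += dt
    exit_times[i] = t

print("estimated mean exit time:", exit_times.mean())
```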

  10. Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu N.

    2006-02-01

    In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
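    For concreteness, the Welch-Satterthwaite effective degrees of freedom discussed above can be computed as follows for the common case of a difference of two means with unequal variances (the standard deviations and sample sizes are illustrative):

```python
# Welch-Satterthwaite effective degrees of freedom for u^2 = s1^2/n1 + s2^2/n2.
def welch_satterthwaite(s1, n1, s2, n2):
    u1, u2 = s1 ** 2 / n1, s2 ** 2 / n2
    return (u1 + u2) ** 2 / (u1 ** 2 / (n1 - 1) + u2 ** 2 / (n2 - 1))

print(welch_satterthwaite(s1=0.8, n1=5, s2=1.3, n2=7))
```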

  11. Regional magnetic resonance imaging measures for multivariate analysis in Alzheimer's disease and mild cognitive impairment.

    PubMed

    Westman, Eric; Aguilar, Carlos; Muehlboeck, J-Sebastian; Simmons, Andrew

    2013-01-01

    Automated structural magnetic resonance imaging (MRI) processing pipelines are gaining popularity for Alzheimer's disease (AD) research. They generate regional volumes, cortical thickness measures and other measures, which can be used as input for multivariate analysis. It is not clear which combination of measures and normalization approach is most useful for AD classification and for predicting mild cognitive impairment (MCI) conversion. The current study includes MRI scans from 699 subjects [AD, MCI and controls (CTL)] from the Alzheimer's disease Neuroimaging Initiative (ADNI). The Freesurfer pipeline was used to generate regional volume, cortical thickness, gray matter volume, surface area, mean curvature, Gaussian curvature, folding index and curvature index measures. A total of 259 variables were used for orthogonal partial least squares to latent structures (OPLS) multivariate analysis. Normalisation approaches were explored and the optimal combination of measures determined. Results indicate that cortical thickness measures should not be normalized, while volumes should probably be normalized by intracranial volume (ICV). Combining regional cortical thickness measures (not normalized) with cortical and subcortical volumes (normalized with ICV) using OPLS gave a prediction accuracy of 91.5 % when distinguishing AD versus CTL. This model prospectively predicted future decline from MCI to AD with 75.9 % of converters correctly classified. Normalization strategy did not have a significant effect on the accuracies of multivariate models containing multiple MRI measures for this large dataset. The appropriate choice of input for multivariate analysis in AD and MCI is of great importance. The results support the use of un-normalised cortical thickness measures and volumes normalised by ICV.

  12. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  13. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

    An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
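    A minimal sketch of the proposed procedure, fitting a two-component normal mixture to gene-wise z-scores and reading off the posterior probability that a gene belongs to the null component, is given below; the data are simulated rather than taken from the datasets analysed in the paper.

```python
# Two-component normal mixture on z-scores; posterior probability of the null component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
z = np.concatenate([rng.normal(0, 1, 900),        # null genes (toy)
                    rng.normal(2.5, 1, 100)])     # differentially expressed genes (toy)

gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
null_comp = int(np.argmin(np.abs(gm.means_.ravel())))    # component with mean closest to zero
post_null = gm.predict_proba(z.reshape(-1, 1))[:, null_comp]
print("posterior null probability of first 5 genes:", post_null[:5].round(3))
```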

  14. Percolation

    NASA Astrophysics Data System (ADS)

    Dávila, Alán; Escudero, Christian; López, Jorge, Dr.

    2004-10-01

    Several methods have been developed to study phase transitions in nuclear fragmentation. The one used in this research is percolation. This method allows us to fit the resulting data to heavy-ion collision experiments. In systems such as atomic nuclei or molecules, energy is put into the system, and the system's particles move away from each other until their links are broken; some particles will still be linked. The fragment distribution is found to be a power law, so we are witnessing a critical phenomenon. In our model the particles are represented as occupied sites in a cubic array. Each particle has a bond to each of its 6 neighbors, and each bond can be active if the two particles are linked or inactive if they are not. When two or more particles are linked, a fragment is formed. Because the probability for a specific link to be broken cannot be calculated, the probability for a bond to be active is used as the parameter when fitting the data. For a given probability p, several arrays are generated and the fragments are counted. The fragment distribution is then fitted to a power law. The probability that generates the best fit is the critical probability that indicates a phase transition. The best fit is found by seeking the fragment distribution that gives the minimal chi-squared when compared to a power law. As additional evidence of criticality, the entropy and normalized variance of the mass are also calculated for each probability.
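    A simplified version of the fragment-counting step can be written as bond percolation on a small cubic lattice with a union-find structure; the lattice size and bond probability below are illustrative, and the power-law fit over many values of p is omitted for brevity.

```python
# Simplified bond-percolation sketch on an n x n x n cubic lattice.
import numpy as np

def fragment_sizes(n=10, p=0.25, seed=0):
    rng = np.random.default_rng(seed)
    parent = list(range(n ** 3))            # union-find parent array, one entry per particle

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    def idx(x, y, z):
        return (x * n + y) * n + z

    # Activate each bond to the +x, +y, +z neighbor with probability p.
    for x in range(n):
        for y in range(n):
            for z in range(n):
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    if x + dx < n and y + dy < n and z + dz < n and rng.random() < p:
                        union(idx(x, y, z), idx(x + dx, y + dy, z + dz))

    roots = [find(i) for i in range(n ** 3)]
    _, counts = np.unique(roots, return_counts=True)
    return counts                            # fragment (cluster) size distribution

sizes = fragment_sizes()
print("largest fragment:", sizes.max(), "number of fragments:", len(sizes))
```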

  15. Portfolio optimization with skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model will become inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return will be reduced. The objective of this study is to compare the portfolio compositions as well as performances between the mean-variance model and mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis will change the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because the mean-variance-skewness-kurtosis model takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.

  16. Accumulation risk assessment for the flooding hazard

    NASA Astrophysics Data System (ADS)

    Roth, Giorgio; Ghizzoni, Tatiana; Rudari, Roberto

    2010-05-01

    One of the main consequences of demographic and economic development and of the globalization of markets and trade is the accumulation of risks. In most cases, risk accumulation intuitively arises from the geographic concentration of a number of vulnerable elements in a single place. For natural events, risk accumulation can be associated not only with intensity but also with an event's spatial extent. In this case, the magnitude can be such that large areas, which may include many regions or even large portions of different countries, are struck by single, catastrophic events. Among natural risks, the impact of the flooding hazard cannot be overstated. To cope with it, a variety of mitigation actions can be put in place: from the improvement of monitoring and alert systems to the development of hydraulic structures, through land use restrictions, civil protection, and financial and insurance plans. All of these options have social and economic impacts, either positive or negative, whose proper estimation should rely on appropriate present and future flood risk scenarios. It is therefore necessary to identify statistical methodologies able to describe the multivariate aspects of the physical processes involved and their spatial dependence. In hydrology and meteorology, but also in finance and insurance practice, it has long been recognized that classical statistical distributions (e.g., the normal and gamma families) are of restricted use for modeling multivariate spatial data. Recent research efforts have therefore been directed towards developing statistical models capable of describing the forms of asymmetry manifest in data sets, in particular for the quite frequent case of phenomena whose empirical outcome behaves in a non-normal fashion but still maintains some broad similarity with the multivariate normal distribution. Fruitful approaches rely on flexible models that include the normal distribution as a special or limiting case (e.g., the skew-normal or skew-t distributions). The present contribution is an attempt to provide a better estimate of the joint probability distribution describing flood events in a multi-site, multi-basin fashion. This goal is pursued through the multivariate skew-t distribution, which allows the joint probability distribution to be defined analytically. The performance of the skew-t distribution is discussed with reference to the Tanaro River in Northwestern Italy. To enhance the characteristics of the correlation structure, both nested and non-nested gauging stations with significantly different contributing areas are selected.

  17. Clinicopathologic Significance of Mismatch Repair Defects in Endometrial Cancer: An NRG Oncology/Gynecologic Oncology Group Study

    PubMed Central

    McMeekin, D. Scott; Tritchler, David L.; Cohn, David E.; Mutch, David G.; Lankes, Heather A.; Geller, Melissa A.; Powell, Matthew A.; Backes, Floor J.; Landrum, Lisa M.; Zaino, Richard; Broaddus, Russell D.; Ramirez, Nilsa; Gao, Feng; Ali, Shamshad; Darcy, Kathleen M.; Pearl, Michael L.; DiSilvestro, Paul A.; Lele, Shashikant B.

    2016-01-01

    Purpose The clinicopathologic significance of mismatch repair (MMR) defects in endometrioid endometrial cancer (EEC) has not been definitively established. We undertook tumor typing to classify MMR defects to determine if MMR status is prognostic or predictive. Methods Primary EECs from NRG/GOG0210 patients were assessed for microsatellite instability (MSI), MLH1 methylation, and MMR protein expression. Each tumor was assigned to one of four MMR classes: normal, epigenetic defect, probable mutation (MMR defect not attributable to MLH1 methylation), or MSI-low. The relationships between MMR classes and clinicopathologic variables were assessed using contingency table tests and Cox proportional hazard models. Results A total of 1,024 tumors were assigned to MMR classes. Epigenetic and probable mutations in MMR were significantly associated with higher grade and more frequent lymphovascular space invasion. Epigenetic defects were more common in patients with higher International Federation of Gynecology and Obstetrics stage. Overall, there were no differences in outcomes. Progression-free survival was, however, worse for women whose tumors had epigenetic MMR defects compared with the MMR normal group (hazard ratio, 1.37; P < .05; 95% CI, 1.00 to 1.86). An exploratory analysis of interaction between MMR status and adjuvant therapy showed a trend toward improved progression-free survival for probable MMR mutation cases. Conclusion MMR defects in EECs are associated with a number of well-established poor prognostic indicators. Women with tumors that had MMR defects were likely to have higher-grade cancers and more frequent lymphovascular space invasion. Surprisingly, outcomes in these patients were similar to patients with MMR normal tumors, suggesting that MMR defects may counteract the effects of negative prognostic factors. Altered immune surveillance of MMR-deficient tumors, and other host/tumor interactions, is likely to determine outcomes for patients with MMR-deficient tumors. PMID:27325856

  18. Clinicopathologic Significance of Mismatch Repair Defects in Endometrial Cancer: An NRG Oncology/Gynecologic Oncology Group Study.

    PubMed

    McMeekin, D Scott; Tritchler, David L; Cohn, David E; Mutch, David G; Lankes, Heather A; Geller, Melissa A; Powell, Matthew A; Backes, Floor J; Landrum, Lisa M; Zaino, Richard; Broaddus, Russell D; Ramirez, Nilsa; Gao, Feng; Ali, Shamshad; Darcy, Kathleen M; Pearl, Michael L; DiSilvestro, Paul A; Lele, Shashikant B; Goodfellow, Paul J

    2016-09-01

    The clinicopathologic significance of mismatch repair (MMR) defects in endometrioid endometrial cancer (EEC) has not been definitively established. We undertook tumor typing to classify MMR defects to determine if MMR status is prognostic or predictive. Primary EECs from NRG/GOG0210 patients were assessed for microsatellite instability (MSI), MLH1 methylation, and MMR protein expression. Each tumor was assigned to one of four MMR classes: normal, epigenetic defect, probable mutation (MMR defect not attributable to MLH1 methylation), or MSI-low. The relationships between MMR classes and clinicopathologic variables were assessed using contingency table tests and Cox proportional hazard models. A total of 1,024 tumors were assigned to MMR classes. Epigenetic and probable mutations in MMR were significantly associated with higher grade and more frequent lymphovascular space invasion. Epigenetic defects were more common in patients with higher International Federation of Gynecology and Obstetrics stage. Overall, there were no differences in outcomes. Progression-free survival was, however, worse for women whose tumors had epigenetic MMR defects compared with the MMR normal group (hazard ratio, 1.37; P < .05; 95% CI, 1.00 to 1.86). An exploratory analysis of interaction between MMR status and adjuvant therapy showed a trend toward improved progression-free survival for probable MMR mutation cases. MMR defects in EECs are associated with a number of well-established poor prognostic indicators. Women with tumors that had MMR defects were likely to have higher-grade cancers and more frequent lymphovascular space invasion. Surprisingly, outcomes in these patients were similar to patients with MMR normal tumors, suggesting that MMR defects may counteract the effects of negative prognostic factors. Altered immune surveillance of MMR-deficient tumors, and other host/tumor interactions, is likely to determine outcomes for patients with MMR-deficient tumors. © 2016 by American Society of Clinical Oncology.

  19. Myoarchitecture and connective tissue in hearts with tricuspid atresia

    PubMed Central

    Sanchez-Quintana, D; Climent, V; Ho, S; Anderson, R

    1999-01-01

    Objective—To compare the atrial and ventricular myoarchitecture in the normal heart and the heart with tricuspid atresia, and to investigate changes in the three dimensional arrangement of collagen fibrils.
Methods—Blunt dissection and cell maceration with scanning electron microscopy were used to study the architecture of the atrial and ventricular musculature and the arrangement of collagen fibrils in three specimens with tricuspid atresia and six normal human hearts.
Results—There were significant modifications in the myoarchitecture of the right atrium and the left ventricle, both being noticeably hypertrophied. The middle layer of the ventricle in the abnormal hearts was thicker than in the normal hearts. The orientation of the superficial layer in the left ventricle in hearts with tricuspid atresia was irregular compared with the normal hearts. Scanning electron microscopy showed coarser endomysial sheaths and denser perimysial septa in hearts with tricuspid atresia than in normal hearts.
Conclusions—The overall architecture of the muscle fibres and its connective tissue matrix in hearts with tricuspid atresia differed from normal, probably reflecting modelling of the myocardium that is inherent to the malformation. This is in concordance with clinical observations showing deterioration in pump function of the dominant left ventricle from very early in life.

 Keywords: tricuspid atresia; congenital heart defects; connective tissue; fibrosis PMID:9922357

  20. A two-scale scattering model with application to the JONSWAP '75 aircraft microwave scatterometer experiment

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1977-01-01

    The general problem of bistatic scattering from a two scale surface was evaluated. The treatment was entirely two-dimensional and in a vector formulation independent of any particular coordinate system. The two scale scattering model was then applied to backscattering from the sea surface. In particular, the model was used in conjunction with the JONSWAP 1975 aircraft scatterometer measurements to determine the sea surface's two scale roughness distributions, namely the probability density of the large scale surface slope and the capillary wavenumber spectrum. Best fits yield, on the average, a 0.7 dB rms difference between the model computations and the vertical polarization measurements of the normalized radar cross section. Correlations between the distribution parameters and the wind speed were established from linear, least squares regressions.

  1. A hybrid CS-SA intelligent approach to solve uncertain dynamic facility layout problems considering dependency of demands

    NASA Astrophysics Data System (ADS)

    Moslemipour, Ghorbanali

    2018-07-01

    This paper proposes a quadratic assignment-based mathematical model for the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent, normally distributed random variables with known probability density function and covariance that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is developed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design-of-experiments and benchmark methods. The results show that the hybrid algorithm has outstanding performance in terms of both solution quality and computational time. In addition, the proposed model can be used in both stochastic and deterministic situations.

  2. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.

  3. Mixture modelling for cluster analysis.

    PubMed

    McLachlan, G J; Chang, S U

    2004-10-01

    Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.

  4. Leaf optical system modeled as a stochastic process. [solar radiation interaction with terrestrial vegetation

    NASA Technical Reports Server (NTRS)

    Tucker, C. J.; Garratt, M. W.

    1977-01-01

    A stochastic leaf radiation model based upon physical and physiological properties of dicot leaves has been developed. The model accurately predicts the absorbed, reflected, and transmitted radiation of normal incidence as a function of wavelength resulting from the leaf-irradiance interaction over the spectral interval of 0.40-2.50 micron. The leaf optical system has been represented as a Markov process with a unique transition matrix at each 0.01-micron increment between 0.40 micron and 2.50 micron. Probabilities are calculated at every wavelength interval from leaf thickness, structure, pigment composition, and water content. Simulation results indicate that this approach gives accurate estimations of actual measured values for dicot leaf absorption, reflection, and transmission as a function of wavelength.
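    The Markov-process representation at a single wavelength can be illustrated with a toy absorbing chain: one transient state (photon inside the leaf) and three absorbing states (reflected, transmitted, absorbed). The transition probabilities below are invented for illustration, not the values derived from leaf thickness, structure, pigments, and water content.

```python
# Toy absorbing Markov chain for the leaf optical system at one wavelength.
import numpy as np

# States: 0 = photon in leaf interior, 1 = reflected, 2 = transmitted, 3 = absorbed
P = np.array([[0.10, 0.35, 0.30, 0.25],   # interior -> {interior, reflected, transmitted, absorbed}
              [0.00, 1.00, 0.00, 0.00],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0])    # photon starts inside the leaf
for _ in range(100):                      # iterate until the chain is effectively absorbed
    state = state @ P
print("reflectance, transmittance, absorptance:", state[1:].round(3))
```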

  5. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
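    The bit-flip error model can be reproduced in a few lines: flip one bit of an IEEE 754 double and observe the size of the perturbation, then note that normalizing the inputs of a dot product bounds the uncorrupted result by one in magnitude, which is what makes very large corrupted results detectable. The snippet below is an illustrative sketch, not the paper's code.

```python
# Flip a single bit of a float64 and illustrate why normalized dot products help.
import struct
import numpy as np

def flip_bit(x, k):
    """Return x with bit k of its IEEE 754 binary64 representation flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
    return y

x = 1.2345
print(x, "->", flip_bit(x, 52))   # flipping the lowest exponent bit halves the value here

rng = np.random.default_rng(4)
a, b = rng.normal(size=100), rng.normal(size=100)
a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
# With normalized inputs the true dot product lies in [-1, 1], so a corrupted
# result that is huge in magnitude is easy to flag.
print(abs(np.dot(a_n, b_n)) <= 1.0)
```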

  6. Maxwell and the normal distribution: A colored story of probability, independence, and tendency toward equilibrium

    NASA Astrophysics Data System (ADS)

    Gyenis, Balázs

    2017-02-01

    We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV.

  7. Normal myocardial perfusion scan portends a benign prognosis independent from the pretest probability of coronary artery disease. Sub-analysis of the J-ACCESS study.

    PubMed

    Imamura, Yosihiro; Fukuyama, Takaya; Nishimura, Sigeyuki; Nishimura, Tsunehiko

    2009-08-01

    We assessed the usefulness of gated stress/rest 99mTc-tetrofosmin myocardial perfusion single photon emission computed tomography (SPECT) to predict ischemic cardiac events in Japanese patients with various estimated pretest probabilities of coronary artery disease (CAD). Of the 4031 consecutively registered patients for a J-ACCESS (Japanese Assessment of Cardiac Events and Survival Study by Quantitative Gated SPECT) study, 1904 patients without prior cardiac events were selected. Gated stress/rest myocardial perfusion SPECT was performed and segmental perfusion scores and quantitative gated SPECT results were derived. The pretest probability for having CAD was estimated using the American College of Cardiology/American Heart Association/American College of Physicians-American Society of Internal Medicine guideline data for the management of patients with chronic stable angina, which includes age, gender, and type of chest discomfort. The patients were followed up for three years. During the three-year follow-up period, 96 developed ischemic cardiac events: 17 cardiac deaths, 8 nonfatal myocardial infarction, and 71 clinically driven revascularization. The summed stress score (SSS) was the most powerful independent predictor of all ischemic cardiac events (hazard ratio 1.077, CI 1.045-1.110). Abnormal SSS (> 3) was associated with a significantly higher cardiac event rate in patients with an intermediate to high pretest probability of CAD. Normal SSS (< or = 3) was associated with a low event rate in patients with any pretest probability of CAD. Myocardial perfusion SPECT is useful for further risk-stratification of patients with suspected CAD. The abnormal scan result (SSS > 3) is discriminative for subsequent cardiac events only in the groups with an intermediate to high pretest probability of CAD. The salient result is that normal scan results portend a benign prognosis independent from the pretest probability of CAD.

  8. Ditching Tests of a 1/20-Scale Model of the Northrop B-35 Airplane

    NASA Technical Reports Server (NTRS)

    Fisher, Lloyd J.

    1948-01-01

    Tests of a 1/20-scale dynamically similar model of the Northrop B-35 airplane were made to study its ditching characteristics. The model was ditched in calm water at the Langley tank no. 2 monorail. Various landing attitudes, speeds, and conditions of damage were simulated during the investigation. The ditching characteristics were determined by visual observation and from motion-picture records and time-history acceleration records. Both longitudinal and lateral accelerations were measured. Results are given in tabular form and time-history acceleration curves and sequence photographs are presented. Conclusions based on the model investigation are as follows: 1. The best ditching of the B-35 airplane probably can be made by contacting the water in a near normal landing attitude of about 9 deg with the landing flaps full down so as to have a low horizontal speed. 2. The airplane usually will turn or yaw but the motion will not be violent. The maximum lateral acceleration will be about 2g. 3. If the airplane does not turn or yaw immediately after landing, it probably will trim up and then make a smooth run or porpoise slightly. The maximum longitudinal decelerations that will be encountered are about 6g or 7g. 4. Although the decelerations are not indicated to be especially large, the construction of the airplane is such that extensive damage is to be expected, and it probably will be difficult to find ditching stations where crew members can adequately brace themselves and be reasonably sure of avoiding a large inrush of water.

  9. Multifractals embedded in short time series: An unbiased estimation of probability moment

    NASA Astrophysics Data System (ADS)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical predictions and computational results show that it provides an unbiased estimation of probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant performance advantages over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors over a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Beyond the scaling invariance considered in the present paper, the proposed factorial moment of continuous order has various other uses, such as detecting nonextensive behaviors of a complex system and reconstructing the causality network between elements of a complex system.

  10. Local regularity for time-dependent tug-of-war games with varying probabilities

    NASA Astrophysics Data System (ADS)

    Parviainen, Mikko; Ruosteenoja, Eero

    2016-07-01

    We study local regularity properties of value functions of time-dependent tug-of-war games. For games with constant probabilities we get local Lipschitz continuity. For more general games with probabilities depending on space and time we obtain Hölder and Harnack estimates. The games have a connection to the normalized p(x,t)-parabolic equation u_t = Δu + (p(x,t) - 2) Δ_∞^N u.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, M; Choi, E; Chuong, M

    Purpose: To evaluate whether the current radiobiological models can predict the normal liver complications of radioactive Yttrium-90 (90Y) selective internal radiation treatment (SIRT) for metastatic liver lesions based on post-infusion 90Y PET images. Methods: A total of 20 patients with metastatic liver tumors treated with SIRT who received a post-infusion 90Y-PET/CT scan were analyzed in this work. The 3D activity distribution of the PET images was converted into a 3D dose distribution via a kernel convolution process. The physical dose distribution was converted into the equivalent dose delivered in 2 Gy fractions (EQ2) based on the linear-quadratic (LQ) model considering the dose rate effect. The biological endpoint of this work was radiation-induced liver disease (RILD). The NTCPs were calculated with four different repair times (T1/2-Liver-Repair = 0, 0.5, 1.0, 2.0 hr) and three published NTCP models (Lyman-external-RT, Lyman 90Y-HCC-SIRT, parallel model) were compared to the incidence of RILD in the recruited patients to evaluate their ability to predict outcome. Results: The mean normal liver physical dose (avg. 51.9 Gy, range 31.9–69.8 Gy) is higher than the suggested liver dose constraint for external beam treatment (∼30 Gy). However, none of the patients in our study developed RILD after the SIRT. The estimated probability of ‘no patient developing RILD’ obtained from the two Lyman models is 46.3% to 48.3% (T1/2-Liver-Repair = 0 hr) and <1% for all other repair times. For the parallel model, the estimated probability is 97.3% (0 hr), 51.7% (0.5 hr), 2.0% (1.0 hr) and <1% (2.0 hr). Conclusion: Molecular images providing the distribution of 90Y enable dose-volume based dose/outcome analysis for SIRT. Current NTCP models fail to predict RILD complications in our patient population, unless a very short repair time for the liver is assumed. The discrepancy between the Lyman 90Y-HCC-SIRT model predictions and the clinically observed outcomes further demonstrates the need for an NTCP model specific to metastatic liver SIRT.
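    The LQ-based dose conversion mentioned above can be sketched with the standard permanent-implant form of the biologically effective dose, which accounts for the exponentially decaying 90Y dose rate and a finite repair half-time; the alpha/beta ratio and repair half-time below are illustrative assumptions, not the values used in the study.

```python
# Hedged sketch: physical 90Y dose -> EQD2 via the permanent-implant BED formula.
import numpy as np

def eqd2_permanent_implant(total_dose_gy, t_half_decay_h=64.1,
                           t_half_repair_h=1.0, alpha_beta_gy=2.5):
    lam = np.log(2) / t_half_decay_h          # 90Y physical decay constant
    mu = np.log(2) / t_half_repair_h          # sublethal-damage repair constant (assumed)
    bed = total_dose_gy * (1 + total_dose_gy * lam / ((mu + lam) * alpha_beta_gy))
    return bed / (1 + 2.0 / alpha_beta_gy)    # convert BED to 2-Gy-fraction equivalent

print(eqd2_permanent_implant(51.9))           # mean normal-liver dose quoted in the abstract
```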

  12. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors.

  13. Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard.

    PubMed

    Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A

    2016-03-15

    The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
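    The idea of a combined ROC can be illustrated with a simple frequentist stand-in: form a predictive probability of disease from several biomarkers and compare its AUC with the single-biomarker AUCs. The sketch below uses simulated data and logistic regression, not the paper's Bayesian multivariate random-effects model without a perfect reference standard.

```python
# Illustrative comparison of single-biomarker AUCs with a combined-probability AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 500
disease = rng.integers(0, 2, n)                 # true status (toy data)
b1 = rng.normal(disease * 1.0, 1.0)             # biomarker 1
b2 = rng.normal(disease * 0.7, 1.0)             # biomarker 2
X = np.column_stack([b1, b2])

p_combined = LogisticRegression().fit(X, disease).predict_proba(X)[:, 1]
print("AUC biomarker 1:", round(roc_auc_score(disease, b1), 3))
print("AUC biomarker 2:", round(roc_auc_score(disease, b2), 3))
print("combined AUC   :", round(roc_auc_score(disease, p_combined), 3))
```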

  14. Generalized time-dependent model of radiation-induced chromosomal aberrations in normal and repair-deficient human cells.

    PubMed

    Ponomarev, Artem L; George, Kerry; Cucinotta, Francis A

    2014-03-01

    We have developed a model that can simulate the yield of radiation-induced chromosomal aberrations (CAs) and unrejoined chromosome breaks in normal and repair-deficient cells. The model predicts the kinetics of chromosomal aberration formation after exposure in the G₀/G₁ phase of the cell cycle to either low- or high-LET radiation. A previously formulated model based on a stochastic Monte Carlo approach was updated to consider the time dependence of DNA double-strand break (DSB) repair (proper or improper), and different cell types were assigned different kinetics of DSB repair. The distribution of the DSB free ends was derived from a mechanistic model that takes into account the structure of chromatin and DSB clustering from high-LET radiation. The kinetics of chromosomal aberration formation were derived from experimental data on DSB repair kinetics in normal and repair-deficient cell lines. We assessed different types of chromosomal aberrations with the focus on simple and complex exchanges, and predicted the DSB rejoining kinetics and misrepair probabilities for different cell types. The results identify major cell-dependent factors, such as a greater yield of chromosome misrepair in ataxia telangiectasia (AT) cells and slower rejoining in Nijmegen (NBS) cells relative to the wild-type. The model's predictions suggest that two mechanisms could exist for the inefficiency of DSB repair in AT and NBS cells, one that depends on the overall speed of joining (either proper or improper) of DNA broken ends, and another that depends on geometric factors, such as the Euclidian distance between DNA broken ends, which influences the relative frequency of misrepair.

  15. The evolution of trade-offs: geographic variation in call duration and flight ability in the sand cricket, Gryllus firmus.

    PubMed

    Roff, D A; Crnokrak, P; Fairbairn, D J

    2003-07-01

    Quantitative genetic theory assumes that trade-offs are best represented by bivariate normal distributions. This theory predicts that selection will shift the trade-off function itself and not just move the mean trait values along a fixed trade-off line, as is generally assumed in optimality models. As a consequence, quantitative genetic theory predicts that the trade-off function will vary among populations in which at least one of the component traits itself varies. This prediction is tested using the trade-off between call duration and flight capability, as indexed by the mass of the dorsolateral flight muscles, in the macropterous morph of the sand cricket. We use four different populations of crickets that vary in the proportion of macropterous males (Lab = 33%, Florida = 29%, Bermuda = 72%, South Carolina = 80%). We find, as predicted, that there is significant variation in the intercept of the trade-off function but not the slope, supporting the hypothesis that trade-off functions are better represented as bivariate normal distributions rather than single lines. We also test the prediction from a quantitative genetical model of the evolution of wing dimorphism that the mean call duration of macropterous males will increase with the percentage of macropterous males in the population. This prediction is also supported. Finally, we estimate the probability of a macropterous male attracting a female, P, as a function of the relative time spent calling (P = time spent calling by macropterous male/(total time spent calling by both micropterous and macropterous male). We find that in the Lab and Florida populations the probability of a female selecting the macropterous male is equal to P, indicating that preference is due simply to relative call duration. But in the Bermuda and South Carolina populations the probability of a female selecting a macropterous male is less than P, indicating a preference for the micropterous male even after differences in call duration are accounted for.

  16. Brain-wave Dynamics Related to Cognitive Tasks and Neurofeedback Information Flow

    NASA Astrophysics Data System (ADS)

    Pop-Jordanova, Nada; Pop-Jordanov, Jordan; Dimitrovski, Darko; Markovska, Natasa

    2003-08-01

    Synchronization of oscillating neuronal discharges has been recently correlated to the moment of perception and the ensuing motor response, with transition between these two cognitive acts "through cellular mechanisms that remain to be established"[1]. Last year, using genetic strategies, it was found that switching off persistent electric activity in the brain blocks memory recall [2]. On the other hand, analyzing mental-neural information flow, the Nobel laureate Eccles formulated a fundamental hypothesis that mental events may change the probability of quantum vesicular emissions of transmitters analogously to probability functions of quantum mechanics [3]. Applying advanced quantum modeling to molecular rotational states exposed to electric activity in brain cells, we found that the probability of transitions does not depend on the field amplitude, suggesting the electric field frequency as the possible information-bearing physical quantity [4]. In this paper, an attempt is made to inter-correlate the above results on frequency aspects of neural transitions induced by cognitive tasks. Furthermore, considering the consecutive steps of mental-neural information flow during the biofeedback training to normalize EEG frequencies, the rationales for neurofeedback efficiency have been deduced.

  17. Estimation of the risk of failure for an endodontically treated maxillary premolar with MODP preparation and CAD/CAM ceramic restorations.

    PubMed

    Lin, Chun-Li; Chang, Yen-Hsiang; Pa, Che-An

    2009-10-01

    This study evaluated the risk of failure for an endodontically treated premolar with mesio occlusodistal palatal (MODP) preparation and 3 different computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic restoration configurations. Three 3-dimensional finite element (FE) models designed with CAD/CAM ceramic onlay, endocrown, and conventional crown restorations were constructed to perform simulations. The Weibull function was incorporated with FE analysis to calculate the long-term failure probability relative to different load conditions. The results indicated that the stress values on the enamel, dentin, and luting cement for endocrown restoration were the lowest values relative to the other 2 restorations. Weibull analysis revealed that the individual failure probability in the endocrown enamel, dentin, and luting cement obviously diminished more than those for onlay and conventional crown restorations. The overall failure probabilities were 27.5%, 1%, and 1% for onlay, endocrown, and conventional crown restorations, respectively, in normal occlusal condition. This numeric investigation suggests that endocrown and conventional crown restorations for endodontically treated premolars with MODP preparation present similar longevity.
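    The Weibull step that converts the finite-element stress results into a long-term failure probability can be sketched as a two-parameter Weibull cumulative distribution; the modulus and characteristic strength below are illustrative, not the ceramic parameters used in the study.

```python
# Two-parameter Weibull cumulative failure probability for a given stress level.
import numpy as np

def weibull_failure_probability(stress_mpa, sigma_0=300.0, m=10.0):
    """P_f = 1 - exp(-(sigma/sigma_0)^m) with characteristic strength sigma_0 and modulus m."""
    return 1.0 - np.exp(-(np.asarray(stress_mpa) / sigma_0) ** m)

print(weibull_failure_probability([150.0, 250.0, 320.0]))
```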

  18. Growth of left ventricular mass with military basic training in army recruits.

    PubMed

    Batterham, Alan M; George, Keith P; Birch, Karen M; Pennell, Dudley J; Myerson, Saul G

    2011-07-01

    Exercise-induced left ventricular hypertrophy is well documented, but whether this occurs merely in line with concomitant increases in lean body mass is unclear. Our aim was to model the extent of left ventricular hypertrophy associated with increased lean body mass attributable to an exercise training program. Cardiac and whole-body magnetic resonance imaging was performed before and after a 10-wk intensive British Army basic training program in a sample of 116 healthy Caucasian males (aged 17-28 yr). The within-subjects repeated-measures allometric relationship between lean body mass and left ventricular mass was modeled to allow the proper normalization of changes in left ventricular mass for attendant changes in lean body mass. To linearize the general allometric model (Y = aX^b), data were log-transformed before analysis; the resulting effects were therefore expressed as percent changes. We quantified the probability that the true population increase in normalized left ventricular mass was greater than a predefined minimum important difference of 0.2 SD, assigning a probabilistic descriptive anchor for magnitude-based inference. The absolute increase in left ventricular mass was 4.8% (90% confidence interval=3.5%-6%), whereas lean body mass increased by 2.6% (2.1%-3.0%). The change in left ventricular mass adjusted for the change in lean body mass was 3.5% (1.9%-5.1%), equivalent to an increase of 0.25 SD (0.14-0.37). The probability that this effect size was greater than or equal to our predefined minimum important change of 0.2 SD was 0.78, i.e., likely to be important. After correction for allometric growth rates, left ventricular hypertrophy and lean body mass changes do not occur at the same magnitude in response to chronic exercise.
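    The allometric model Y = aX^b is linearized by taking logarithms of both sides, so the exponent b is the slope of a log-log regression; the sketch below uses simulated lean body mass and left ventricular mass values, not the study data.

```python
# Log-log regression to recover the allometric exponent b and constant a in Y = a*X^b.
import numpy as np

rng = np.random.default_rng(6)
lean_mass = rng.uniform(55, 80, 116)                                  # kg (simulated)
lv_mass = 2.0 * lean_mass ** 1.1 * np.exp(rng.normal(0, 0.05, 116))   # g (simulated)

b, log_a = np.polyfit(np.log(lean_mass), np.log(lv_mass), 1)
print("allometric exponent b:", round(b, 2), "scaling constant a:", round(np.exp(log_a), 2))
```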

  19. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power law decay, with density f(x) ≃ f0 x^-2, with the coefficient depending on the means and variances of the numerator and denominator and their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [-1, 1). For ρ = -1 we obtain a particularly simple closed form solution for all x ∈ R. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has density that decays with an x^-2 power law. Various parameter limits are established.
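
    A quick Monte Carlo check of the x^-2 tail is sketched below; the means, standard deviations, and correlation are arbitrary choices. A density decaying like x^-2 implies the tail probability P(|X| > t) falls off roughly like 1/t, so t * P(|X| > t) should be approximately constant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated normal numerator/denominator (means, sds, rho are illustrative)
mu = np.array([0.5, 1.0])
sd = np.array([1.0, 0.8])
rho = 0.3
cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])
num, den = rng.multivariate_normal(mu, cov, size=1_000_000).T
x = num / den

# If f(x) ~ f0 * x^-2, then t * P(|X| > t) should be roughly constant in t.
for t in [10, 20, 40, 80]:
    print(t, round(np.mean(np.abs(x) > t) * t, 4))
```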

  20. Characterization of renal response to prolonged immersion in normal man

    NASA Technical Reports Server (NTRS)

    Epstein, M.; Denunzio, A. G.; Ramachandran, M.

    1980-01-01

    During the initial phase of space flight, there is a translocation of fluid from the lower parts of the body to the central vascular compartment, with a resultant natriuresis, diuresis, and weight loss. Because water immersion is regarded as an appropriate model for studying the redistribution of fluid that occurs in weightlessness, an immersion study of relatively prolonged duration was carried out in order to characterize the temporal profile of the renal adaptation to central hypervolemia. Twelve normal male subjects underwent an immersion study of 8-h duration in the sodium-replete state. Immersion resulted in marked natriuresis and diuresis, which were sustained throughout the immersion period. The failure of the natriuresis and diuresis of immersion to abate or cease despite marked extracellular fluid volume contraction, as evidenced by a mean weight loss of -2.2 ± 0.3 kg, suggests that central blood volume was not restored to normal and that some degree of central hypervolemia probably persisted.

  1. Examining dental expenditure and dental insurance accounting for probability of incurring expenses.

    PubMed

    Teusner, Dana; Smith, Valerie; Gnanamanickam, Emmanuel; Brennan, David

    2017-04-01

    There are few studies of dental service expenditure in Australia. Although dental insurance status is strongly associated with a higher probability of dental visiting, some studies indicate that there is little variation in expenditure by insurance status among those who attend for care. Our objective was to assess the overall impact of insurance on expenditures by modelling the association between insurance and expenditure accounting for variation in the probability of incurring expenses, that is dental visiting. A sample of 3000 adults (aged 30-61 years) was randomly selected from the Australian electoral roll. Dental service expenditures were collected prospectively over 2 years by client-held log books. Questionnaires collecting participant characteristics were administered at baseline, 12 months and 24 months. Unadjusted and adjusted ratios of expenditure were estimated using marginalized two-part log-skew-normal models. Such models accommodate highly skewed data and estimate effects of covariates on the overall marginal mean while accounting for the probability of incurring expenses. Baseline response was 39%; of these, 40% (n = 438) were retained over the 2-year period. Only participants providing complete data were included in the analysis (n = 378). Of these, 68.5% were insured, and 70.9% accessed dental services of which nearly all (97.7%) incurred individual dental expenses. The mean dental service expenditure for the total sample (those who did and did not attend) for dental care was AUS$788. Model-adjusted ratios of mean expenditures were higher for the insured (1.61; 95% CI 1.18, 2.20), females (1.38; 95% CI 1.06, 1.81), major city residents (1.43; 95% CI 1.10, 1.84) and those who brushed their teeth twice or more a day (1.50; 95% CI 1.15, 1.96) than their respective counterparts. Accounting for the probability of incurring dental expenses, and other explanatory factors, insured working-aged adults had (on average) approximately 60% higher individual dental service expenditures than uninsured adults. The analytical approach adopted in this study is useful for estimating effects on dental expenditure when a variable is associated with both the probability of visiting for care, and with the types of services received. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
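
    The two-part logic (probability of incurring any expense times the mean expense among attenders) can be sketched as below; the visit probabilities and log-normal spending parameters are invented placeholders, and the sketch uses simple group means rather than the marginalized two-part log-skew-normal model of the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 378

# Hypothetical data: insurance status, probability of visiting, and log-normal
# expenditure among those who incur expenses (all parameters are illustrative).
insured = rng.random(n) < 0.685
p_visit = np.where(insured, 0.78, 0.60)          # insured more likely to attend
visited = rng.random(n) < p_visit
log_mu = np.where(insured, 6.6, 6.3)             # higher spend given attendance
expense = np.where(visited, rng.lognormal(log_mu, 0.9, n), 0.0)

# Two-part logic: overall mean expenditure = P(visit) * E[expense | expense > 0],
# so the marginal mean reflects both the probability of incurring expenses and
# the level of expenditure among attenders.
mean_ins = expense[insured].mean()
mean_unins = expense[~insured].mean()
print(f"ratio of marginal mean expenditures (insured/uninsured): {mean_ins / mean_unins:.2f}")
```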

  2. A cellular automata model for avascular solid tumor growth under the effect of therapy

    NASA Astrophysics Data System (ADS)

    Reis, E. A.; Santos, L. B. L.; Pinho, S. T. R.

    2009-04-01

    Tumor growth has long been a target of investigation within the context of mathematical and computer modeling. The objective of this study is to propose and analyze a two-dimensional stochastic cellular automata model to describe avascular solid tumor growth, taking into account both the competition between cancer cells and normal cells for nutrients and/or space and a time-dependent proliferation of cancer cells. Gompertzian growth, characteristic of some tumors, is described and some of the features of the time-spatial pattern of solid tumors, such as compact morphology with irregular borders, are captured. The parameter space is studied in order to analyze the occurrence of necrosis and the response to therapy. Our findings suggest that transitions exist between necrotic and non-necrotic phases (no-therapy cases), and between the states of cure and non-cure (therapy cases). To analyze cure, the control and order parameters are, respectively, the highest probability of cancer cell proliferation and the probability of the therapeutic effect on cancer cells. With respect to patterns, it is possible to observe the inner necrotic core and the effect of the therapy destroying the tumor from its outer borders inwards.

  3. Simulation study on characteristics of long-range interaction in randomly asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying

    2015-04-01

    In this paper we investigate the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update via Monte Carlo simulations and theoretical analysis. Particles in the model first try to hop over successive unoccupied sites with a probability q, which differs from previous exclusion process models. The probability q may represent the random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases: the low-density (LD), high-density (HD), and maximal-current (MC) phases. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
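
    A minimal random-update simulation with long-range hopping, in the spirit of the model described above, is sketched below; the lattice size, rates α and β, and hop probability q are arbitrary, and the update rule is a plausible reading of the abstract rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
L, steps = 200, 200_000
alpha, beta, q = 0.3, 0.6, 0.5          # entry, exit, long-range hop probabilities (illustrative)
lattice = np.zeros(L, dtype=int)

for _ in range(steps):
    i = rng.integers(-1, L)             # -1 stands for the entry reservoir (random update)
    if i == -1:
        if lattice[0] == 0 and rng.random() < alpha:
            lattice[0] = 1              # injection at the left boundary
    elif i == L - 1:
        if lattice[i] == 1 and rng.random() < beta:
            lattice[i] = 0              # extraction at the right boundary
    elif lattice[i] == 1 and lattice[i + 1] == 0:
        # hop forward; with probability q keep hopping over successive empty sites
        j = i + 1
        while j + 1 < L and lattice[j + 1] == 0 and rng.random() < q:
            j += 1
        lattice[i], lattice[j] = 0, 1

print("bulk density ~", round(lattice[L // 4:3 * L // 4].mean(), 3))
```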

  4. Evaluation of availability of water from drift aquifers near the Pomme de Terre and Chippewa rivers, western Minnesota

    USGS Publications Warehouse

    Delin, G.N.

    1987-01-01

    The model was used to simulate the effects of below-normal precipitation (drought) and hypothetical increases in ground-water development. Model results indicate that reduced recharge and increased pumping during a three-year extended drought probably would lower water levels 2 to 6 feet regionally in the surficial aquifer and in the Appleton and Benson-middle aquifers and as much as 11 feet near aquifer boundaries. Ground-water discharge to the Pomme de Terre and Chippewa Rivers in the modeled area probably would be reduced during the simulated drought by 15.2 and 7.4 cubic feet per second, respectively, compared to 1982 conditions. The addition of 30 hypothetical wells in the Benson-middle aquifer near Benson, pumping a total of 810 million gallons per year, resulted in water-level declines of as much as 1.3 and 2.7 feet in the surficial and Benson-middle aquifers, respectively. The addition of 28 hypothetical wells in the Appleton aquifer east and southeast of Appleton, pumping a total of 756 million gallons per year, lowered water levels in the surficial and Appleton confined aquifers as much as 5 feet.

  5. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    PubMed

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let the characteristics of known natural environmental covariates (e.g., colour and distribution shape) guide us in our choice of how to best model the impact of coloured environmental variation on population dynamics.
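
    A small sketch of AR(1) environmental noise generation is given below; it only illustrates how the realized skewness and kurtosis of short coloured series spread out, with kappa values and series lengths chosen arbitrarily, and it does not implement the 1/f or spectral-mimicry methods of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def ar1_series(kappa, n, reps):
    """AR(1) noise x_t = kappa*x_{t-1} + eps_t, scaled to keep unit variance.

    kappa > 0 reddens the series, kappa < 0 blues it, kappa = 0 is white.
    """
    x = np.zeros((reps, n))
    eps = rng.normal(0, 1, (reps, n))
    for t in range(1, n):
        x[:, t] = kappa * x[:, t - 1] + np.sqrt(1 - kappa**2) * eps[:, t]
    return x

# At finite series length, the realized skewness/kurtosis of individual series
# spread more widely as the colour is reddened (illustrative check only).
for kappa in [0.0, 0.5, 0.9]:
    series = ar1_series(kappa, n=50, reps=2000)
    sk = stats.skew(series, axis=1)
    ku = stats.kurtosis(series, axis=1)
    print(f"kappa={kappa:.1f}  sd(skew)={sk.std():.2f}  mean(kurtosis)={ku.mean():.2f}")
```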

  6. Integrated Cognitive-neuroscience Architectures for Understanding Sensemaking (ICArUS): Phase 2 Test and Evaluation Development Guide

    DTIC Science & Technology

    2014-11-01

    location, based on the evidence provided in Datum (OSINT, IMINT, and the BLUEBOOK). The targetSum and normalizationConstraint attributes indicate that the... [Garbled XML schema excerpt defining attack-probability report and probe elements, e.g., P(Attack | IMINT, OSINT) and P(Attack | HUMINT, IMINT, OSINT), with targetSum and normalizationConstraint="LessThanOrEqualTo" attributes; characters corrupted during extraction.]

  7. Normal people working in normal organizations with normal equipment: system safety and cognition in a mid-air collision.

    PubMed

    de Carvalho, Paulo Victor Rodrigues; Gomes, José Orlando; Huber, Gilbert Jacob; Vidal, Mario Cesar

    2009-05-01

    A fundamental challenge in improving the safety of complex systems is to understand how accidents emerge in normal working situations, with equipment functioning normally in normally structured organizations. We present a field study of the en route mid-air collision between a commercial carrier and an executive jet, in the clear afternoon Amazon sky, in which 154 people lost their lives, that illustrates one response to this challenge. Our focus was on how and why the several safety barriers of a well-structured air traffic system melted down, enabling the occurrence of this tragedy without any catastrophic component failure and in a situation where everything was functioning normally. We identify strong consistencies and feedbacks regarding factors of system day-to-day functioning that made monitoring and awareness difficult, and the cognitive strategies that operators have developed to deal with overall system behavior. These findings emphasize the active problem-solving behavior needed in air traffic control work, and highlight how the day-to-day functioning of the system can jeopardize such behavior. An immediate consequence is that safety managers and engineers should review their traditional safety approaches and accident models based on equipment failure probability, linear combinations of failures, rules and procedures, and human errors, to deal with complex patterns of coincidence possibilities, unexpected links, resonance among system functions and activities, and system cognition.

  8. Incorporating big data into treatment plan evaluation: Development of statistical DVH metrics and visualization dashboards.

    PubMed

    Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Haken, Randall Ten

    2017-01-01

    To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (ie, median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experiences was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in a plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (ie, frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience. Statistical DVH offers an easy-to-read, detailed, and comprehensive way to visualize the quantitative comparison with historical experiences and among institutions. WES and GEM metrics offer a flexible means of incorporating discrete threshold-prioritizations and historic context into a set of standardized scoring metrics. Together, they provide a practical approach for incorporating big data into clinical practice for treatment plan evaluations.
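
    The statistical-DVH idea (per-dose-bin quantiles over a historical cohort) can be sketched as follows; the cohort curves, the dose grid, and the percentile-based score are invented placeholders and are not the WES/GEM definitions used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical cohort of volume-normalized DVH curves sampled on a common dose grid
dose_grid = np.linspace(0, 70, 71)                                     # Gy
cohort = np.clip(1 - dose_grid / rng.uniform(40, 70, (200, 1)), 0, 1)  # fractional volume

# Statistical DVH: per-dose-bin median and quartiles of the historical curves
dvh_median = np.median(cohort, axis=0)
dvh_q1, dvh_q3 = np.percentile(cohort, [25, 75], axis=0)

# A simple experience-style score for a new plan: the average percentile of its
# DVH curve within the historical distribution (lower = less dose than typical).
new_plan = np.clip(1 - dose_grid / 55.0, 0, 1)
percentiles = (cohort < new_plan).mean(axis=0)
print("mean historical percentile of the new plan:", round(float(percentiles.mean()), 2))
```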

  9. Zealotry effects on opinion dynamics in the adaptive voter model

    NASA Astrophysics Data System (ADS)

    Klamser, Pascal P.; Wiedermann, Marc; Donges, Jonathan F.; Donner, Reik V.

    2017-11-01

    The adaptive voter model has been widely studied as a conceptual model for opinion formation processes on time-evolving social networks. Past studies on the effect of zealots, i.e., nodes aiming to spread their fixed opinion throughout the system, only considered the voter model on a static network. Here we extend the study of zealotry to the case of an adaptive network topology co-evolving with the state of the nodes and investigate opinion spreading induced by zealots depending on their initial density and connectedness. Numerical simulations reveal that below the fragmentation threshold a low density of zealots is sufficient to spread their opinion to the whole network. Beyond the transition point, zealots must exhibit an increased degree as compared to ordinary nodes for an efficient spreading of their opinion. We verify the numerical findings using a mean-field approximation of the model yielding a low-dimensional set of coupled ordinary differential equations. Our results imply that the spreading of the zealots' opinion in the adaptive voter model is strongly dependent on the link rewiring probability and the average degree of normal nodes in comparison with that of the zealots. In order to avoid a complete dominance of the zealots' opinion, there are two possible strategies for the remaining nodes: adjusting the probability of rewiring and/or the number of connections with other nodes, respectively.

  10. An empirical probability density distribution of planetary ionosphere storms with geomagnetic precursors

    NASA Astrophysics Data System (ADS)

    Gulyaeva, Tamara; Stanislawska, Iwona; Arikan, Feza; Arikan, Orhan

    The probability of occurrence of the positive and negative planetary ionosphere storms is evaluated using the W index maps produced from Global Ionospheric Maps of Total Electron Content, GIM-TEC, provided by Jet Propulsion Laboratory, and transformed from geographic coordinates to magnetic coordinates frame. The auroral electrojet AE index and the equatorial disturbance storm time Dst index are investigated as precursors of the global ionosphere storm. The superposed epoch analysis is performed for 77 intense storms (Dst≤-100 nT) and 227 moderate storms (-100

  11. Binary data corruption due to a Brownian agent

    NASA Astrophysics Data System (ADS)

    Newman, T. J.; Triampo, Wannapong

    1999-05-01

    We introduce a model of binary data corruption induced by a Brownian agent (active random walker) on a d-dimensional lattice. A continuum formulation allows the exact calculation of several quantities related to the density of corrupted bits ρ, for example, the mean of ρ and the density-density correlation function. Excellent agreement is found with the results from numerical simulations. We also calculate the probability distribution of ρ in d=1, which is found to be log-normal, indicating that the system is governed by extreme fluctuations.

  12. Effects of confinement and electron transport on magnetic switching in single Co nanoparticles

    PubMed Central

    Jiang, W.; Birk, F. T.; Davidović, D.

    2013-01-01

    This work reports the first study of current-driven magnetization noise in a single, nanometer-scale, ferromagnetic (Co) particle attached to normal metal leads by high-resistance tunneling junctions. As the tunnel current increases at low temperature, the magnetic switching field decreases and its probability distribution widens, while the temperature of the environment remains nearly constant. These observations demonstrate nonequilibrium magnetization noise. A classical model of the noise is provided, in which the spin-orbit interaction plays a central role in driving magnetic tunneling transitions. PMID:23383370

  13. Anisotropic Defect-Mediated Melting of Two-Dimensional Colloidal Crystals

    NASA Astrophysics Data System (ADS)

    Eisenmann, C.; Gasser, U.; Keim, P.; Maret, G.

    2004-09-01

    The melting transition of anisotropic two-dimensional (2D) crystals is studied in a model system of superparamagnetic colloids. The anisotropy of the induced dipole-dipole interaction is varied by tilting the external magnetic field off the normal to the particle plane. By analyzing the time-dependent Lindemann parameter as well as translational and orientational order we observe a 2D smecticlike phase. The Kosterlitz-Thouless-Halperin-Nelson-Young scenario of isotropic melting is modified: dislocation pairs and dislocations appear with different probabilities depending on their orientation with respect to the in-plane field.

  14. A sustainable approach to planning housing and social care: if not now, when?

    PubMed

    Foord, M; Simic, P

    2001-05-01

    The publication of Supporting People (Department of Social Security 1998) has given urgency to discussions around needs analysis, planning, user voice and the development of 'normal' housing for people with support needs. This paper explores a project, which aimed to design a collaborative model for identifying supported housing needs. We provide an overview of the research background, local imperatives and findings, and point to the probability of increasing conflict between the policy of developing 'sustainable communities' and the development of housing for people with support needs.

  15. Mechanism-based model for tumor drug resistance.

    PubMed

    Kuczek, T; Chan, T C

    1992-01-01

    The development of tumor resistance to cytotoxic agents has important implications in the treatment of cancer. If supported by experimental data, mathematical models of resistance can provide useful information on the underlying mechanisms and aid in the design of therapeutic regimens. We report on the development of a model of tumor-growth kinetics based on the assumption that the rates of cell growth in a tumor are normally distributed. We further assumed that the growth rate of each cell is proportional to its rate of total pyrimidine synthesis (de novo plus salvage). Using an ovarian carcinoma cell line (2008) and resistant variants selected by chronic exposure to a pyrimidine antimetabolite, N-phosphonacetyl-L-aspartate (PALA), we derived a simple and specific analytical form describing the growth curves generated in 72 h growth assays. The model assumes that the rate of de novo pyrimidine synthesis, denoted alpha, is shifted down by an amount proportional to the log10 PALA concentration and that cells whose rate of pyrimidine synthesis falls below a critical level, denoted alpha0, can no longer grow. This is described by the equation: Probability(growth) = Probability(alpha0 < alpha − k × log10[PALA]), where k is a constant. This model predicts that when growth curves are plotted on probit paper, they will produce straight lines. This prediction is in agreement with the data we obtained for the 2008 cells. Another prediction of this model is that the same probit plots for the resistant variants should shift to the right in a parallel fashion. Probit plots of the dose-response data obtained for each resistant 2008 line following chronic exposure to PALA again confirmed this prediction. Correlation of the rightward shift of dose responses to uridine transport (r = 0.99) also suggests that salvage metabolism plays a key role in tumor-cell resistance to PALA. Furthermore, the slope of the regression lines enables the detection of synergy such as that observed between dipyridamole and PALA. Although the rate-normal model was used to study the rate of salvage metabolism in PALA resistance in the present study, it may be widely applicable to modeling of other resistance mechanisms such as gene amplification of target enzymes.
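
    A sketch of the rate-normal (probit) growth model is given below, with arbitrary values for the mean rate, its spread, the critical level, and the PALA sensitivity k; it only reproduces the straight-line-on-probit-paper behavior described above.

```python
import numpy as np
from scipy.stats import norm

# Rate-normal model: growth rates alpha ~ N(mu, sigma); PALA shifts the effective
# rate down by k*log10([PALA]); cells whose rate falls below alpha0 stop growing.
mu, sigma = 1.0, 0.2        # illustrative distribution of pyrimidine-synthesis rates
alpha0, k = 0.4, 0.15       # illustrative critical rate and PALA sensitivity

pala = np.logspace(-1, 2, 7)            # drug concentrations (arbitrary units)
p_growth = norm.cdf((mu - k * np.log10(pala) - alpha0) / sigma)

# On probit paper, probit(p_growth) is linear in log10([PALA]); a resistant line
# with higher salvage capacity would shift the same straight line to the right.
print(np.round(norm.ppf(p_growth), 2))  # equally spaced values => straight probit line
```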

  16. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  17. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  18. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  19. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  20. 12 CFR 700.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... that the facts that caused the deficient share-asset ratio no longer exist; and (ii) The likelihood of further depreciation of the share-asset ratio is not probable; and (iii) The return of the share-asset ratio to its normal limits within a reasonable time for the credit union concerned is probable; and (iv...

  1. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  2. The influence of local majority opinions on the dynamics of the Sznajd model

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno

    2014-03-01

    In this work we study a Sznajd-like opinion dynamics on a square lattice of linear size L. For this purpose, we consider that each agent has a convincing power C, which is a time-dependent quantity. Each group of four agents with high convincing power sharing the same opinion may convince its neighbors to follow the group opinion, which induces an increase of the group's convincing power. In addition, we have considered that a group with a local majority opinion (3 up/1 down spins or 1 up/3 down spins) can persuade the agents neighboring the group with probability p, provided the group's convincing power is high enough. The two mechanisms (convincing powers and probability p) lead to an increase of the competition among the opinions, which avoids dictatorship (full consensus, all spins parallel) for a wide range of model parameters, and favors the occurrence of democratic states (partial order, the majority of spins pointing in one direction). We have found that the relaxation times of the model follow log-normal distributions, and that the average relaxation time τ grows with system size as τ ~ L^(5/2), independent of p. We also discuss the occurrence of the usual phase transition of the Sznajd model.

  3. Wave turbulence in shallow water models.

    PubMed

    Clark di Leoni, P; Cobelli, P J; Mininni, P D

    2014-06-01

    We study wave turbulence in shallow water flows in numerical simulations using two different approximations: the shallow water model and the Boussinesq model with weak dispersion. The equations for both models were solved using periodic grids with up to 2048^2 points. In all simulations, the Froude number varies between 0.015 and 0.05, while the Reynolds number and level of dispersion are varied in a broader range to span different regimes. In all cases, most of the energy in the system remains in the waves, even after integrating the system for very long times. For shallow flows, nonlinear waves are nondispersive and the spectrum of potential energy is compatible with ~k^-2 scaling. For deeper (Boussinesq) flows, the nonlinear dispersion relation as directly measured from the wave and frequency spectrum (calculated independently) shows signatures of dispersion, and the spectrum of potential energy is compatible with predictions of weak turbulence theory, ~k^(-4/3). In this latter case, the nonlinear dispersion relation differs from the linear one and has two branches, which we explain with a simple qualitative argument. Finally, we study probability density functions of the surface height and find that in all cases the distributions are asymmetric. The probability density function can be approximated by a skewed normal distribution as well as by a Tayfun distribution.

  4. Ditching Tests with a 1/16-Size Model of the Navy XP2V-1 Airplane at the Langley Tank No. 2 Monorail

    NASA Technical Reports Server (NTRS)

    Fisher, Lloyd J.; Tarshis, Robert P.

    1947-01-01

    Tests were made with a 1/16-size dynamically similar model of the Navy XP2V-1 airplane to study its performance when ditched. The model was ditched in calm water at the Langley tank no. 2 monorail. Various landing attitudes, speeds, and conditions of damage were simulated. The performance of the model was determined and recorded from visual observations, by recording time histories of the longitudinal decelerations, and by taking motion pictures of the ditchings. From the results of the tests with the model, the following conclusions were drawn: 1. The airplane should be ditched at the normal landing attitude. The flaps should be fully extended to obtain the lowest possible landing speed; 2. Extensive damage will occur in a ditching and the airplane probably will dive violently after a run of about 2 fuselage lengths. Maximum longitudinal decelerations up to about 4g will be encountered; and 3. If a trapezoidal hydroflap 4 feet by 2 feet by 1 foot is attached to the airplane at station 192.4, diving will be prevented and the airplane will probably porpoise in a run of about 4 fuselage lengths with a maximum longitudinal deceleration of less than 3.5g.

  5. Forward modeling of gravity data using geostatistically generated subsurface density variations

    USGS Publications Warehouse

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
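
    A minimal forward-gravity sketch in the spirit of this approach is shown below: each cell of one stochastic density realization is treated as a point mass and its vertical attraction is summed at surface stations. The geometry, densities, and cell volumes are illustrative only, and real codes typically use prism formulas rather than point masses.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward_gravity(stations, cell_centers, cell_volumes, densities):
    """Vertical gravity anomaly at surface stations from discretized density cells.

    Each cell is approximated as a point mass at its center -- a coarse but
    simple discretization for a stochastic forward-modeling sketch.
    """
    gz = np.zeros(len(stations))
    for center, vol, rho in zip(cell_centers, cell_volumes, densities):
        dr = stations - center                      # cell-to-station vectors
        r = np.linalg.norm(dr, axis=1)
        gz += G * rho * vol * dr[:, 2] / r**3       # downward positive (cells lie below)
    return gz

# One geostatistical realization of density contrasts (all numbers illustrative)
rng = np.random.default_rng(6)
cells = np.array([[x, 0.0, -200.0] for x in range(0, 2000, 100)], dtype=float)
densities = rng.normal(150.0, 50.0, len(cells))     # heterogeneous contrast, kg/m^3
stations = np.array([[x, 0.0, 0.0] for x in range(0, 2000, 50)], dtype=float)
gz = forward_gravity(stations, cells, np.full(len(cells), 100.0**3), densities)
print("anomaly range:", round((gz.max() - gz.min()) / 1e-5, 3), "mGal")
```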

  6. Evaluation of normal lung tissue complication probability in gated and conventional radiotherapy using the 4D XCAT digital phantom.

    PubMed

    Shahzadeh, Sara; Gholami, Somayeh; Aghamiri, Seyed Mahmood Reza; Mahani, Hojjat; Nabavi, Mansoure; Kalantari, Faraz

    2018-06-01

    The present study was conducted to investigate normal lung tissue complication probability in gated and conventional radiotherapy (RT) as a function of diaphragm motion, lesion size, and its location using 4D-XCAT digital phantom in a simulation study. Different time series of 3D-CT images were generated using the 4D-XCAT digital phantom. The binary data obtained from this phantom were then converted to the digital imaging and communication in medicine (DICOM) format using an in-house MATLAB-based program to be compatible with our treatment planning system (TPS). The 3D-TPS with superposition computational algorithm was used to generate conventional and gated plans. Treatment plans were generated for 36 different XCAT phantom configurations. These included four diaphragm motions of 20, 25, 30 and 35 mm, three lesion sizes of 3, 4, and 5 cm in diameter and each tumor was placed in four different lung locations (right lower lobe, right upper lobe, left lower lobe and left upper lobe). The complication of normal lung tissue was assessed in terms of mean lung dose (MLD), the lung volume receiving ≥20 Gy (V20), and normal tissue complication probability (NTCP). The results showed that the gated RT yields superior outcomes in terms of normal tissue complication compared to the conventional RT. For all cases, the gated radiation therapy technique reduced the mean dose, V20, and NTCP of lung tissue by up to 5.53 Gy, 13.38%, and 23.89%, respectively. The results of this study showed that the gated RT provides significant advantages in terms of the normal lung tissue complication, compared to the conventional RT, especially for the lesions near the diaphragm. Copyright © 2018 Elsevier Ltd. All rights reserved.
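
    For reference, a generic Lyman-Kutcher-Burman NTCP calculation from a differential DVH is sketched below; the DVH shapes and the parameter values (TD50, m, n) are illustrative placeholders, not those used in this study or its treatment planning system.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_bins, vol_fractions, td50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    dose_bins     : dose per bin (Gy)
    vol_fractions : fractional organ volume in each bin (sums to 1)
    td50, m, n    : LKB parameters (illustrative values, not fitted)
    """
    # The generalized equivalent uniform dose (gEUD) reduces the DVH to one number
    geud = np.sum(vol_fractions * dose_bins ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)

# Hypothetical lung differential DVHs for a gated and a conventional plan
dose = np.linspace(0.5, 60, 120)
dvh_conv = np.exp(-dose / 12); dvh_conv /= dvh_conv.sum()
dvh_gated = np.exp(-dose / 9); dvh_gated /= dvh_gated.sum()
for label, dvh in [("conventional", dvh_conv), ("gated", dvh_gated)]:
    print(label, round(lkb_ntcp(dose, dvh, td50=24.5, m=0.18, n=0.87), 4))
```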

  7. DEM simulation of flow of dumbbells on a rough inclined plane

    NASA Astrophysics Data System (ADS)

    Mandal, Sandip; Khakhar, Devang

    2015-11-01

    The rheology of non-spherical granular materials such as food grains, sugar cubes, sand, pharmaceutical pills, among others, is not understood well. We study the flow of non-spherical dumbbells of different aspect ratios on a rough inclined plane by using soft sphere DEM simulations. The dumbbells are generated by fusing two spheres together and a linear spring dashpot model along with Coulombic friction is employed to calculate inter-particle forces. At steady state, a uni-directional shear flow is obtained which allows for a detailed study of the rheology. The effect of aspect ratio and inclination angle on mean velocity, volume fraction, shear rate, shear stress, pressure and viscosity profiles is examined. The effect of aspect ratio on probability distribution of angles, made by the major axes of the dumbbells with the flow direction, average angle and order parameter is analyzed. The dense flow rheology is well explained by Bagnold's law and the constitutive laws of JFP model. The dependencies of first and second normal stress differences on aspect ratio are studied. The probability distributions of translational and rotational velocity are analyzed.

  8. Cross-stream migration of active particles

    NASA Astrophysics Data System (ADS)

    Uspal, William; Katuri, Jaideep; Simmchen, Juliane; Miguel-Lopez, Albert; Sanchez, Samuel

    For natural microswimmers, the interplay of swimming activity and external flow can promote robust directed motion, e.g. propulsion against (upstream rheotaxis) or perpendicular to the direction of flow. These effects are generally attributed to their complex body shapes and flagellar beat patterns. Here, using catalytic Janus particles as a model system, we report on a strong directional response that naturally emerges for spherical active particles in a channel flow. The particles align their propulsion axis to be perpendicular to both the direction of flow and the normal vector of a nearby bounding surface. We develop a deterministic theoretical model that captures this spontaneous transverse orientational order. We show how the directional response emerges from the interplay of external shear flow and swimmer/surface interactions (e.g., hydrodynamic interactions) that originate in swimming activity. Finally, adding the effect of thermal noise, we obtain probability distributions for the swimmer orientation that show good agreement with the experimental probability distributions. Our findings show that the qualitative response of microswimmers to flow is sensitive to the detailed interaction between individual microswimmers and bounding surfaces.

  9. [Clinical evaluation of heavy-particle radiotherapy using dose volume histogram (DVH)].

    PubMed

    Terahara, A; Nakano, T; Tsujii, H

    1998-01-01

    Radiotherapy with heavy particles such as protons and heavy charged particles is a promising modality for the treatment of localized malignant tumors because of its good dose distribution. A dose calculation and radiotherapy planning system, which is essential for this kind of treatment, has been developed in recent years. It has the capability to compute the dose volume histogram (DVH), which contains dose-volume information for the target volume and other volumes of interest. Recently, DVH has been commonly used to evaluate and compare dose distributions in radiotherapy with both photons and heavy particles, and it shows that a superior dose distribution is obtained in heavy-particle radiotherapy. DVH is also utilized for the evaluation of dose distribution in relation to clinical outcomes. In addition, models such as normal tissue complication probability (NTCP) and tumor control probability (TCP), which can be calculated from the DVH, have been proposed by several authors; they are applied to evaluate dose distributions themselves and to evaluate them in relation to clinical results. DVH is now a useful and important tool, but further studies are needed to use DVH and these models practically for the clinical evaluation of heavy-particle radiotherapy.

  10. Financial derivative pricing under probability operator via Esscher transformation

    NASA Astrophysics Data System (ADS)

    Achi, Godswill U.

    2014-10-01

    The problem of pricing contingent claims has been extensively studied for non-Gaussian models, and in particular, a Black-Scholes formula has been derived for the NIG asset pricing model. This approach was first developed in insurance pricing [9], where the original distortion function was defined in terms of the normal distribution. The approach was later studied in [6], where the standard Black-Scholes contingent pricing and distortion-based contingent pricing were compared. So, in this paper, we aim at using distortion operators based on the Cauchy distribution under a simple transformation to price contingent claims. We also show that we can recover the Black-Scholes formula using the distribution. Similarly, in a financial market in which the asset price is represented by a stochastic differential equation with respect to Brownian motion, a price mechanism based on the characteristic Esscher measure can generate approximately arbitrage-free financial derivative prices. The price representation derived involves the probability Esscher measure and the Esscher martingale measure, and under a new complex-valued measure φ(u) evaluated at the characteristic exponent φ_X(u) of X_t we recover the Black-Scholes formula for financial derivative prices.

  11. System and method for statistically monitoring and analyzing sensed conditions

    DOEpatents

    Pebay, Philippe P [Livermore, CA; Brandt, James M [Dublin, CA; Gentile, Ann C [Dublin, CA; Marzouk, Youssef M [Oakland, CA; Hale, Darrian J [San Jose, CA; Thompson, David C [Livermore, CA

    2011-01-04

    A system and method of monitoring and analyzing a plurality of attributes for an alarm condition is disclosed. The attributes are processed and/or unprocessed values of sensed conditions of a collection of a statistically significant number of statistically similar components subjected to varying environmental conditions. The attribute values are used to compute the normal behaviors of some of the attributes and also used to infer parameters of a set of models. Relative probabilities of some attribute values are then computed and used along with the set of models to determine whether an alarm condition is met. The alarm conditions are used to prevent or reduce the impact of impending failure.

  12. System and method for statistically monitoring and analyzing sensed conditions

    DOEpatents

    Pebay, Philippe P [Livermore, CA; Brandt, James M [Dublin, CA; Gentile, Ann C [Dublin, CA; Marzouk, Youssef M [Oakland, CA; Hale, Darrian J [San Jose, CA; Thompson, David C [Livermore, CA

    2011-01-25

    A system and method of monitoring and analyzing a plurality of attributes for an alarm condition is disclosed. The attributes are processed and/or unprocessed values of sensed conditions of a collection of a statistically significant number of statistically similar components subjected to varying environmental conditions. The attribute values are used to compute the normal behaviors of some of the attributes and also used to infer parameters of a set of models. Relative probabilities of some attribute values are then computed and used along with the set of models to determine whether an alarm condition is met. The alarm conditions are used to prevent or reduce the impact of impending failure.

  13. System and method for statistically monitoring and analyzing sensed conditions

    DOEpatents

    Pebay, Philippe P [Livermore, CA; Brandt, James M; Gentile, Ann C; Marzouk, Youssef M; Hale, Darrian J; Thompson, David C [Livermore, CA

    2010-07-13

    A system and method of monitoring and analyzing a plurality of attributes for an alarm condition is disclosed. The attributes are processed and/or unprocessed values of sensed conditions of a collection of a statistically significant number of statistically similar components subjected to varying environmental conditions. The attribute values are used to compute the normal behaviors of some of the attributes and also used to infer parameters of a set of models. Relative probabilities of some attribute values are then computed and used along with the set of models to determine whether an alarm condition is met. The alarm conditions are used to prevent or reduce the impact of impending failure.

  14. Estimation of the incubation period of invasive aspergillosis by survival models in acute myeloid leukemia patients.

    PubMed

    Bénet, Thomas; Voirin, Nicolas; Nicolle, Marie-Christine; Picot, Stephane; Michallet, Mauricette; Vanhems, Philippe

    2013-02-01

    The duration of the incubation of invasive aspergillosis (IA) remains unknown. The objective of this investigation was to estimate the time interval between aplasia onset and that of IA symptoms in acute myeloid leukemia (AML) patients. A single-centre prospective survey (2004-2009) included all patients with AML and probable/proven IA. Parametric survival models were fitted to the distribution of the time intervals between aplasia onset and IA. Overall, 53 patients had IA after aplasia, with the median observed time interval between the two being 15 days. Based on log-normal distribution, the median estimated IA incubation period was 14.6 days (95% CI; 12.8-16.5 days).

  15. Cost-effectiveness of Wait Time Reduction for Intensive Behavioral Intervention Services in Ontario, Canada.

    PubMed

    Piccininni, Caroline; Bisnaire, Lise; Penner, Melanie

    2017-01-01

    Earlier access to intensive behavioral intervention (IBI) is associated with improved outcomes for children with severe autism spectrum disorder (ASD); however, there are long waiting times for this program. No analyses have been performed modeling the cost-effectiveness of wait time reduction for IBI. Our objective was to model the starting age for IBI with wait time reduced by half (RWT) and eliminated wait time (EWT), and to perform a cost-effectiveness analysis comparing RWT and EWT with current wait time (CWT) from government and societal perspectives. Published waiting times were used to model the mean starting age for IBI for CWT, RWT, and EWT in children diagnosed with severe ASD who were treated at Ontario's Autism Intervention Program. Inputs were loaded into a decision analytic model, with an annual discount rate of 3% applied. Incremental cost-effectiveness ratios (ICERs) were determined. One-way and probabilistic sensitivity analyses were performed to assess the effect of model uncertainty. We used data from the year 2012 (January 1 through December 31), provided by the Children's Hospital of Eastern Ontario IBI center, for the starting ages. Data analysis was done from May through July 2015. The outcome was independence measured in dependency-free life-years (DFLYs) to 65 years of age. To derive this, expected IQ was modeled based on probability of early (age <4 years) or late (age ≥4 years) access to IBI. Probabilities of having an IQ in the normal (≥70) or intellectual disability (<70) range were calculated. The IQ strata were assigned probabilities of achieving an independent (60 DFLYs), semidependent (30 DFLYs), or dependent (0 DFLYs) outcome. Costs were calculated for provincial government and societal perspectives in Canadian dollars (Can$1 = US$0.78). The mean starting ages for IBI were 5.24 years for CWT, 3.89 years for RWT, and 2.71 years for EWT. From the provincial government perspective, EWT was the dominant strategy, generating the most DFLYs for Can$53 000 less per individual to 65 years of age than CWT. From the societal perspective, EWT produced lifetime savings of Can$267 000 per individual compared with CWT. The ICERs were most sensitive to uncertainty in the starting age for IBI and in achieving a normal IQ based on starting age. This study predicts the long-term effect of the current disparity between IBI service needs and the amount of IBI being delivered in the province of Ontario. The results suggest that providing timely access optimizes IBI outcomes, improves future independence, and lessens costs from provincial and societal perspectives.

  16. Estimating the Properties of Hard X-Ray Solar Flares by Constraining Model Parameters

    NASA Technical Reports Server (NTRS)

    Ireland, J.; Tolbert, A. K.; Schwartz, R. A.; Holman, G. D.; Dennis, B. R.

    2013-01-01

    We wish to better constrain the properties of solar flares by exploring how parameterized models of solar flares interact with uncertainty estimation methods. We compare four different methods of calculating uncertainty estimates in fitting parameterized models to Ramaty High Energy Solar Spectroscopic Imager X-ray spectra, considering only statistical sources of error. Three of the four methods are based on estimating the scale-size of the minimum in a hypersurface formed by the weighted sum of the squares of the differences between the model fit and the data as a function of the fit parameters, and are implemented as commonly practiced. The fourth method is also based on the difference between the data and the model, but instead uses Bayesian data analysis and Markov chain Monte Carlo (MCMC) techniques to calculate an uncertainty estimate. Two flare spectra are modeled: one from the Geostationary Operational Environmental Satellite X1.3 class flare of 2005 January 19, and the other from the X4.8 flare of 2002 July 23. We find that the four methods give approximately the same uncertainty estimates for the 2005 January 19 spectral fit parameters, but lead to very different uncertainty estimates for the 2002 July 23 spectral fit. This is because each method implements different analyses of the hypersurface, yielding method-dependent results that can differ greatly depending on the shape of the hypersurface. The hypersurface arising from the 2005 January 19 analysis is consistent with a normal distribution; therefore, the assumptions behind the three non-Bayesian uncertainty estimation methods are satisfied and similar estimates are found. The 2002 July 23 analysis shows that the hypersurface is not consistent with a normal distribution, indicating that the assumptions behind the three non-Bayesian uncertainty estimation methods are not satisfied, leading to differing estimates of the uncertainty. We find that the shape of the hypersurface is crucial in understanding the output from each uncertainty estimation technique, and that a crucial factor determining the shape of the hypersurface is the location of the low-energy cutoff relative to energies where the thermal emission dominates. The Bayesian/MCMC approach also allows us to provide detailed information on probable values of the low-energy cutoff, Ec, a crucial parameter in defining the energy content of the flare-accelerated electrons. We show that for the 2002 July 23 flare data, there is a 95% probability that Ec lies below approximately 40 keV, and a 68% probability that it lies in the range 7-36 keV. Further, the low-energy cutoff is more likely to be in the range 25-35 keV than in any other 10 keV wide energy range. The low-energy cutoff for the 2005 January 19 flare is more tightly constrained to 107 ± 4 keV with 68% probability.

  17. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographic (sEMG) data in several contexts such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine both kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to the small sample size effect and more accurate in sEMG PDF shape screening applications.
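
    A rough illustration of combining kernel density estimation with moment-type shape statistics is sketched below; it is a simple surrogate, not the CSM-based estimators of the paper, and the log-normal population and sample sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def smoothed_moments(sample, n_resample=5000):
    """Skewness and kurtosis computed on a kernel-density-smoothed resample.

    A simple surrogate for a functional (PDF-shape based) statistic; it is not
    the CSM-based estimator described in the abstract.
    """
    kde = stats.gaussian_kde(sample)
    smoothed = kde.resample(n_resample).ravel()
    return stats.skew(smoothed), stats.kurtosis(smoothed)

# Small samples drawn from a log-normal, sEMG-like amplitude distribution
population = rng.lognormal(0.0, 0.5, 200_000)
print("population skew/kurtosis:",
      round(stats.skew(population), 2), round(stats.kurtosis(population), 2))
for _ in range(3):
    small = rng.choice(population, size=100, replace=False)
    raw = (stats.skew(small), stats.kurtosis(small))
    kde_based = smoothed_moments(small)
    print("raw:", tuple(round(v, 2) for v in raw),
          " kde-smoothed:", tuple(round(v, 2) for v in kde_based))
```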

  18. Improving the chi-squared approximation for bivariate normal tolerance regions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.

    1993-01-01

    Let X be a two-dimensional random variable distributed according to N2(mu, Sigma) and let X-bar and S be the respective sample mean and covariance matrix calculated from N observations of X. Given a containment probability beta and a level of confidence gamma, we seek a number c, depending only on N, beta, and gamma, such that the ellipsoid R = {x : (x − X-bar)′ S^-1 (x − X-bar) ≤ c} is a tolerance region of content beta and level gamma; i.e., R has probability gamma of containing at least 100 beta percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute -- a multiple of the ratio of certain chi-squared percentage points -- is badly biased for small N. For the bivariate normal case, most of the bias can be removed by simple adjustment using a factor A which depends on beta and gamma. This paper provides values of A for various beta and gamma so that the simple approximation for c can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte Carlo simulation and simple regression modelling can be used to improve an existing approximation.
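
    Rather than quote a specific closed-form approximation, the sketch below estimates the tolerance factor c for the bivariate case directly by Monte Carlo and compares it with the large-N limit chi-squared percentage point; N, beta, gamma, and the simulation sizes are arbitrary choices.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)

def tolerance_factor(N, p=2, beta=0.90, gamma=0.95, reps=2000, ref_size=20_000):
    """Monte Carlo estimate of c such that the ellipsoid
    (x - Xbar)' S^-1 (x - Xbar) <= c contains at least beta of N_p(mu, Sigma)
    with probability gamma. By affine invariance the factor does not depend on
    mu or Sigma, so the standard normal case suffices."""
    c_needed = np.empty(reps)
    ref = rng.standard_normal((ref_size, p))        # stands in for the true distribution
    for k in range(reps):
        x = rng.standard_normal((N, p))
        xbar = x.mean(axis=0)
        s_inv = np.linalg.inv(np.cov(x, rowvar=False))
        d = np.einsum('ij,jk,ik->i', ref - xbar, s_inv, ref - xbar)
        c_needed[k] = np.quantile(d, beta)          # smallest c giving content beta
    return np.quantile(c_needed, gamma)             # level-gamma over repeated samples

N = 10
print("N =", N, " Monte Carlo c ~", round(tolerance_factor(N), 2),
      "  large-N limit chi2_2(0.90) =", round(chi2.ppf(0.90, 2), 2))
```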

  19. Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images

    PubMed Central

    Osadebey, Michael; Pedersen, Marius; Arnold, Douglas; Wendel-Mitoraj, Katrina

    2017-01-01

    We describe a postacquisition, attribute-based quality assessment method for brain magnetic resonance imaging (MRI) images. It is based on the application of Bayes theory to the relationship between entropy and image quality attributes. The entropy feature image of a slice is segmented into low- and high-entropy regions. For each entropy region, there are three separate observations of contrast, standard deviation, and sharpness quality attributes. A quality index for a quality attribute is the posterior probability of an entropy region given any corresponding region in a feature image where quality attribute is observed. Prior belief in each entropy region is determined from normalized total clique potential (TCP) energy of the slice. For TCP below the predefined threshold, the prior probability for a region is determined by deviation of its percentage composition in the slice from a standard normal distribution built from 250 MRI volume data provided by Alzheimer’s Disease Neuroimaging Initiative. For TCP above the threshold, the prior is computed using a mathematical model that describes the TCP–noise level relationship in brain MRI images. Our proposed method assesses the image quality of each entropy region and the global image. Experimental results demonstrate good correlation with subjective opinions of radiologists for different types and levels of quality distortions. PMID:28630885

  20. Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.

    PubMed

    Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar

    2010-09-01

    A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model in which the PDFs of the component states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of the vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by the patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
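
    The effect described (a change confined to the open-state variance showing up in the higher moments) can be illustrated with a two-state mixture sketch; all channel parameters below are invented, and the labels "control"/"treated" are only illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Two-state channel model: the recorded current is a mixture of a closed-state
# and an open-state normal distribution (all parameters illustrative).
def simulate_current(n, p_open, mu_open, sd_open, mu_closed=0.0, sd_closed=0.3):
    open_mask = rng.random(n) < p_open
    return np.where(open_mask,
                    rng.normal(mu_open, sd_open, n),
                    rng.normal(mu_closed, sd_closed, n))

# "Control" vs "treated" condition: only the open-state variance changes, which
# leaves the mean nearly unchanged but alters the higher moments of the PDF.
for label, sd_open in [("control", 0.6), ("treated", 1.2)]:
    i = simulate_current(200_000, p_open=0.4, mu_open=5.0, sd_open=sd_open)
    print(f"{label:8s} mean={i.mean():.2f} var={i.var():.2f} "
          f"skew={stats.skew(i):.2f} kurtosis={stats.kurtosis(i):.2f}")
```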

  1. RYR1-related rhabdomyolysis: A common but probably underdiagnosed manifestation of skeletal muscle ryanodine receptor dysfunction.

    PubMed

    Voermans, N C; Snoeck, M; Jungbluth, H

    2016-10-01

    Mutations in the skeletal muscle ryanodine receptor (RYR1) gene are associated with a wide spectrum of inherited myopathies presenting throughout life. Malignant hyperthermia susceptibility (MHS)-related RYR1 mutations have emerged as a common cause of exertional rhabdomyolysis, accounting for up to 30% of rhabdomyolysis episodes in otherwise healthy individuals. Common triggers are exercise and heat and, less frequently, viral infections, alcohol and drugs. Most subjects are normally strong and have no personal or family history of malignant hyperthermia. Heat intolerance and cold-induced muscle stiffness may be a feature. Recognition of this (probably not uncommon) rhabdomyolysis cause is vital for effective counselling, to identify potentially malignant hyperthermia-susceptible individuals and to adapt training regimes. Studies in various animal models provide insights regarding possible pathophysiological mechanisms and offer therapeutic perspectives. Copyright © 2016. Published by Elsevier Masson SAS.

  2. Evidence for a warm wind from the red star in symbiotic binaries

    NASA Technical Reports Server (NTRS)

    Friedjung, M.; Stencel, R. E.; Viotti, R.

    1983-01-01

    A systematic redshift of the high ionization resonance emission lines with respect to the intercombination lines is found from an examination of the ultraviolet spectra of symbiotic stars obtained with IUE. After consideration of other possibilities, this is most probably explained by photon scattering in an expanding envelope optically thick to the resonance lines. Line formation in a wind, or at the base of a wind is therefore suggested. Reasons are also given indicating line formation of the most ionized species in a region with an electron temperature of the order of 100,000 K, probably around the cool star. The behavior of the emission line width with ionization energy seems to support this model. The cool components of symbiotic stars appear to differ from normal red giants, which do not have winds of this temperature. An explanation in terms of a higher rotation velocity due to the binary nature of these stars is suggested.

  3. Bilateral Image Subtraction and Multivariate Models for the Automated Triaging of Screening Mammograms

    PubMed Central

    Celaya-Padilla, José; Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Galvan-Tejada, Jorge; Treviño, Victor; Tamez-Peña, José

    2015-01-01

    Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. This work presents a computer-aided diagnosis (CADx) method aimed at automatically triaging mammogram sets. The method coregisters the left and right mammograms, extracts image features, and classifies the subjects into those at risk of malignant calcifications (CS), those at risk of malignant masses (MS), and healthy subjects (HS). In this study, 449 subjects (197 CS, 207 MS, and 45 HS) from a public database were used to train and evaluate the CADx. Percentile-rank (p-rank) and z-normalizations were used. For the p-rank, the CS versus HS model achieved a cross-validation accuracy of 0.797 with an area under the receiver operating characteristic curve (AUC) of 0.882; the MS versus HS model obtained an accuracy of 0.772 and an AUC of 0.842. For the z-normalization, the CS versus HS model achieved an accuracy of 0.825 with an AUC of 0.882 and the MS versus HS model obtained an accuracy of 0.698 and an AUC of 0.807. The proposed method has the potential to rank cases with a high probability of malignant findings, aiding in the prioritization of radiologists' work lists. PMID:26240818
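
    The two feature normalizations named in the abstract, percentile rank (p-rank) and z-normalization, can be sketched as below; the feature matrix and its layout are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import rankdata

def z_normalize(X):
    """Column-wise z-normalization: zero mean and unit variance per feature."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def percentile_rank(X):
    """Column-wise percentile-rank (p-rank) normalization to (0, 1]."""
    return rankdata(X, axis=0) / X.shape[0]

X = np.random.default_rng(1).normal(size=(10, 4))   # toy feature matrix
print(z_normalize(X).mean(axis=0).round(6))          # ~0 for every feature
print(percentile_rank(X).max(axis=0))                # 1.0 for every feature
```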

  4. TU-A-BRD-01: Outcomes of Hypofractionated Treatments - Initial Results of the WGSBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X; Lee, P; Ohri, N

    2014-06-15

    Stereotactic Body Radiation Therapy (SBRT) has emerged in recent decades as a treatment paradigm that is becoming increasingly important in clinical practice. Clinical outcomes data are rapidly accumulating. Although published relations between outcomes and dose distributions are still sparse, the field has progressed to the point where evidence-based normal tissue dose-volume constraints, prescription strategies, and Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) models can be developed. The Working Group on SBRT (WGSBRT), under the Biological Effects Subcommittee of AAPM, is a group of physicists and physicians working in the area of SBRT. It is currently performing critical literature reviews to extract and synthesize usable data and to develop guidelines and models to aid with safe and effective treatment. The group is investigating clinically relevant findings from SBRT in six anatomical regions: Cranial, Head and Neck, Thoracic, Abdominal, Pelvic, and Spinal. In this session of AAPM 2014, interim results are presented on TCP for lung and liver, NTCP for thoracic organs, and radiobiological foundations:
    • Lung TCP: Detailed modeling of TCP data from 118 published studies on early stage lung SBRT investigates dose response and hypothesized mechanisms to explain the improved outcomes of SBRT. This is presented from the perspective of a physicist, a physician, and a radiobiologist.
    • Liver TCP: For primary and metastatic liver tumors, individual patient data were extracted from published reports to examine the effects of biologically effective dose on local control.
    • Thoracic NTCP: Clinically significant SBRT toxicities of lung, rib/chest wall and other structures are evaluated and compared among published clinical data, in terms of risk, risk factors, and safe practice.
    • Improving the clinical utility of published toxicity reports from SBRT and hypofractionated treatments: What do we want, and how do we get it? Methods and problems of synthesizing data from published reports.
    Learning Objectives: Common SBRT fractionation schemes and current evidence for efficacy. Evidence for normal tissue tolerances in hypofractionated treatments. Clinically relevant radiobiological effects at large fraction sizes.

  5. Predicting the probability of slip in gait: methodology and distribution study.

    PubMed

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
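
    The reduction to a single integral can be illustrated by writing the slip probability as P(available < required) and folding the inner integral into the cumulative distribution of the available friction; the distributions below are placeholders, not the study's fitted ones, and the integral is evaluated with the trapezoidal rule as described.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import lognorm, norm

# Placeholder distributions for the required (R) and available (A) friction.
f_R = norm(loc=0.22, scale=0.05).pdf      # PDF of required friction
F_A = lognorm(s=0.3, scale=0.45).cdf      # CDF of available friction

# P(slip) = P(A < R) = integral of f_R(r) * F_A(r) dr, a single integral,
# evaluated here with the trapezoidal rule.
r = np.linspace(0.0, 1.0, 2001)
p_slip = trapezoid(f_R(r) * F_A(r), r)
print(f"probability of slip per step ~ {p_slip:.4e}")
```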

  6. Tools for Basic Statistical Analysis

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from Two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curve-fit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
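
    As a worked example of the 'Normal Distribution from Two Data Points' calculation (a sketch of the underlying math, not the spreadsheet's actual formulas): given two values and their cumulative probabilities, the mean and standard deviation follow from two inverse-CDF equations.

```python
from scipy.stats import norm

def normal_from_two_points(x1, p1, x2, p2):
    """Recover (mean, SD) of a normal distribution from two (value, cumulative
    probability) pairs -- an illustrative helper, not the toolset's own code."""
    z1, z2 = norm.ppf(p1), norm.ppf(p2)
    sigma = (x2 - x1) / (z2 - z1)
    mu = x1 - sigma * z1
    return mu, sigma

# Example: the 10th percentile is 8.0 and the 90th percentile is 12.0
print(normal_from_two_points(8.0, 0.10, 12.0, 0.90))   # -> (10.0, ~1.56)
```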

  7. Theoretical Benefits of Dynamic Collimation in Pencil Beam Scanning Proton Therapy for Brain Tumors: Dosimetric and Radiobiological Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, Alexandra, E-mail: alexandra-moignier@uiowa.edu; Gelover, Edgar; Wang, Dongxu

    Purpose: To quantify the dosimetric benefit of using a dynamic collimation system (DCS) for penumbra reduction during the treatment of brain tumors by pencil beam scanning proton therapy (PBS PT). Methods and Materials: Collimated and uncollimated brain treatment plans were created for 5 patients previously treated with PBS PT and retrospectively enrolled in an institutional review board–approved study. The in-house treatment planning system, RDX, was used to generate the plans because it is capable of modeling both collimated and uncollimated beamlets. The clinically delivered plans were reproduced with uncollimated plans in terms of target coverage and organ at risk (OAR) sparing to ensure a clinically relevant starting point, and collimated plans were generated to improve the OAR sparing while maintaining target coverage. Physical and biological comparison metrics, such as dose distribution conformity, mean and maximum doses, normal tissue complication probability, and risk of secondary brain cancer, were used to evaluate the plans. Results: The DCS systematically improved the dose distribution conformity while preserving the target coverage. The average reductions of the mean dose to the 10-mm ring surrounding the target and to the healthy brain were 13.7% (95% confidence interval [CI] 11.6%-15.7%; P<.0001) and 25.1% (95% CI 16.8%-33.4%; P<.001), respectively. This yielded an average reduction of 24.8% (95% CI 0.8%-48.8%; P<.05) for the brain necrosis normal tissue complication probability using the Flickinger model, and 25.1% (95% CI 16.8%-33.4%; P<.001) for the risk of secondary brain cancer. A general improvement of the OAR sparing was also observed. Conclusion: The lateral penumbra reduction afforded by the DCS increases the normal tissue sparing capabilities of PBS PT for brain cancer treatment while preserving target coverage.

  8. Tensor products of process matrices with indefinite causal structure

    NASA Astrophysics Data System (ADS)

    Jia, Ding; Sakharwade, Nitica

    2018-03-01

    Theories with indefinite causal structure have been studied from both the fundamental perspective of quantum gravity and the practical perspective of information processing. In this paper we point out a restriction in forming tensor products of objects with indefinite causal structure in certain models: there exist both classical and quantum objects the tensor products of which violate the normalization condition of probabilities, if all local operations are allowed. We obtain a necessary and sufficient condition for when such unrestricted tensor products of multipartite objects are (in)valid. This poses a challenge to extending communication theory to indefinite causal structures, as the tensor product is the fundamental ingredient in the asymptotic setting of communication theory. We discuss a few options to evade this issue. In particular, we show that the sequential asymptotic setting does not suffer the violation of normalization.

  9. Nonclassical thermal-state superpositions: Analytical evolution law and decoherence behavior

    NASA Astrophysics Data System (ADS)

    Meng, Xiang-guo; Goan, Hsi-Sheng; Wang, Ji-suo; Zhang, Ran

    2018-03-01

    Employing the integration technique within normal products of bosonic operators, we present normal product representations of thermal-state superpositions and investigate their nonclassical features, such as quadrature squeezing, sub-Poissonian distribution, and partial negativity of the Wigner function. We also analytically and numerically investigate their evolution law and decoherence characteristics in an amplitude-decay model via the variations of the probability distributions and the negative volumes of Wigner functions in phase space. The results indicate that the evolution formulas of two thermal component states for amplitude decay can be viewed as the same integral form as a displaced thermal state ρ(V , d) , but governed by the combined action of photon loss and thermal noise. In addition, the larger values of the displacement d and noise V lead to faster decoherence for thermal-state superpositions.

  10. Quantum probability rule: a generalization of the theorems of Gleason and Busch

    NASA Astrophysics Data System (ADS)

    Barnett, Stephen M.; Cresser, James D.; Jeffers, John; Pegg, David T.

    2014-04-01

    Busch's theorem deriving the standard quantum probability rule can be regarded as a more general form of Gleason's theorem. Here we show that a further generalization is possible by reducing the number of quantum postulates used by Busch. We do not assume that the positive measurement outcome operators are effects or that they form a probability operator measure. We derive a more general probability rule from which the standard rule can be obtained from the normal laws of probability when there is no measurement outcome information available, without the need for further quantum postulates. Our general probability rule has prediction-retrodiction symmetry and we show how it may be applied in quantum communications and in retrodictive quantum theory.

  11. Estimating probabilities of reservoir storage for the upper Delaware River basin

    USGS Publications Warehouse

    Hirsch, Robert M.

    1981-01-01

    A technique for estimating conditional probabilities of reservoir system storage is described and applied to the upper Delaware River Basin. The results indicate that there is a 73 percent probability that the three major New York City reservoirs (Pepacton, Cannonsville, and Neversink) would be full by June 1, 1981, and only a 9 percent probability that storage would return to the 'drought warning' sector of the operations curve sometime in the next year. In contrast, if restrictions are lifted and there is an immediate return to normal operating policies, the probability of the reservoir system being full by June 1 is 37 percent and the probability that storage would return to the 'drought warning' sector in the next year is 30 percent. (USGS)

  12. Spatial analysis and hazard assessment on soil total nitrogen in the middle subtropical zone of China

    NASA Astrophysics Data System (ADS)

    Lu, Peng; Lin, Wenpeng; Niu, Zheng; Su, Yirong; Wu, Jinshui

    2006-10-01

    Nitrogen (N) is one of the main factors affecting environmental pollution. In recent years, non-point source pollution and water body eutrophication have become increasing concerns for both scientists and policy-makers. In order to assess the environmental hazard of soil total N pollution, a typical ecological unit was selected as the experimental site. This paper showed that the Box-Cox transformation achieved normality in the data set and dampened the effect of outliers. The best theoretical model of soil total N was a Gaussian model. Spatial variability of soil total N in the NE60° and NE150° directions showed that it had a strip anisotropic structure. The ordinary kriging estimate of soil total N concentration was mapped. The spatial distribution pattern of soil total N in the NE150° direction displayed a strip-shaped structure. Kriging standard deviations (KSD) provided valuable information that will increase the accuracy of total N mapping. The probability kriging method is useful for assessing the hazard of N pollution because it provides the conditional probability of the N concentration exceeding the threshold value, taken here as soil total N > 2.0 g/kg. The probability distribution of soil total N will be helpful for hazard assessment, optimal fertilization, and the development of management practices to control non-point sources of N pollution.
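
    A minimal sketch of two of the statistical steps named here, the Box-Cox transformation toward normality and the probability of exceeding the 2.0 g/kg threshold, is given below; the data are synthetic and the exceedance probability uses a simple normal model in transformed space as a stand-in for the kriging-based estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
total_n = rng.lognormal(mean=0.4, sigma=0.5, size=200)   # synthetic soil total N, g/kg

# Box-Cox transformation toward normality (lambda estimated by maximum likelihood)
transformed, lam = stats.boxcox(total_n)
print("Box-Cox lambda:", round(lam, 3))

# Probability of exceeding the 2.0 g/kg threshold under a normal model fitted to
# the transformed values (a stand-in for the kriging-based conditional probability).
threshold_t = stats.boxcox(np.array([2.0]), lmbda=lam)[0]
mu, sd = transformed.mean(), transformed.std(ddof=1)
print("P(total N > 2.0 g/kg) ~", round(1.0 - stats.norm.cdf(threshold_t, mu, sd), 3))
```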

  13. The effect of 6 and 15 MV on intensity-modulated radiation therapy prostate cancer treatment: plan evaluation, tumour control probability and normal tissue complication probability analysis, and the theoretical risk of secondary induced malignancies

    PubMed Central

    Hussein, M; Aldridge, S; Guerrero Urbano, T; Nisbet, A

    2012-01-01

    Objective The aim of this study was to investigate the effect of 6 and 15-MV photon energies on intensity-modulated radiation therapy (IMRT) prostate cancer treatment plan outcome and to compare the theoretical risks of secondary induced malignancies. Methods Separate prostate cancer IMRT plans were prepared for 6 and 15-MV beams. Organ-equivalent doses were obtained through thermoluminescent dosemeter measurements in an anthropomorphic Aldersen radiation therapy human phantom. The neutron dose contribution at 15 MV was measured using polyallyl-diglycol-carbonate neutron track etch detectors. Risk coefficients from the International Commission on Radiological Protection Report 103 were used to compare the risk of fatal secondary induced malignancies in out-of-field organs and tissues for 6 and 15 MV. For the bladder and the rectum, a comparative evaluation of the risk using three separate models was carried out. Dose–volume parameters for the rectum, bladder and prostate planning target volume were evaluated, as well as normal tissue complication probability (NTCP) and tumour control probability calculations. Results There is a small increased theoretical risk of developing a fatal cancer from 6 MV compared with 15 MV, taking into account all the organs. Dose–volume parameters for the rectum and bladder show that 15 MV results in better volume sparing in the regions below 70 Gy, but the volume exposed increases slightly beyond this in comparison with 6 MV, resulting in a higher NTCP for the rectum of 3.6% vs 3.0% (p=0.166). Conclusion The choice to treat using IMRT at 15 MV should not be excluded, but should be based on risk vs benefit while considering the age and life expectancy of the patient together with the relative risk of radiation-induced cancer and NTCPs. PMID:22010028

  14. Applications of conformal field theory to problems in 2D percolation

    NASA Astrophysics Data System (ADS)

    Simmons, Jacob Joseph Harris

    This thesis explores critical two-dimensional percolation in bounded regions in the continuum limit. The main method which we employ is conformal field theory (CFT). Our specific results follow from the null-vector structure of the c = 0 CFT that applies to critical two-dimensional percolation. We also make use of the duality symmetry obeyed at the percolation point, and the fact that percolation may be understood as the q-state Potts model in the limit q → 1. Our first results describe the correlations between points in the bulk and boundary intervals or points, i.e. the probability that the various points or intervals are in the same percolation cluster. These quantities correspond to order-parameter profiles under the given conditions, or cluster connection probabilities. We consider two specific cases: an anchoring interval, and two anchoring points. We derive results for these and related geometries using the CFT null-vectors for the corresponding boundary condition changing (bcc) operators. In addition, we exhibit several exact relationships between these probabilities. These relations between the various bulk-boundary connection probabilities involve parameters of the CFT called operator product expansion (OPE) coefficients. We then compute several of these OPE coefficients, including those arising in our new probability relations. Beginning with the familiar CFT operator φ1,2, which corresponds to a free-fixed spin boundary change in the q-state Potts model, we then develop physical interpretations of the bcc operators. We argue that, when properly normalized, higher-order bcc operators correspond to successive fusions of multiple φ1,2, operators. Finally, by identifying the derivative of φ1,2 with the operator φ1,4, we derive several new quantities called first crossing densities. These new results are then combined and integrated to obtain the three previously known crossing quantities in a rectangle: the probability of a horizontal crossing cluster, the probability of a cluster crossing both horizontally and vertically, and the expected number of horizontal crossing clusters. These three results were known to be solutions to a certain fifth-order differential equation, but until now no physically meaningful explanation had appeared. This differential equation arises naturally in our derivation.

  15. Development of a score and probability estimate for detecting angle closure based on anterior segment optical coherence tomography.

    PubMed

    Nongpiur, Monisha E; Haaland, Benjamin A; Perera, Shamira A; Friedman, David S; He, Mingguang; Sakata, Lisandro M; Baskaran, Mani; Aung, Tin

    2014-01-01

    To develop a score along with an estimated probability of disease for detecting angle closure based on anterior segment optical coherence tomography (AS OCT) imaging. Cross-sectional study. A total of 2047 subjects 50 years of age and older were recruited from a community polyclinic in Singapore. All subjects underwent standardized ocular examination including gonioscopy and imaging by AS OCT (Carl Zeiss Meditec). Customized software (Zhongshan Angle Assessment Program) was used to measure AS OCT parameters. Complete data were available for 1368 subjects. Data from the right eyes were used for analysis. A stepwise logistic regression model with Akaike information criterion was used to generate a score that then was converted to an estimated probability of the presence of gonioscopic angle closure, defined as the inability to visualize the posterior trabecular meshwork for at least 180 degrees on nonindentation gonioscopy. Of the 1368 subjects, 295 (21.6%) had gonioscopic angle closure. The angle closure score was calculated from the shifted linear combination of the AS OCT parameters. The score can be converted to an estimated probability of having angle closure using the relationship: estimated probability = e^score/(1 + e^score), where e is the natural exponential. The score performed well in a second independent sample of 178 angle-closure subjects and 301 normal controls, with an area under the receiver operating characteristic curve of 0.94. A score derived from a single AS OCT image, coupled with an estimated probability, provides an objective platform for detection of angle closure. Copyright © 2014 Elsevier Inc. All rights reserved.
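
    The score-to-probability conversion quoted in the abstract is the standard inverse-logit transform; a small sketch follows, in which the score coefficients are hypothetical and not the published model.

```python
import math

def estimated_probability(score):
    """Convert a logistic-regression score to an estimated probability:
    p = e^score / (1 + e^score)."""
    return math.exp(score) / (1.0 + math.exp(score))

# Hypothetical score built from AS OCT parameters (weights are illustrative only)
score = -2.1 + 0.8 * 1.2 + 0.5 * (-0.4)
print(estimated_probability(score))   # estimated probability of angle closure
```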

  16. Negative serum carcinoembryonic antigen has insufficient accuracy for excluding recurrence from patients with Dukes C colorectal cancer: analysis with likelihood ratio and posttest probability in a follow-up study.

    PubMed

    Hara, Masayasu; Kanemitsu, Yukihide; Hirai, Takashi; Komori, Koji; Kato, Tomoyuki

    2008-11-01

    This study was designed to determine the efficacy of carcinoembryonic antigen (CEA) monitoring for screening patients with colorectal cancer by using the posttest probability of recurrence. For this study, 348 patients (preoperative serum CEA level elevated: CEA+, n = 119; or normal: CEA-, n = 229) who had undergone potentially curative surgery for colorectal cancer were enrolled. After five-year follow-up with measurements of serum CEA levels and imaging workup, posttest probabilities of recurrence were calculated. Recurrence was observed in 39 percent of CEA+ patients and 30 percent of CEA- patients, and CEA levels were elevated in 33.3 percent of CEA+ patients and 17.5 percent of CEA- patients. With the obtained sensitivity (68.4 percent, CEA+; 41 percent, CEA-), specificity (83 percent, CEA+; 91 percent, CEA-) and likelihood ratios (test positive: 4.0, CEA+; 4.4, CEA-; test negative: 0.38, CEA+; 0.66, CEA-), the posttest probability of recurrence given the presence of CEA elevation was 72.2 percent in the CEA+ group and 65.5 percent in the CEA- group, and that given the absence of CEA elevation was 20 and 22.2 percent, respectively. Whereas postoperative CEA elevation indicates recurrence with high probability, a normal postoperative CEA is not useful for excluding recurrence.
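
    The posttest probabilities reported here follow the usual pretest-odds times likelihood-ratio calculation; the sketch below reuses the abstract's own numbers for the CEA+ group and is not a reimplementation of the study's analysis.

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Posttest probability via odds: posttest odds = pretest odds x LR."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# CEA+ group: 39% recurrence (pretest probability), LR+ = 4.0, LR- = 0.38
print(posttest_probability(0.39, 4.0))    # ~0.72, as reported
print(posttest_probability(0.39, 0.38))   # ~0.20, as reported
```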

  17. Correlation between the clinical pretest probability score and the lung ventilation and perfusion scan probability.

    PubMed

    Bhoobalan, Shanmugasundaram; Chakravartty, Riddhika; Dolbear, Gill; Al-Janabi, Mazin

    2013-10-01

    The aim of the study was to determine the accuracy of the clinical pretest probability (PTP) score and its association with the lung ventilation and perfusion (VQ) scan. A retrospective analysis of 510 patients who had a lung VQ scan between 2008 and 2010 was performed. Of the 510 studies, the numbers of normal, low, and high probability VQ scans were 155 (30%), 289 (57%), and 55 (11%), respectively. A total of 103 patients underwent computed tomography pulmonary angiography (CTPA), of whom 21 (20%) had a positive scan, 81 (79%) had a negative scan and one (1%) had an equivocal result. The rates of PE in the normal, low-probability, and high-probability scan categories were 2 (9.5%), 10 (47.5%), and 9 (43%), respectively. There was a very low correlation (Pearson correlation coefficient r = 0.20) between the clinical PTP score and the lung VQ scan. The area under the curve (AUC) of the clinical PTP score was 52% when compared with the CTPA results. However, the accuracy of the lung VQ scan was better (AUC = 74%) when compared with the CTPA scan. The clinical PTP score is unreliable on its own; however, it may still aid in the interpretation of the lung VQ scan. The accuracy of the lung VQ scan was better in the assessment of underlying pulmonary embolism (PE).

  18. Correlation between the clinical pretest probability score and the lung ventilation and perfusion scan probability

    PubMed Central

    Bhoobalan, Shanmugasundaram; Chakravartty, Riddhika; Dolbear, Gill; Al-Janabi, Mazin

    2013-01-01

    Purpose: The aim of the study was to determine the accuracy of the clinical pretest probability (PTP) score and its association with the lung ventilation and perfusion (VQ) scan. Materials and Methods: A retrospective analysis of 510 patients who had a lung VQ scan between 2008 and 2010 was performed. Of the 510 studies, the numbers of normal, low, and high probability VQ scans were 155 (30%), 289 (57%), and 55 (11%), respectively. Results: A total of 103 patients underwent computed tomography pulmonary angiography (CTPA), of whom 21 (20%) had a positive scan, 81 (79%) had a negative scan and one (1%) had an equivocal result. The rates of PE in the normal, low-probability, and high-probability scan categories were 2 (9.5%), 10 (47.5%), and 9 (43%), respectively. There was a very low correlation (Pearson correlation coefficient r = 0.20) between the clinical PTP score and the lung VQ scan. The area under the curve (AUC) of the clinical PTP score was 52% when compared with the CTPA results. However, the accuracy of the lung VQ scan was better (AUC = 74%) when compared with the CTPA scan. Conclusion: The clinical PTP score is unreliable on its own; however, it may still aid in the interpretation of the lung VQ scan. The accuracy of the lung VQ scan was better in the assessment of underlying pulmonary embolism (PE). PMID:24379532

  19. Monte Carlo role in radiobiological modelling of radiotherapy outcomes

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Pater, Piotr; Seuntjens, Jan

    2012-06-01

    Radiobiological models are essential components of modern radiotherapy. They are increasingly applied to optimize and evaluate the quality of different treatment planning modalities. They are frequently used in designing new radiotherapy clinical trials by estimating the expected therapeutic ratio of new protocols. In radiobiology, the therapeutic ratio is estimated from the expected gain in tumour control probability (TCP) to the risk of normal tissue complication probability (NTCP). However, estimates of TCP/NTCP are currently based on the deterministic and simplistic linear-quadratic formalism with limited prediction power when applied prospectively. Given the complex and stochastic nature of the physical, chemical and biological interactions associated with spatial and temporal radiation induced effects in living tissues, it is conjectured that methods based on Monte Carlo (MC) analysis may provide better estimates of TCP/NTCP for radiotherapy treatment planning and trial design. Indeed, over the past few decades, methods based on MC have demonstrated superior performance for accurate simulation of radiation transport, tumour growth and particle track structures; however, successful application of modelling radiobiological response and outcomes in radiotherapy is still hampered with several challenges. In this review, we provide an overview of some of the main techniques used in radiobiological modelling for radiotherapy, with focus on the MC role as a promising computational vehicle. We highlight the current challenges, issues and future potentials of the MC approach towards a comprehensive systems-based framework in radiobiological modelling for radiotherapy.

  20. Radiobiological modeling of two stereotactic body radiotherapy schedules in patients with stage I peripheral non-small cell lung cancer.

    PubMed

    Huang, Bao-Tian; Lin, Zhu; Lin, Pei-Xian; Lu, Jia-Yang; Chen, Chuang-Zhen

    2016-06-28

    This study aims to compare the radiobiological response of two stereotactic body radiotherapy (SBRT) schedules for patients with stage I peripheral non-small cell lung cancer (NSCLC) using radiobiological modeling methods. Volumetric modulated arc therapy (VMAT)-based SBRT plans were designed using two dose schedules of 1 × 34 Gy (34 Gy in 1 fraction) and 4 × 12 Gy (48 Gy in 4 fractions) for 19 patients diagnosed with primary stage I NSCLC. Doses to the gross target volume (GTV), planning target volume (PTV), lung and chest wall (CW) were converted to the biologically equivalent dose in 2-Gy fractions (EQD2) for comparison. Five different radiobiological models were employed to predict the tumor control probability (TCP) value. Three additional models were utilized to estimate the normal tissue complication probability (NTCP) value for the lung and the modified equivalent uniform dose (mEUD) value to the CW. Our results indicate that the 1 × 34 Gy dose schedule provided a higher EQD2 dose to the tumor, lung and CW. Radiobiological modeling revealed that the TCP value for the tumor, the NTCP value for the lung and the mEUD value for the CW were 7.4% (in absolute value), 7.2% (in absolute value) and 71.8% (in relative value) higher on average, respectively, with the 1 × 34 Gy dose schedule.
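
    The EQD2 conversion used for the comparison is, in its usual linear-quadratic form, EQD2 = D·(d + α/β)/(2 + α/β), with D the total dose and d the dose per fraction; the sketch below applies it to the two schedules from the abstract, with α/β = 10 Gy as a commonly assumed tumour value rather than a parameter taken from the study.

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Biologically equivalent dose in 2-Gy fractions (linear-quadratic model)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# The two SBRT schedules compared in the abstract; alpha/beta = 10 Gy is an
# assumed generic tumour value, not a parameter reported by the study.
print(eqd2(34.0, 34.0, alpha_beta=10.0))   # 1 x 34 Gy -> ~124.7 Gy EQD2
print(eqd2(48.0, 12.0, alpha_beta=10.0))   # 4 x 12 Gy -> 88.0 Gy EQD2
```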

  1. Mapping risk of plague in Qinghai-Tibetan Plateau, China.

    PubMed

    Qian, Quan; Zhao, Jian; Fang, Liqun; Zhou, Hang; Zhang, Wenyi; Wei, Lan; Yang, Hong; Yin, Wenwu; Cao, Wuchun; Li, Qun

    2014-07-10

    The Qinghai-Tibetan Plateau of China is a known plague-endemic region where the marmot (Marmota himalayana) is the primary host. Human plague cases have a relatively low incidence but high mortality, which presents unique surveillance and public health challenges, because early detection through surveillance may not always be feasible and infrequent clinical cases may be misdiagnosed. Based on plague surveillance data and environmental variables, Maxent was applied to model the presence probability of the plague host. 75% of the occurrence points were randomly selected for model training, and the remaining 25% were used for model testing and validation. Maxent model performance was measured as test gain and test AUC. The optimal probability cut-off value was chosen by maximizing training sensitivity and specificity simultaneously. We used field surveillance data in an ecological niche modeling (ENM) framework to depict the spatial distribution of natural foci of plague in the Qinghai-Tibetan Plateau. Most human-inhabited areas at risk of exposure to enzootic plague are distributed in the east and south of the Plateau. Elevation, land surface temperature and the normalized difference vegetation index play a large part in determining the distribution of enzootic plague. This study provides a more detailed view of the spatial pattern of enzootic plague and of human-inhabited areas at risk of plague. The maps could help public health authorities decide where to perform plague surveillance and take preventive measures in the Qinghai-Tibetan Plateau.

  2. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
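
    Test (i), comparing the number of actual earthquakes with the number predicted, can be sketched as below under an assumed Poisson counting model; the Poisson assumption and the numbers are ours, not the paper's.

```python
from scipy.stats import poisson

# Hypothetical forecast: expected number of events in the test window, and the
# number actually observed.  The Poisson counting model is an added assumption.
predicted_rate = 8.5
observed_count = 14

# Tail probability of observing at least this many events if the forecast holds
p_value = poisson.sf(observed_count - 1, predicted_rate)
print(f"P(N >= {observed_count} | predicted rate {predicted_rate}) = {p_value:.3f}")
```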

  3. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  4. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I₀, the gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  5. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I₀, the gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  6. Impact of Physician BMI on Obesity Care and Beliefs

    PubMed Central

    Bleich, Sara N.; Bennett, Wendy L.; Gudzune, Kimberly A.; Cooper, Lisa A.

    2013-01-01

    Using a national cross-sectional survey of 500 primary care physicians conducted between 9 February and 1 March 2011, the objective of this study was to assess the impact of physician BMI on obesity care, physician self-efficacy, perceptions of role-modeling weight-related health behaviors, and perceptions of patient trust in weight loss advice. We found that physicians with normal BMI were more likely to engage their obese patients in weight loss discussions as compared to overweight/obese physicians (30% vs. 18%, P = 0.010). Physicians with normal BMI had greater confidence in their ability to provide diet (53% vs. 37%, P = 0.002) and exercise counseling (56% vs. 38%, P = 0.001) to their obese patients. A higher percentage of normal BMI physicians believed that overweight/obese patients would be less likely to trust weight loss advice from overweight/obese doctors (80% vs. 69%, P = 0.02). Physicians in the normal BMI category were more likely to believe that physicians should model healthy weight-related behaviors—maintaining a healthy weight (72% vs. 56%, P = 0.002) and exercising regularly (73% vs. 57%, P = 0.001). The probability of a physician recording an obesity diagnosis (93% vs. 7%, P < 0.001) or initiating a weight loss conversation (89% vs. 11%, P ≤ 0.001) with their obese patients was higher when the physicians’ perception of the patients’ body weight met or exceeded their own personal body weight. These results suggest that more normal weight physicians provided recommended obesity care to their patients and felt confident doing so. PMID:22262162

  7. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
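
    The kind of simulation described can be sketched as a simple accumulate-and-fire loop driven by noise of the three shapes listed; the threshold and the moment-matched parameter values below are arbitrary choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def interspike_intervals(draw_noise, n_steps=50_000, threshold=20.0):
    """Accumulate noisy input each time step; fire and reset when the
    accumulator crosses threshold; return the intervals between firings."""
    intervals, accumulator, last_spike = [], 0.0, 0
    for t, x in enumerate(draw_noise(n_steps)):
        accumulator += x
        if accumulator >= threshold:
            intervals.append(t - last_spike)
            last_spike, accumulator = t, 0.0
    return np.asarray(intervals)

mean, sd = 1.0, 1.0   # match the first two moments across the three noise sources
noises = {
    "normal":  lambda n: rng.normal(mean, sd, n),
    "gamma":   lambda n: rng.gamma(1.0, 1.0, n),     # first-order gamma (skewed)
    "uniform": lambda n: rng.uniform(mean - np.sqrt(3) * sd, mean + np.sqrt(3) * sd, n),
}
for name, draw in noises.items():
    iv = interspike_intervals(draw)
    print(name, "mean interval:", iv.mean().round(2), "CV:", (iv.std() / iv.mean()).round(3))
```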

  8. Identification of Patient Benefit From Proton Therapy for Advanced Head and Neck Cancer Patients Based on Individual and Subgroup Normal Tissue Complication Probability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakobi, Annika, E-mail: Annika.Jakobi@OncoRay.de; Bandurska-Luque, Anna; Department of Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden

    Purpose: The purpose of this study was to determine, by treatment plan comparison along with normal tissue complication probability (NTCP) modeling, whether a subpopulation of patients with head and neck squamous cell carcinoma (HNSCC) could be identified that would gain substantial benefit from proton therapy in terms of NTCP. Methods and Materials: For 45 HNSCC patients, intensity modulated radiation therapy (IMRT) was compared to intensity modulated proton therapy (IMPT). Physical dose distributions were evaluated as well as the resulting NTCP values, using modern models for acute mucositis, xerostomia, aspiration, dysphagia, laryngeal edema, and trismus. Patient subgroups were defined based on primary tumor location. Results: Generally, IMPT reduced the NTCP values while keeping similar target coverage for all patients. Subgroup analyses revealed a higher individual reduction of swallowing-related side effects by IMPT for patients with tumors in the upper head and neck area, whereas the risk reduction of acute mucositis was more pronounced in patients with tumors in the larynx region. More patients with tumors in the upper head and neck area had a reduction in NTCP of more than 10%. Conclusions: Subgrouping can help to identify patients who may benefit more than others from the use of IMPT and, thus, can be a useful tool for a preselection of patients in the clinic where there are limited PT resources. Because the individual benefit differs within a subgroup, the relative merits should additionally be evaluated by individual treatment plan comparisons.

  9. Real time near-infrared Raman spectroscopy for the diagnosis of nasopharyngeal cancer.

    PubMed

    Ming, Lim Chwee; Gangodu, Nagaraja Rao; Loh, Thomas; Zheng, Wei; Wang, Jianfeng; Lin, Kan; Zhiwei, Huang

    2017-07-25

    Near-infrared (NIR) Raman spectroscopy has been investigated as a tool to differentiate nasopharyngeal cancer (NPC) from normal nasopharyngeal tissue in an ex-vivo setting. Recently, we have miniaturized the fiber-optic Raman probe to investigate its utility in real-time in-vivo surveillance of NPC patients. A posterior probability model using the partial least squares (PLS) technique was constructed to verify the sensitivity and specificity of Raman spectroscopy in diagnosing NPC from post-irradiated and normal tissue, using a diagnostic algorithm based on three significant latent variables. NIR Raman signals were measured at 135 sites in 79 patients with either newly diagnosed NPC (N = 12), a post-irradiated nasopharynx (N = 37) or a normal nasopharynx (N = 30). The mean Raman spectra showed differences at several peaks (853 cm⁻¹, 940 cm⁻¹, 1078 cm⁻¹, 1335 cm⁻¹, 1554 cm⁻¹, 2885 cm⁻¹ and 2940 cm⁻¹) across the three different nasopharyngeal conditions. The sensitivity and specificity for distinguishing Raman signatures of the normal nasopharynx versus NPC and of the post-irradiated nasopharynx versus NPC were 91% and 95%, and 77% and 96%, respectively. Real-time near-infrared Raman spectroscopy has a high specificity in distinguishing malignant from normal nasopharyngeal tissue in vivo, and may be investigated as a novel non-invasive surveillance tool in patients with nasopharyngeal cancer.

  10. Statistical Characterization of the Mechanical Parameters of Intact Rock Under Triaxial Compression: An Experimental Proof of the Jinping Marble

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo

    2016-12-01

    We investigated the statistical characteristics and probability distributions of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution forms of several important mechanical parameters, including deformational parameters, characteristic strengths, characteristic strains, and failure angle. The statistical proofs relating to the mechanical parameters of rock presented new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing the random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.
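
    The distribution checks reported here can be sketched as fitting normal and log-normal candidates to a strength sample and comparing goodness of fit; the strength values below are synthetic, not the Jinping marble data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
peak_strength = rng.lognormal(mean=np.log(160.0), sigma=0.08, size=20)   # synthetic, MPa

for name, dist, fit_kwargs in [("normal", stats.norm, {}),
                               ("log-normal", stats.lognorm, {"floc": 0})]:
    params = dist.fit(peak_strength, **fit_kwargs)
    ks = stats.kstest(peak_strength, dist.cdf, args=params)
    print(f"{name:10s}  K-S statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

cv = peak_strength.std(ddof=1) / peak_strength.mean()
print("coefficient of variation of peak strength:", round(cv, 3))
```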

  11. A new plan-scoring method using normal tissue complication probability for personalized treatment plan decisions in prostate cancer

    NASA Astrophysics Data System (ADS)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan

    2018-01-01

    The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans of a clearly distinguishable group (n = 7) and a similar group (n = 12), using treatment plan verification software that we developed. The quality factor (QF) of our support software for treatment decisions was consistent with the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7) and the similar group (average QF = 1.058, 33% match rate, n = 12). Therefore, we propose a normal tissue complication probability (NTCP)-based plan-scoring index for the verification of different plans in personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based QF scoring method was adequate for obtaining biological verification quality and organ-at-risk sparing using the treatment-planning decision-support software we developed for prostate cancer.

  12. Endocannabinoids control vesicle release mode at midbrain periaqueductal grey inhibitory synapses.

    PubMed

    Aubrey, Karin R; Drew, Geoffrey M; Jeong, Hyo-Jin; Lau, Benjamin K; Vaughan, Christopher W

    2017-01-01

    The midbrain periaqueductal grey (PAG) forms part of an endogenous analgesic system which is tightly regulated by the neurotransmitter GABA. The role of endocannabinoids in regulating GABAergic control of this system was examined in rat PAG slices. Under basal conditions GABAergic neurotransmission onto PAG output neurons was multivesicular. Activation of the endocannabinoid system reduced GABAergic inhibition by reducing the probability of release and by shifting release to a univesicular mode. Blockade of the endocannabinoid system unmasked a tonic control over the probability and mode of GABA release. These findings provide a mechanistic foundation for the control of the PAG analgesic system by disinhibition. The midbrain periaqueductal grey (PAG) has a crucial role in coordinating endogenous analgesic responses to physiological and psychological stressors. Endocannabinoids are thought to mediate a form of stress-induced analgesia within the PAG by relieving GABAergic inhibition of output neurons, a process known as disinhibition. This disinhibition is thought to be achieved by a presynaptic reduction in GABA release probability. We examined whether other mechanisms have a role in endocannabinoid modulation of GABAergic synaptic transmission within the rat PAG. The group I mGluR agonist DHPG ((R,S)-3,5-dihydroxyphenylglycine) inhibited evoked IPSCs and increased their paired pulse ratio in normal external Ca²⁺ and when release probability was reduced by lowering Ca²⁺. However, the effect of DHPG on the coefficient of variation and kinetics of evoked IPSCs differed between normal and low Ca²⁺. Lowering external Ca²⁺ had a similar effect on evoked IPSCs to that observed for DHPG in normal external Ca²⁺. The low-affinity GABA(A) receptor antagonist TPMPA ((1,2,5,6-tetrahydropyridin-4-yl)methylphosphinic acid) inhibited evoked IPSCs to a greater extent in low than in normal Ca²⁺. Together these findings indicate that the normal mode of GABA release is multivesicular within the PAG, and that DHPG and lowering external Ca²⁺ switch this to a univesicular mode. The effects of DHPG were mediated by mGlu5 receptor engagement of the retrograde endocannabinoid system. Blockade of endocannabinoid breakdown produced a similar shift in the mode of release. We conclude that endocannabinoids control both the mode and the probability of GABA release within the PAG. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  13. Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2008-11-01

    Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
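
    For orientation, the LQ-Poisson TCP referred to here has the familiar closed form TCP = exp(−N₀·SF) with SF = exp(−(αD + βdD)) for a uniform total dose D delivered in fractions of size d; the sketch below evaluates it for generic parameter values, not those analysed in the paper.

```python
import numpy as np

def lq_poisson_tcp(total_dose, dose_per_fraction, alpha, beta, n_clonogens):
    """LQ-Poisson tumour control probability for a uniform dose:
    surviving fraction SF = exp(-(alpha*D + beta*d*D)), TCP = exp(-N0*SF)."""
    sf = np.exp(-(alpha * total_dose + beta * dose_per_fraction * total_dose))
    return np.exp(-n_clonogens * sf)

# Generic illustrative values: alpha = 0.3 /Gy, beta = 0.03 /Gy^2, 1e7 clonogens
for d_total in (50.0, 60.0, 70.0):
    tcp = lq_poisson_tcp(d_total, 2.0, alpha=0.3, beta=0.03, n_clonogens=1e7)
    print(f"{d_total:.0f} Gy in 2 Gy fractions -> TCP = {tcp:.3f}")
```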

  14. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation

    PubMed Central

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K.

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348

  15. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    PubMed

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.

  16. Application of Archimedean copulas to the impact assessment of hydro-climatic variables in semi-arid aquifers of western India

    NASA Astrophysics Data System (ADS)

    Wable, Pawan S.; Jha, Madan K.

    2018-02-01

    The effects of rainfall and the El Niño Southern Oscillation (ENSO) on groundwater in a semi-arid basin of India were analyzed using Archimedean copulas considering 17 years of data for monsoon rainfall, post-monsoon groundwater level (PMGL) and ENSO Index. The evaluated dependence among these hydro-climatic variables revealed that PMGL-Rainfall and PMGL-ENSO Index pairs have significant dependence. Hence, these pairs were used for modeling dependence by employing four types of Archimedean copulas: Ali-Mikhail-Haq, Clayton, Gumbel-Hougaard, and Frank. For the copula modeling, the results of probability distributions fitting to these hydro-climatic variables indicated that the PMGL and rainfall time series are best represented by Weibull and lognormal distributions, respectively, while the non-parametric kernel-based normal distribution is the most suitable for the ENSO Index. Further, the PMGL-Rainfall pair is best modeled by the Clayton copula, and the PMGL-ENSO Index pair is best modeled by the Frank copula. The Clayton copula-based conditional probability of PMGL being less than or equal to its average value at a given mean rainfall is above 70% for 33% of the study area. In contrast, the spatial variation of the Frank copula-based probability of PMGL being less than or equal to its average value is 35-40% in 23% of the study area during El Niño phase, while it is below 15% in 35% of the area during the La Niña phase. This copula-based methodology can be applied under data-scarce conditions for exploring the impacts of rainfall and ENSO on groundwater at basin scales.
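
    As a rough illustration of the copula-based conditional probability reported above, the sketch below evaluates P(V <= v | U = u) for a Clayton copula by differentiating its CDF with respect to u (the h-function); the dependence parameter and the marginal arguments are placeholders, not values estimated in the study.

        import numpy as np

        def clayton_cdf(u, v, theta):
            """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
            return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

        def clayton_conditional(v, u, theta):
            """P(V <= v | U = u) = dC/du for the Clayton copula (the 'h-function')."""
            return u**(-theta - 1.0) * (u**(-theta) + v**(-theta) - 1.0)**(-(1.0 + theta) / theta)

        # Hypothetical example: probability that groundwater level is at or below its
        # average (v = 0.5 on the uniform scale) given rainfall at its mean (u = 0.5),
        # for an assumed Clayton dependence parameter theta = 2.0.
        print(clayton_conditional(v=0.5, u=0.5, theta=2.0))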

  17. Assessment and quantification of patient set-up errors in nasopharyngeal cancer patients and their biological and dosimetric impact in terms of generalized equivalent uniform dose (gEUD), tumour control probability (TCP) and normal tissue complication probability (NTCP).

    PubMed

    Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F

    2015-06-01

    The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ at risk (OARs) coverage were assessed using calculation of dose-volume histogram, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. The standard deviations (1SDs) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can be used also to optimize the treatment plan established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients.
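
    For readers unfamiliar with the quantities reported above, the sketch below shows one common formulation: gEUD computed from a differential dose-volume histogram and a Lyman-type NTCP evaluated at that gEUD. The DVH bins and the parameters a, TD50 and m are illustrative assumptions, not those used by the authors' in-house software.

        import numpy as np
        from scipy.stats import norm

        def gEUD(doses_gy, frac_volumes, a):
            """Generalized equivalent uniform dose from a differential DVH.
            doses_gy: bin doses; frac_volumes: fractional volumes; a: volume-effect parameter."""
            v = np.asarray(frac_volumes) / np.sum(frac_volumes)
            return (np.sum(v * np.asarray(doses_gy)**a))**(1.0 / a)

        def ntcp_lyman(geud_gy, td50_gy, m):
            """Lyman-type NTCP evaluated at the gEUD: Phi((gEUD - TD50) / (m * TD50))."""
            return norm.cdf((geud_gy - td50_gy) / (m * td50_gy))

        # Hypothetical 3-bin DVH for an organ at risk (assumed values, illustration only).
        doses = [20.0, 45.0, 60.0]      # Gy
        vols  = [0.6, 0.3, 0.1]         # fractional volumes
        eud = gEUD(doses, vols, a=8.0)  # large 'a' mimics a serial organ
        print(eud, ntcp_lyman(eud, td50_gy=65.0, m=0.14))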

  18. Suboptimal Performance in Cleft Lip/Palate Children- Who is Responsible?

    PubMed

    Lakhkar, Bhavana B

    2016-10-01

    Information in this article is from an observational study comparing intelligence in children with cleft lip and palate with that of normal children. Both groups performed the "draw a man" test, and the investigator noted the attitude and behaviour of the children and their parents. The study shows a low but normal intelligence quotient in children with oral defects compared with normal children. The probable reasons for the sub-normal performance appeared to be the overprotective attitude of the parents and the poor self-esteem of the children with oral defects.

  19. Impact of signal scattering and parametric uncertainties on receiver operating characteristics

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.

    2017-05-01

    The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
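
    A toy model in the same spirit: for a Rayleigh-fading (scattered) signal the detector power is exponentially distributed, so Pd and Pfa have closed forms for fixed parameters, and parametric uncertainty can be injected by averaging over lognormal draws of the noise power and SNR. All numbers below are placeholders, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def roc_with_uncertainty(n_draws=10000, noise_db_sd=3.0, snr_db_mean=10.0, snr_db_sd=3.0):
            """Average ROC over dB-normal (lognormal) uncertainty in noise power and SNR.
            For each draw, Pfa = exp(-thr / N) and Pd = exp(-thr / (N + S))."""
            thr = np.logspace(-2, 3, 200)                     # detection thresholds
            noise = 10**(rng.normal(0.0, noise_db_sd, n_draws) / 10.0)
            snr = 10**(rng.normal(snr_db_mean, snr_db_sd, n_draws) / 10.0)
            signal = noise * snr
            pfa = np.exp(-thr[None, :] / noise[:, None]).mean(axis=0)
            pd = np.exp(-thr[None, :] / (noise + signal)[:, None]).mean(axis=0)
            return pfa, pd

        pfa, pd = roc_with_uncertainty()
        # Compare with the fixed-parameter case, where Pd = Pfa**(1/(1 + SNR));
        # the averaged curve has much heavier low-Pfa tails.
        i = int(np.argmin(np.abs(pfa - 1e-3)))   # operating point near Pfa = 1e-3
        print(pfa[i], pd[i])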

  20. Stress perturbation associated with the Amazonas and other ancient continental rifts

    USGS Publications Warehouse

    Zoback, M.L.; Richardson, R.M.

    1996-01-01

    The state of stress in the vicinity of old continental rifts is examined to investigate the possibility that crustal structure associated with ancient rifts (specifically a dense rift pillow in the lower crust) may modify substantially the regional stress field. Both shallow (2.0-2.6 km depth) breakout data and deep (20-45 km depth) crustal earthquake focal mechanisms indicate a N to NNE maximum horizontal compression in the vicinity of the Paleozoic Amazonas rift in central Brazil. This compressive stress direction is nearly perpendicular to the rift structure and represents a ~75° rotation relative to a regional E-W compressive stress direction in the South American plate. Elastic two-dimensional finite element models of the density structure associated with the Amazonas rift (as inferred from independent gravity modeling) indicate that elastic support of this dense feature would generate horizontal rift-normal compressional stresses between 60 and 120 MPa, with values of 80-100 MPa probably most representative of the overall structure. The observed ~75° stress rotation constrains the ratio of the regional horizontal stress difference to the rift-normal compressive stress to be between 0.25 and 1.0, suggesting that this rift-normal stress may be from 1 to 4 times larger than the regional horizontal stress difference. A general expression for the modification of the normalized local horizontal shear stress (relative to the regional horizontal shear stress) shows that the same ratio of the rift-normal compression relative to the regional horizontal stress difference, which controls the amount of stress rotation, also determines whether the superposed stress increases or decreases the local maximum horizontal shear stress. The potential for fault reactivation of ancient continental rifts in general is analyzed considering both the local stress rotation and the modification of horizontal shear stress for both thrust and strike-slip stress regimes. In the Amazonas rift case, because the observed stress rotation only weakly constrains the ratio of the regional horizontal stress difference to the rift-normal compression to be between 0.25 and 1.0, our analysis is inconclusive: the resultant normalized horizontal shear stress may be reduced (for ratios >0.5) or enhanced (for ratios <0.5). Additional information is needed on all three stress magnitudes to predict how a change in horizontal shear stress directly influences the likelihood of faulting in the thrust-faulting stress regime in the vicinity of the Amazonas rift. A rift-normal stress associated with the seismically active New Madrid ancient rift may be sufficient to rotate the horizontal stress field consistent with strike-slip faults parallel to the axis of the rift, although this results in a 20-40% reduction in the local horizontal shear stress within the seismic zone. Sparse stress data in the vicinity of the seismically quiescent Midcontinent rift of the central United States suggest a stress state similar to that of New Madrid, with the local horizontal shear stress potentially reduced by as much as 60%. Thus the markedly different levels of seismic activity associated with these two subparallel ancient rifts are probably due to factors other than stress perturbations from dense rift pillows. The modeling and analysis here demonstrate that rift-normal compressive stresses are a significant source of stress acting on the lithosphere and may in some cases be a contributing factor to the association of intraplate seismicity with old zones of continental extension.

  1. Lateralization of temporal lobe epilepsy by multimodal multinomial hippocampal response-driven models.

    PubMed

    Nazem-Zadeh, Mohammad-Reza; Elisevich, Kost V; Schwalb, Jason M; Bagher-Ebadian, Hassan; Mahmoudi, Fariborz; Soltanian-Zadeh, Hamid

    2014-12-15

    Multiple modalities are used in determining laterality in mesial temporal lobe epilepsy (mTLE). It is unclear how much different imaging modalities should be weighted in decision-making. The purpose of this study is to develop response-driven multimodal multinomial models for lateralization of epileptogenicity in mTLE patients based upon imaging features in order to maximize the accuracy of noninvasive studies. The volumes, means and standard deviations of FLAIR intensity and means of normalized ictal-interictal SPECT intensity of the left and right hippocampi were extracted from preoperative images of a retrospective cohort of 45 mTLE patients with Engel class I surgical outcomes, as well as images of a cohort of 20 control, nonepileptic subjects. Using multinomial logistic function regression, the parameters of various univariate and multivariate models were estimated. Based on the Bayesian model averaging (BMA) theorem, response models were developed as compositions of independent univariate models. A BMA model composed of posterior probabilities of univariate response models of hippocampal volumes, means and standard deviations of FLAIR intensity, and means of SPECT intensity with the estimated weighting coefficients of 0.28, 0.32, 0.09, and 0.31, respectively, as well as a multivariate response model incorporating all mentioned attributes, demonstrated complete reliability by achieving a probability of detection of one with no false alarms to establish proper laterality in all mTLE patients. The proposed multinomial multivariate response-driven model provides a reliable lateralization of mesial temporal epileptogenicity including those patients who require phase II assessment. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Development of Thresholds and Exceedance Probabilities for Influent Water Quality to Meet Drinking Water Regulations

    NASA Astrophysics Data System (ADS)

    Reeves, K. L.; Samson, C.; Summers, R. S.; Balaji, R.

    2017-12-01

    Drinking water treatment utilities (DWTU) are tasked with the challenge of meeting disinfection and disinfection byproduct (DBP) regulations to provide safe, reliable drinking water under changing climate and land surface characteristics. DBPs form in drinking water when disinfectants, commonly chlorine, react with organic matter as measured by total organic carbon (TOC), and physical removal of pathogen microorganisms are achieved by filtration and monitored by turbidity removal. Turbidity and TOC in influent waters to DWTUs are expected to increase due to variable climate and more frequent fires and droughts. Traditional methods for forecasting turbidity and TOC require catchment specific data (i.e. streamflow) and have difficulties predicting them under non-stationary climate. A modelling framework was developed to assist DWTUs with assessing their risk for future compliance with disinfection and DBP regulations under changing climate. A local polynomial method was developed to predict surface water TOC using climate data collected from NOAA, Normalized Difference Vegetation Index (NDVI) data from the IRI Data Library, and historical TOC data from three DWTUs in diverse geographic locations. Characteristics from the DWTUs were used in the EPA Water Treatment Plant model to determine thresholds for influent TOC that resulted in DBP concentrations within compliance. Lastly, extreme value theory was used to predict probabilities of threshold exceedances under the current climate. Results from the utilities were used to produce a generalized TOC threshold approach that only requires water temperature and bromide concentration. The threshold exceedance model will be used to estimate probabilities of exceedances under projected climate scenarios. Initial results show that TOC can be forecasted using widely available data via statistical methods, where temperature, precipitation, Palmer Drought Severity Index, and NDVI with various lags were shown to be important predictors of TOC, and TOC thresholds can be determined using water temperature and bromide concentration. Results include a model to predict influent turbidity and turbidity thresholds, similar to the TOC models, as well as probabilities of threshold exceedances for TOC and turbidity under changing climate.
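
    A rough sketch of the extreme-value step, assuming a peaks-over-threshold approach: a generalized Pareto distribution is fitted to exceedances over a high threshold and converted into an exceedance probability. The TOC series, threshold choice and fitted parameters below are synthetic placeholders, not utility data.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(1)

        # Synthetic daily TOC series (mg/L) standing in for utility data.
        toc = rng.lognormal(mean=1.0, sigma=0.4, size=5000)

        u = np.quantile(toc, 0.95)          # modelling threshold
        excess = toc[toc > u] - u           # peaks over threshold
        zeta_u = np.mean(toc > u)           # empirical rate of exceeding u

        # Fit a generalized Pareto distribution to the excesses (location fixed at 0).
        xi, _, sigma = genpareto.fit(excess, floc=0.0)

        def prob_exceed(x):
            """P(TOC > x) for x above the modelling threshold u."""
            return zeta_u * genpareto.sf(x - u, xi, loc=0.0, scale=sigma)

        print(prob_exceed(u * 1.5))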

  3. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It was shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations could be performed on a normal computer. This method was implemented in the freely available open source software R that is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
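
    For orientation, the sketch below shows the discrete Laplace probability mass function and a naive single-subpopulation haplotype-frequency estimate built from it; the repeat numbers and dispersion parameters are invented, and the full method in the paper additionally fits a mixture of such distributions with the EM algorithm.

        import numpy as np

        def discrete_laplace_pmf(k, p):
            """P(X = k) = (1 - p)/(1 + p) * p^|k| for integer k and 0 < p < 1."""
            return (1.0 - p) / (1.0 + p) * p**np.abs(k)

        def haplotype_frequency(repeats, centers, ps):
            """Naive estimate for one subpopulation: product over loci of a discrete
            Laplace pmf centred at that subpopulation's central haplotype."""
            repeats, centers, ps = map(np.asarray, (repeats, centers, ps))
            return float(np.prod(discrete_laplace_pmf(repeats - centers, ps)))

        # Hypothetical 3-locus Y-STR haplotype (allele repeat numbers), central haplotype
        # and per-locus dispersion parameters; illustrative values only.
        print(haplotype_frequency(repeats=[14, 30, 23], centers=[14, 29, 23], ps=[0.2, 0.35, 0.15]))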

  4. Pretest expectations strongly influence interpretation of abnormal laboratory results and further management

    PubMed Central

    2010-01-01

    Background Abnormal results of diagnostic laboratory tests can be difficult to interpret when disease probability is very low. Although most physicians generally do not use Bayesian calculations to interpret abnormal results, their estimates of pretest disease probability and reasons for ordering diagnostic tests may - in a more implicit manner - influence test interpretation and further management. A better understanding of this influence may help to improve test interpretation and management. Therefore, the objective of this study was to examine the influence of physicians' pretest disease probability estimates, and their reasons for ordering diagnostic tests, on test result interpretation, posttest probability estimates and further management. Methods Prospective study among 87 primary care physicians in the Netherlands who each ordered laboratory tests for 25 patients. They recorded their reasons for ordering the tests (to exclude or confirm disease or to reassure patients) and their pretest disease probability estimates. Upon receiving the results they recorded how they interpreted the tests, their posttest probability estimates and further management. Logistic regression was used to analyse whether the pretest probability and the reasons for ordering tests influenced the interpretation, the posttest probability estimates and the decisions on further management. Results The physicians ordered tests for diagnostic purposes for 1253 patients; 742 patients had an abnormal result (64%). Physicians' pretest probability estimates and their reasons for ordering diagnostic tests influenced test interpretation, posttest probability estimates and further management. Abnormal results of tests ordered for reasons of reassurance were significantly more likely to be interpreted as normal (65.8%) compared to tests ordered to confirm a diagnosis or exclude a disease (27.7% and 50.9%, respectively). The odds for abnormal results to be interpreted as normal were much lower when the physician estimated a high pretest disease probability, compared to a low pretest probability estimate (OR = 0.18, 95% CI = 0.07-0.52, p < 0.001). Conclusions Interpretation and management of abnormal test results were strongly influenced by physicians' estimation of pretest disease probability and by the reason for ordering the test. By relating abnormal laboratory results to their pretest expectations, physicians may seek a balance between over- and under-reacting to laboratory test results. PMID:20158908

  5. Pretest expectations strongly influence interpretation of abnormal laboratory results and further management.

    PubMed

    Houben, Paul H H; van der Weijden, Trudy; Winkens, Bjorn; Winkens, Ron A G; Grol, Richard P T M

    2010-02-16

    Abnormal results of diagnostic laboratory tests can be difficult to interpret when disease probability is very low. Although most physicians generally do not use Bayesian calculations to interpret abnormal results, their estimates of pretest disease probability and reasons for ordering diagnostic tests may--in a more implicit manner--influence test interpretation and further management. A better understanding of this influence may help to improve test interpretation and management. Therefore, the objective of this study was to examine the influence of physicians' pretest disease probability estimates, and their reasons for ordering diagnostic tests, on test result interpretation, posttest probability estimates and further management. Prospective study among 87 primary care physicians in the Netherlands who each ordered laboratory tests for 25 patients. They recorded their reasons for ordering the tests (to exclude or confirm disease or to reassure patients) and their pretest disease probability estimates. Upon receiving the results they recorded how they interpreted the tests, their posttest probability estimates and further management. Logistic regression was used to analyse whether the pretest probability and the reasons for ordering tests influenced the interpretation, the posttest probability estimates and the decisions on further management. The physicians ordered tests for diagnostic purposes for 1253 patients; 742 patients had an abnormal result (64%). Physicians' pretest probability estimates and their reasons for ordering diagnostic tests influenced test interpretation, posttest probability estimates and further management. Abnormal results of tests ordered for reasons of reassurance were significantly more likely to be interpreted as normal (65.8%) compared to tests ordered to confirm a diagnosis or exclude a disease (27.7% and 50.9%, respectively). The odds for abnormal results to be interpreted as normal were much lower when the physician estimated a high pretest disease probability, compared to a low pretest probability estimate (OR = 0.18, 95% CI = 0.07-0.52, p < 0.001). Interpretation and management of abnormal test results were strongly influenced by physicians' estimation of pretest disease probability and by the reason for ordering the test. By relating abnormal laboratory results to their pretest expectations, physicians may seek a balance between over- and under-reacting to laboratory test results.
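
    The implicit Bayesian update discussed above is short when written out. The sketch below converts a pretest probability into a posttest probability through the test's likelihood ratio; the sensitivity, specificity and pretest values are hypothetical.

        def posttest_probability(pretest_p, sensitivity, specificity, result_abnormal=True):
            """Convert a pretest probability to a posttest probability via the likelihood ratio."""
            lr = (sensitivity / (1.0 - specificity) if result_abnormal
                  else (1.0 - sensitivity) / specificity)
            pre_odds = pretest_p / (1.0 - pretest_p)
            post_odds = pre_odds * lr
            return post_odds / (1.0 + post_odds)

        # Hypothetical test (sensitivity 0.90, specificity 0.80): an abnormal result raises
        # a 2% pretest probability only to about 8%, which illustrates why abnormal results
        # obtained at very low disease probability are often reasonably read as "normal".
        print(posttest_probability(0.02, 0.90, 0.80, result_abnormal=True))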

  6. The value of Bayes' theorem for interpreting abnormal test scores in cognitively healthy and clinical samples.

    PubMed

    Gavett, Brandon E

    2015-03-01

    The base rates of abnormal test scores in cognitively normal samples have been a focus of recent research. The goal of the current study is to illustrate how Bayes' theorem uses these base rates--along with the same base rates in cognitively impaired samples and prevalence rates of cognitive impairment--to yield probability values that are more useful for making judgments about the absence or presence of cognitive impairment. Correlation matrices, means, and standard deviations were obtained from the Wechsler Memory Scale--4th Edition (WMS-IV) Technical and Interpretive Manual and used in Monte Carlo simulations to estimate the base rates of abnormal test scores in the standardization and special groups (mixed clinical) samples. Bayes' theorem was applied to these estimates to identify probabilities of normal cognition based on the number of abnormal test scores observed. Abnormal scores were common in the standardization sample (65.4% scoring below a scaled score of 7 on at least one subtest) and more common in the mixed clinical sample (85.6% scoring below a scaled score of 7 on at least one subtest). Probabilities varied according to the number of abnormal test scores, base rates of normal cognition, and cutoff scores. The results suggest that interpretation of base rates obtained from cognitively healthy samples must also account for data from cognitively impaired samples. Bayes' theorem can help neuropsychologists answer questions about the probability that an individual examinee is cognitively healthy based on the number of abnormal test scores observed.
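
    A minimal sketch of the Bayes step described above, with invented base rates standing in for the Monte Carlo estimates derived from the WMS-IV samples.

        def prob_normal_given_k_abnormal(k, base_rate_normal, p_k_given_normal, p_k_given_impaired):
            """Bayes' theorem: P(cognitively normal | k abnormal test scores observed)."""
            prior_impaired = 1.0 - base_rate_normal
            num = p_k_given_normal[k] * base_rate_normal
            den = num + p_k_given_impaired[k] * prior_impaired
            return num / den

        # Hypothetical base rates of observing k = 0..3+ abnormal subtest scores
        # (illustrative numbers, not the simulation estimates from the study).
        p_k_normal   = {0: 0.35, 1: 0.30, 2: 0.20, 3: 0.15}
        p_k_impaired = {0: 0.14, 1: 0.16, 2: 0.25, 3: 0.45}
        print(prob_normal_given_k_abnormal(2, base_rate_normal=0.80,
                                           p_k_given_normal=p_k_normal,
                                           p_k_given_impaired=p_k_impaired))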

  7. Applications of multiscale change point detections to monthly stream flow and rainfall in Xijiang River in southern China, part I: correlation and variance

    NASA Astrophysics Data System (ADS)

    Zhu, Yuxiang; Jiang, Jianmin; Huang, Changxing; Chen, Yongqin David; Zhang, Qiang

    2018-04-01

    This article, as part I, introduces three algorithms and applies them to the monthly stream flow and rainfall series of the Xijiang River, southern China. The three algorithms are (1) normalization of the probability distribution, (2) a scanning U test for change points in the correlation between two time series, and (3) a scanning F-test for change points in variances. The normalization algorithm adopts the quantile method to transform data from a non-normal into the normal probability distribution. The scanning U test and F-test have three common features: they graft the classical statistics onto the wavelet algorithm, add corrections for independence to each statistical criterion at a given confidence level, and provide almost objective and automatic detection across multiple time scales. In addition, coherency analyses between the two series are carried out for changes in variance. The application results show that changes in the monthly discharge are still controlled by natural precipitation variations in the Xijiang fluvial system. Human activities have perhaps disturbed the ecological balance to a certain extent and over shorter spells, but so far have not violated the natural relationships of correlation and variance changes.
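
    The quantile-based normalization in algorithm (1) can be sketched generically as a rank-based normal-scores transform; this is a standard implementation, not the authors' code, and the synthetic rainfall-like data are only for illustration.

        import numpy as np
        from scipy.stats import norm, rankdata

        def normal_scores_transform(x):
            """Map a sample with an arbitrary distribution to standard-normal scores
            by passing empirical plotting positions through the normal quantile function."""
            x = np.asarray(x, dtype=float)
            ranks = rankdata(x)                      # average ranks handle ties
            pp = (ranks - 0.5) / len(x)              # plotting positions in (0, 1)
            return norm.ppf(pp)

        # Example: strongly skewed monthly rainfall-like data become approximately N(0, 1).
        rain = np.random.default_rng(2).gamma(shape=1.5, scale=40.0, size=240)
        z = normal_scores_transform(rain)
        print(round(z.mean(), 3), round(z.std(), 3))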

  8. Bayesics

    NASA Astrophysics Data System (ADS)

    Skilling, John

    2005-11-01

    This tutorial gives a basic overview of Bayesian methodology, from its axiomatic foundation through the conventional development of data analysis and model selection to its rôle in quantum mechanics, and ending with some comments on inference in general human affairs. The central theme is that probability calculus is the unique language within which we can develop models of our surroundings that have predictive capability. These models are patterns of belief; there is no need to claim external reality. 1. Logic and probability 2. Probability and inference 3. Probability and model selection 4. Prior probabilities 5. Probability and frequency 6. Probability and quantum mechanics 7. Probability and fundamentalism 8. Probability and deception 9. Prediction and truth

  9. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  10. Present developments in reaching an international consensus for a model-based approach to particle beam therapy.

    PubMed

    Prayongrat, Anussara; Umegaki, Kikuo; van der Schaaf, Arjen; Koong, Albert C; Lin, Steven H; Whitaker, Thomas; McNutt, Todd; Matsufuji, Naruhiro; Graves, Edward; Mizuta, Masahiko; Ogawa, Kazuhiko; Date, Hiroyuki; Moriwaki, Kensuke; Ito, Yoichi M; Kobashi, Keiji; Dekura, Yasuhiro; Shimizu, Shinichi; Shirato, Hiroki

    2018-03-01

    Particle beam therapy (PBT), including proton and carbon ion therapy, is an emerging innovative treatment for cancer patients. Due to the high cost of and limited access to treatment, meticulous selection of patients who would benefit most from PBT, when compared with standard X-ray therapy (XRT), is necessary. Due to the cost and labor involved in randomized controlled trials, the model-based approach (MBA) is used as an alternative means of establishing scientific evidence in medicine, and it can be improved continuously. Good databases and reasonable models are crucial for the reliability of this approach. The tumor control probability and normal tissue complication probability models are good illustrations of the advantages of PBT, but pre-existing NTCP models have been derived from historical patient treatments from the XRT era. This highlights the necessity of prospectively analyzing specific treatment-related toxicities in order to develop PBT-compatible models. An international consensus has been reached at the Global Institution for Collaborative Research and Education (GI-CoRE) joint symposium, concluding that a systematically developed model is required for model accuracy and performance. Six important steps that need to be observed in these considerations include patient selection, treatment planning, beam delivery, dose verification, response assessment, and data analysis. Advanced technologies in radiotherapy and computer science can be integrated to improve the efficacy of a treatment. Model validation and appropriately defined thresholds in a cost-effectiveness centered manner, together with quality assurance in the treatment planning, have to be achieved prior to clinical implementation.

  11. Quantitative Analysis of Land Loss in Coastal Louisiana Using Remote Sensing

    NASA Astrophysics Data System (ADS)

    Wales, P. M.; Kuszmaul, J.; Roberts, C.

    2005-12-01

    For the past thirty-five years, land loss along the Louisiana coast has been recognized as a growing problem. One of the clearest indicators of this land loss is that in 2000 smooth cordgrass (Spartina alterniflora) was turning brown well before its normal dormancy period. Over 100,000 acres of marsh were affected by the 2000 browning. In 2001, data were collected using low-altitude, helicopter-based transects of the coast, with 7,400 data points collected by researchers at the USGS, National Wetlands Research Center, and Louisiana Department of Natural Resources. The surveys contained data describing the characteristics of the marsh, including latitude, longitude, marsh condition, marsh color, percent vegetated, and marsh die-back. The ultimate goal of the study is to create a model that combines remote sensing images, field data, and statistical analysis into a methodology for estimating the margin of error in measurements of coastal land loss (erosion). A model was successfully created using a series of band combinations as predictive variables. The most successful band combinations or predictive variables were the braud value [(Sum Visible TM Bands - Sum Infrared TM Bands)/(Sum Visible TM Bands + Sum Infrared TM Bands)], TM band 7/TM band 2, brightness, NDVI, wetness, vegetation index, and a 7x7 autocovariate nearest-neighbor floating window. These values were used to generate the logistic regression model. A new image was created based on the logistic regression probability equation, in which each pixel represents the probability of finding water or non-water at that location. Pixels within each image that have a high probability of representing water have a value close to 1, and pixels with a low probability of representing water have a value close to 0. A logistic regression model is proposed that uses seven independent variables; this model yields an accurate classification at 86.5% of the locations considered in the 1997 and 2001 surveys. When the logistic regression model was applied to the satellite imagery of the entire Louisiana coast study area, the statewide loss from 1997 to 2001 was estimated at 358 mi2 to 368 mi2 using two different methods for estimating land loss.

  12. Anti-fatigue activity of sea cucumber peptides prepared from Stichopus japonicus in an endurance swimming rat model.

    PubMed

    Ye, Jing; Shen, Caihong; Huang, Yayan; Zhang, Xueqin; Xiao, Meitian

    2017-10-01

    Sea cucumber (Stichopus japonicus) is a well-known nutritious and luxurious seafood in Asia that has attracted increasing attention in recent years because of its nutritional value and bioactivities. In this study, the anti-fatigue activity of sea cucumber peptides (SCP) prepared from S. japonicus was evaluated in a load-induced endurance swimming model. The SCP prepared in this study consisted mainly of low-molecular-weight peptides (<2 kDa). Amino acid composition analysis revealed that SCP was rich in glycine, glutamic acid and proline. The endurance capability of rats against fatigue was significantly improved by SCP treatment. Meanwhile, marked alterations in energy metabolic markers, antioxidant enzymes, antioxidant capacity and oxidative stress biomarkers were normalized. Moreover, administration of SCP modulated alterations in inflammatory cytokines and downregulated the overexpression of TLR4 and NF-κB. SCP thus has anti-fatigue activity, which it probably exerts by normalizing energy metabolism as well as alleviating oxidative damage and inflammatory responses. © 2017 Society of Chemical Industry.

  13. A Thermo-Hydro-Mechanical coupled Numerical modeling of Injection-induced seismicity on a pre-existing fault

    NASA Astrophysics Data System (ADS)

    Kim, Jongchan; Archer, Rosalind

    2017-04-01

    In terms of energy development (oil, gas and geothermal fields) and environmental improvement (carbon dioxide sequestration), fluid injection into the subsurface has increased dramatically. As a side effect of these operations, the number of injection-induced seismic events has also risen significantly. It is known that the main causes of induced seismicity are changes in local shear and normal stresses, as well as in pore pressure. This mechanism predominantly increases the probability of earthquake occurrence on permeable pre-existing fault zones. In this 2D, fully coupled THM geothermal-reservoir numerical simulation of injection-induced seismicity, we investigate the thermal, hydraulic and mechanical behavior of the fracture zone, considering a range of (1) fault permeabilities, (2) injection rates and (3) injection temperatures, to identify the parameters contributing most to induced seismic activity. We also calculate the spatiotemporal variation of the Coulomb stress, which is a combination of shear stress, normal stress and pore pressure, and finally forecast the seismicity rate on the fault zone by applying the seismicity prediction model of Dieterich (1994).
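
    For reference, a minimal form of the Coulomb failure stress change combining the three quantities mentioned above, written with normal stress positive in extension (unclamping); the friction coefficient and stress changes below are placeholder values.

        def coulomb_stress_change(d_shear, d_normal, d_pore, friction=0.6):
            """Delta CFS = d_tau + mu * (d_sigma_n + d_p), with d_sigma_n positive when the
            fault is unclamped (extension) and d_tau positive in the slip direction. MPa in, MPa out."""
            return d_shear + friction * (d_normal + d_pore)

        # Hypothetical perturbation on a pre-existing fault: a small shear-stress increase,
        # slight clamping, and a pore-pressure rise from injection dominating the balance.
        print(coulomb_stress_change(d_shear=0.05, d_normal=-0.02, d_pore=0.30))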

  14. Electrochemical oxidation of ampicillin antibiotic at boron-doped diamond electrodes and process optimization using response surface methodology.

    PubMed

    Körbahti, Bahadır K; Taşyürek, Selin

    2015-03-01

    Electrochemical oxidation and process optimization of ampicillin antibiotic at boron-doped diamond electrodes (BDD) were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance well according to normal probability, studentized residuals, and outlier t residual plots. Residual plots followed a normal distribution, and outlier t values indicated that the approximations of the fitted models to the quadratic response surfaces were very good. Optimum operating conditions were determined at 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm(2) current density, and 36 °C reaction temperature. Under response surface optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1 %, 92.5 %, and 71.7 kWh/kg CODr, respectively.

  15. Shell-model computed cross sections for charged-current scattering of astrophysical neutrinos off 40Ar

    NASA Astrophysics Data System (ADS)

    Kostensalo, Joel; Suhonen, Jouni; Zuber, K.

    2018-03-01

    Charged-current (anti)neutrino-40Ar cross sections for astrophysical neutrinos have been calculated. The initial and final nuclear states were calculated using the nuclear shell model. The folded solar-neutrino scattering cross section was found to be 1.78(23) × 10^-42 cm^2, which is higher than what the previous papers have reported. The contributions from the 1^- and 2^- multipoles were found to be significant at supernova-neutrino energies, confirming the random-phase approximation (RPA) result of a previous study. The effects of neutrino flavor conversions in dense stellar matter (matter oscillations) were found to enhance the neutrino-scattering cross sections significantly for both the normal and inverted mass hierarchies. For the antineutrino scattering, only a small difference between the nonoscillating and inverted-hierarchy cross sections was found, while the normal-hierarchy cross section was 2-3 times larger than the nonoscillating cross section, depending on the adopted parametrization of the Fermi-Dirac distribution. This property of the supernova-antineutrino signal could probably be used to distinguish between the two hierarchies in megaton LAr detectors.

  16. Noise-induced transitions in a double-well oscillator with nonlinear dissipation.

    PubMed

    Semenov, Vladimir V; Neiman, Alexander B; Vadivasova, Tatyana E; Anishchenko, Vadim S

    2016-05-01

    We develop a model of bistable oscillator with nonlinear dissipation. Using a numerical simulation and an electronic circuit realization of this system we study its response to additive noise excitations. We show that depending on noise intensity the system undergoes multiple qualitative changes in the structure of its steady-state probability density function (PDF). In particular, the PDF exhibits two pitchfork bifurcations versus noise intensity, which we describe using an effective potential and corresponding normal form of the bifurcation. These stochastic effects are explained by the partition of the phase space by the nullclines of the deterministic oscillator.

  17. Unitary limit in crossed Andreev transport

    DOE PAGES

    Sadovskyy, I. A.; Lesovik, G. B.; Vinokur, V. M.

    2015-10-08

    One of the most promising approaches for generating spin- and energy-entangled electron pairs is splitting a Cooper pair into the metal through spatially separated terminals. Utilizing hybrid systems with the energy-dependent barriers at the superconductor/normal metal (NS) interfaces, one can achieve a practically 100% efficiency outcome of entangled electrons. We investigate a minimalistic one-dimensional model comprising a superconductor and two metallic leads and derive an expression for an electron-to-hole transmission probability as a measure of splitting efficiency. We find the conditions for achieving 100% efficiency and present analytical results for the differential conductance and differential noise.

  18. The deep planetary magnetotail revisited

    NASA Technical Reports Server (NTRS)

    Macek, Wieslaw M.

    1989-01-01

    The magnetotail model of Grzedzielski and Macek (1988) is extended to great distances in the antisolar direction. For typical solar wind parameters at 1 AU and the most probable set of parameters of the model as determined for the ISEE-3 region of 200 earth radii, R(E), the open geotail extends to at least 3000 - 4000 R(E) downstream from earth, where it forms a cavity filled with a dense hot plasma and low magnetic field strengths. The cross section of this cavity in the plane perpendicular to the earth-sun line has dimensions of 300 - 400 R(E) parallel to the ecliptic plane, but only 5 R(E) in the direction normal to the ecliptic. It seems likely that the magnetotail would become filamentary at such distances.

  19. Modelling of energy expended by free swimming spermatozoa in temperature-dependent viscous semen.

    PubMed

    Foo, Jong Yong Abdiel

    2010-01-01

    Derived models of fertilization kinetics have relied upon estimates of the swimming velocity of spermatozoa from the insemination site to a fallopian tube. However, limited derivations are available describing the probability and energy expended when spermatozoa collide with one another. An analytic approach of spermatozoon motion in a linear viscoelastic fluid is adopted to simplify the derivation. The complex kinematics of motion of an inextensible flagellum is modelled as planar flagellar wave of small amplitude. In humans, a temperature difference is expected between the cooler tubal isthmus and the warmer tubal ampulla. Thus, fluidic characteristics of semen such as viscosity can vary along the female reproductive tract. The results suggest that the probability of spermatozoa colliding in relatively lower viscous semen increases by 64.87% for a 0.5 degrees C surge in temperature. Moreover, this increases for a denser concentration of spermatozoa due to the limited semen volume available to manoeuvre. In addition, the propulsive forces and shear stress were 39.35% lower in less viscous semen due to an increase in temperature of only 0.5 degrees C. Hence, the described derivations herein can assist in the understanding of work done by a normal motile spermatozoon in a pool of semen.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
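
    To make a few of the binary-classification metrics listed above concrete, the sketch below computes them directly from 2x2 contingency counts. It is plain Python written for illustration, not the PyForecastTools API, and the counts are invented.

        def binary_metrics(hits, misses, false_alarms, correct_negatives):
            """Common 2x2 contingency-table verification metrics."""
            h, m, f, c = hits, misses, false_alarms, correct_negatives
            n = h + m + f + c
            pod = h / (h + m)                      # probability of detection
            pofd = f / (f + c)                     # probability of false detection
            far = f / (h + f)                      # false alarm ratio
            threat = h / (h + m + f)               # threat score (critical success index)
            peirce = pod - pofd                    # Peirce (Hanssen-Kuipers) skill score
            expected = ((h + m) * (h + f) + (c + m) * (c + f)) / n   # chance-agreement count
            heidke = (h + c - expected) / (n - expected)             # Heidke skill score
            return dict(POD=pod, POFD=pofd, FAR=far, TS=threat, PSS=peirce, HSS=heidke)

        print(binary_metrics(hits=82, misses=23, false_alarms=38, correct_negatives=222))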

  1. Using Multi-Scenario Tsunami Modelling Results combined with Probabilistic Analyses to provide Hazard Information for the South-West Coast of Indonesia

    NASA Astrophysics Data System (ADS)

    Zosseder, K.; Post, J.; Steinmetz, T.; Wegscheider, S.; Strunz, G.

    2009-04-01

    Indonesia is located at one of the most active geological subduction zones in the world. Following the most recent seaquakes and their subsequent tsunamis in December 2004 and July 2006, tsunamis are expected to occur in the near future as well, owing to increased tectonic tension leading to abrupt vertical seafloor alterations after a century of relative tectonic silence. To face this devastating threat, tsunami hazard maps are very important as a basis for evacuation planning and mitigation strategies. In terms of tsunami impact, the hazard assessment is mostly covered by numerical modelling, because model results normally offer the most precise database for a hazard analysis as they include spatially distributed data and their influence on the hydraulic dynamics. Generally, a model result gives a probability for the intensity distribution of a tsunami at the coast (or run-up) and the spatial distribution of the maximum inundation area, depending on the location and magnitude of the tsunami source used. The boundary condition of the source used for the model is mostly chosen by a worst-case approach. Hence the source location and magnitude that are likely to occur and are assumed to generate the worst impact are used to predict the impact in a specific area. But for a tsunami hazard assessment covering a large coastal area, as required in the GITEWS (German Indonesian Tsunami Early Warning System) project in which the present work is embedded, this approach is not practicable, because many tsunami sources can cause an impact at the coast and must be considered. Thus a multi-scenario tsunami model approach was developed to provide a reliable hazard assessment covering large areas. For the Indonesian Early Warning System, many tsunami scenarios were modelled by the Alfred Wegener Institute (AWI) for different probable tsunami sources and different magnitudes along the Sunda Trench. Every modelled scenario delivers the spatial distribution of the inundation for a specific area, the wave height at the coast in this area and the estimated times of arrival (ETAs) of the waves, caused by one tsunamigenic source with a specific magnitude. These parameters from the several scenarios can overlap each other along the coast and must be combined to obtain one comprehensive hazard assessment for all possible future tsunamis in the region under observation. The simplest way to derive the inundation probability along the coast using the multi-scenario approach is to overlay all scenario inundation results and to determine how often a point on land will be significantly inundated across the various scenarios. But this does not take into account that the tsunamigenic sources used for the modelled scenarios have different likelihoods of causing a tsunami. Hence a statistical analysis of historical data and geophysical investigation results, based on numerical modelling results, is added to the hazard assessment, which clearly improves its significance. For this purpose the present method was developed; it contains a logical combination of the diverse probabilities assessed, such as the probability of occurrence of different earthquake magnitudes at different localities, the probability of occurrence of a specific wave height at the coast, and the probability of every point on land being hit by a tsunami. The values are combined by a logical-tree technique and quantified by statistical analysis of historical data and of the tsunami modelling results, as mentioned before. This results in a tsunami inundation probability map covering the South-West Coast of Indonesia which nevertheless shows a significant spatial diversity, offering a good basis for evacuation planning and mitigation strategies. Keywords: tsunami hazard assessment, tsunami modelling, probabilistic analysis, early warning
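
    The paper combines the scenario results with a logical-tree technique; as a much simpler stand-in that conveys the idea, the sketch below aggregates per-scenario occurrence probabilities and modelled inundation masks under an independence assumption. The function name and all numbers are hypothetical.

        import numpy as np

        def inundation_probability(scenario_probs, inundation_masks):
            """P(point inundated) = 1 - prod_i (1 - p_i * I_i), assuming (simplistically)
            that the scenarios occur independently.
            scenario_probs: occurrence probability of each scenario in the time window.
            inundation_masks: array (n_scenarios, n_points), True where that scenario's
            modelled run-up reaches the point."""
            p = np.asarray(scenario_probs)[:, None]
            masks = np.asarray(inundation_masks, dtype=float)
            return 1.0 - np.prod(1.0 - p * masks, axis=0)

        # Toy example: three scenarios, four coastal points (all values hypothetical).
        probs = [0.05, 0.02, 0.10]
        masks = [[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [1, 1, 1, 0]]
        print(inundation_probability(probs, masks))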

  2. Hypergame theory applied to cyber attack and defense

    NASA Astrophysics Data System (ADS)

    House, James Thomas; Cybenko, George

    2010-04-01

    This work concerns cyber attack and defense in the context of game theory--specifically hypergame theory. Hypergame theory extends classical game theory with the ability to deal with differences in players' expertise, differences in their understanding of game rules, misperceptions, and so forth. Each of these different sub-scenarios, or subgames, is associated with a probability--representing the likelihood that the given subgame is truly "in play" at a given moment. In order to form an optimal attack or defense policy, these probabilities must be learned if they're not known a-priori. We present hidden Markov model and maximum entropy approaches for accurately learning these probabilities through multiple iterations of both normal and modified game play. We also give a widely-applicable approach for the analysis of cases where an opponent is aware that he is being studied, and intentionally plays to spoil the process of learning and thereby obfuscate his attributes. These are considered in the context of a generic, abstract cyber attack example. We demonstrate that machine learning efficacy can be heavily dependent on the goals and styles of participant behavior. To this end detailed simulation results under various combinations of attacker and defender behaviors are presented and analyzed.

  3. Reduction of cardiac and pulmonary complication probabilities after breathing adapted radiotherapy for breast cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korreman, Stine S.; Pedersen, Anders N.; Aarup, Lasse Rye

    Purpose: Substantial reductions of cardio-pulmonary radiation doses can be achieved using voluntary deep inspiration breath-hold (DIBH) or free breathing inspiration gating (IG) in radiotherapy after conserving surgery for breast cancer. The purpose of this study is to evaluate the radiobiological implications of such dosimetric benefits. Methods and Materials: Patients from previously reported studies were pooled for a total of 33 patients. All patients underwent DIBH and free breathing (FB) scans, and 17 patients underwent an additional IG scan. Tangential conformal treatment plans covering the remaining breast, internal mammary, and periclavicular nodes were optimized for each scan, prescription dose 48 Gy. Normal tissue complication probabilities were calculated using the relative seriality model for the heart, and the model proposed by Burman et al. for the lung. Results: Previous computed tomography studies showed that both voluntary DIBH and IG provided a reduction of the lung V5 (relative volume receiving more than 50% of the prescription dose) on the order of 30-40%, and an 80-90% reduction of the heart V5 for left-sided cancers. The corresponding pneumonitis probability of 28.1% (range, 0.7-95.6%) for FB could be reduced to 2.6% (range, 0.1-40.1%) for IG, and 4.3% (range, 0.1-59%) for DIBH. The cardiac mortality probability could be reduced from 4.8% (range, 0.1-23.4%) in FB to 0.5% (range, 0.1-2.6%) for IG and 0.1% (range, 0-3.0%) for DIBH. Conclusions: Remarkable potential is shown for simple voluntary DIBH and free breathing IG to reduce the risk of both cardiac mortality and pneumonitis for the common technique of adjuvant tangential breast irradiation.
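
    One common parameterization of the relative seriality NTCP model named above combines a Poisson-type dose-response per DVH bin with a seriality parameter s; the sketch below uses that form with purely illustrative DVH bins and parameter values, which are not those of the study.

        import numpy as np

        def poisson_response(dose_gy, d50_gy, gamma):
            """Per-bin response probability: P(D) = 2 ** (-exp(e * gamma * (1 - D / D50)))."""
            return 2.0 ** (-np.exp(np.e * gamma * (1.0 - dose_gy / d50_gy)))

        def ntcp_relative_seriality(doses_gy, frac_volumes, d50_gy, gamma, s):
            """Relative seriality model: NTCP = (1 - prod_i (1 - P(D_i)^s)^{v_i})^(1/s)."""
            v = np.asarray(frac_volumes) / np.sum(frac_volumes)
            p = poisson_response(np.asarray(doses_gy), d50_gy, gamma)
            return (1.0 - np.prod((1.0 - p**s) ** v)) ** (1.0 / s)

        # Hypothetical cardiac DVH bins and parameters (for illustration only).
        doses = [2.0, 10.0, 25.0, 40.0]
        vols  = [0.70, 0.20, 0.07, 0.03]
        print(ntcp_relative_seriality(doses, vols, d50_gy=52.3, gamma=1.28, s=1.0))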

  4. Modelling Spatial Dependence Structures Between Climate Variables by Combining Mixture Models with Copula Models

    NASA Astrophysics Data System (ADS)

    Khan, F.; Pilz, J.; Spöck, G.

    2017-12-01

    Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. These structures also affect the hydrological conditions, and analyses will consequently provide misleading results if they are not taken into account properly. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature and precipitation, in the monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Multivariate mixture normal distributions were used as marginals under the copula models for temperature, and gamma distributions for precipitation. A comparison was made between the C-Vine, D-Vine and Student t-copula models, based on observed and simulated spatial dependence structures, to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performance. The copula models captured the patterns of spatial dependence structure between climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. Important statistics of the observed data were closely approximated, except for maximum values for temperature and minimum values for minimum temperature. Probability density functions of the simulated data closely follow those of the observed data for all variables. C- and D-Vines are better tools for modelling the dependence between variables; however, Student t-copulas compete closely for precipitation. Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.

  5. Are the O stars in WR+O binaries exceptionally rapid rotators?

    NASA Astrophysics Data System (ADS)

    Reeve, Dominic; Howarth, Ian D.

    2018-05-01

    We examine claims of strong gravity-darkening effects in the O-star components of WR+O binaries. We generate synthetic spectra for a wide range of parameters, and show that the line-width results are consistent with extensive measurements of O stars that are either single or are members of `normal' binaries. By contrast, the WR+O results are at the extremes of, or outside, the distributions of both models and other observations. Remeasurement of the WR+O spectra shows that they can be reconciled with other results by judicious choice of pseudo-continuum normalization. With this interpretation, the supersynchronous rotation previously noted for the O-star components in the WR+O binaries with the longest orbital periods appears to be unexceptional. Our investigation is therefore consistent with the aphorism that if the title of a paper ends with a question mark, the answer is probably `no'.

  6. Microgravity as a novel environmental signal affecting Salmonella enterica serovar Typhimurium virulence

    NASA Technical Reports Server (NTRS)

    Nickerson, C. A.; Ott, C. M.; Mister, S. J.; Morrow, B. J.; Burns-Keliher, L.; Pierson, D. L.

    2000-01-01

    The effects of spaceflight on the infectious disease process have only been studied at the level of the host immune response and indicate a blunting of the immune mechanism in humans and animals. Accordingly, it is necessary to assess potential changes in microbial virulence associated with spaceflight which may impact the probability of in-flight infectious disease. In this study, we investigated the effect of altered gravitational vectors on Salmonella virulence in mice. Salmonella enterica serovar Typhimurium grown under modeled microgravity (MMG) were more virulent and were recovered in higher numbers from the murine spleen and liver following oral infection compared to organisms grown under normal gravity. Furthermore, MMG-grown salmonellae were more resistant to acid stress and macrophage killing and exhibited significant differences in protein synthesis than did normal-gravity-grown cells. Our results indicate that the environment created by simulated microgravity represents a novel environmental regulatory factor of Salmonella virulence.

  7. Probabilistic Tsunami Hazard Assessment along Nankai Trough (1) An assessment based on the information of the forthcoming earthquake that Earthquake Research Committee(2013) evaluated

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.

    2015-12-01

    The Earthquake Research Committee (ERC)/HERP, Government of Japan (2013) revised its long-term evaluation of the forthcoming large earthquake along the Nankai Trough; the next earthquake is estimated to be of M8 to 9 class, and the probability (P30) that it will occur within the next 30 years (from Jan. 1, 2013) is 60% to 70%. In this study, we assess tsunami hazards (maximum coastal tsunami heights) in the near future from the next earthquake along the Nankai Trough using a probabilistic approach, on the basis of the ERC (2013) report. The probabilistic tsunami hazard assessment that we applied is as follows. (1) Characterized earthquake fault models (CEFMs) are constructed in each of the 15 hypothetical source areas (HSAs) identified by ERC (2013). The characterization rule follows Toyama et al. (2015, JpGU); as a result, we obtained a total of 1441 CEFMs. (2) We calculate tsunamis due to the CEFMs by solving the nonlinear, finite-amplitude, long-wave equations with advection and bottom-friction terms using a finite-difference method; run-up computation on land is included. (3) A time-predictable model predicts that the recurrence interval of the present seismic cycle is T = 88.2 years (ERC, 2013). We fix P30 = 67% by applying a renewal process based on the BPT distribution with mean T and aperiodicity alpha = 0.24. (4) We divide the probability P30 into P30(i) for the i-th subgroup, consisting of the earthquakes occurring in each of the 15 HSAs, following a probability re-distribution concept (ERC, 2014). Each earthquake (CEFM) in the i-th subgroup is then assigned a probability P30(i)/N, where N is the number of CEFMs in that subgroup. Note that this re-distribution of the probability is only tentative, because present seismological knowledge is not deep enough to constrain it; an epistemic logic-tree approach may be required in the future. (5) We synthesize tsunami hazard curves at every evaluation point on the coast by integrating the 30-year occurrence probabilities P30(i) for all earthquakes (CEFMs) with the calculated maximum coastal tsunami heights. In the synthesis, aleatory uncertainties relating to incompleteness of the governing equations, CEFM modeling, bathymetry and topography data, etc., are modeled assuming a log-normal probability distribution. Examples of tsunami hazard curves will be presented.
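
    The 30-year conditional probability in step (3) can be reproduced from the BPT renewal model alone. The sketch below (a rough illustration, not the authors' code) maps BPT(T, alpha) onto scipy's inverse-Gaussian distribution via mu = alpha^2 and scale = T/alpha^2, so that the mean is T and the aperiodicity is alpha; the elapsed time since the last event is a made-up value.

        from scipy.stats import invgauss

        def bpt_conditional_probability(t_elapsed, window, mean_recurrence, aperiodicity):
            """P(event in (t, t + window] | no event by t) for a BPT renewal process.
            BPT(T, alpha) is the inverse Gaussian with mean T and aperiodicity alpha,
            mapped to scipy.stats.invgauss via mu = alpha**2 and scale = T / alpha**2."""
            dist = invgauss(mu=aperiodicity**2, scale=mean_recurrence / aperiodicity**2)
            f_t = dist.cdf(t_elapsed)
            f_tw = dist.cdf(t_elapsed + window)
            return (f_tw - f_t) / (1.0 - f_t)

        # T = 88.2 years and alpha = 0.24 as quoted above; the elapsed time since the
        # last event is a placeholder chosen only to exercise the function.
        print(bpt_conditional_probability(t_elapsed=68.0, window=30.0,
                                          mean_recurrence=88.2, aperiodicity=0.24))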

  8. Mixed membership trajectory models of cognitive impairment in the multicenter AIDS cohort study.

    PubMed

    Molsberry, Samantha A; Lecci, Fabrizio; Kingsley, Lawrence; Junker, Brian; Reynolds, Sandra; Goodkin, Karl; Levine, Andrew J; Martin, Eileen; Miller, Eric N; Munro, Cynthia A; Ragin, Ann; Sacktor, Ned; Becker, James T

    2015-03-27

    The longitudinal trajectories that individuals may take from a state of normal cognition to HIV-associated dementia are unknown. We applied a novel statistical methodology to identify trajectories to cognitive impairment, and factors that affected the 'closeness' of an individual to one of the canonical trajectories. The Multicenter AIDS Cohort Study (MACS) is a four-site longitudinal study of the natural and treated history of HIV disease among gay and bisexual men. Using data from 3892 men (both HIV-infected and HIV-uninfected) enrolled in the neuropsychology substudy of the MACS, a Mixed Membership Trajectory Model (MMTM) was applied to capture the pathways from normal cognitive function to mild impairment to severe impairment. MMTMs allow the data to identify canonical pathways and to model the effects of risk factors on an individual's 'closeness' to these trajectories. First, we identified three distinct trajectories to cognitive impairment: 'normal aging' (low probability of mild impairment until age 60); 'premature aging' (mild impairment starting at age 45-50); and 'unhealthy' (mild impairment in the 20s and 30s) profiles. Second, clinically defined AIDS, and not simply HIV disease, was associated with closeness to the premature aging trajectory, and, third, hepatitis-C infection, depression, race, recruitment cohort and confounding conditions all affected an individual's closeness to these trajectories. These results provide new insight into the natural history of cognitive dysfunction in HIV disease and provide evidence for a potential difference in the pathophysiology of the development of cognitive impairment based on trajectories to impairment.

  9. The short-term variability of bacterial vaginosis diagnosed by Nugent Gram stain criteria among sexually active women in Rakai, Uganda.

    PubMed

    Thoma, Marie E; Gray, Ronald H; Kiwanuka, Noah; Aluma, Simon; Wang, Mei-Cheng; Sewankambo, Nelson; Wawer, Maria J

    2011-02-01

    Studies evaluating clinical and behavioral factors related to short-term fluctuations in vaginal microbiota are limited. We sought to describe changes in vaginal microbiota evaluated by Gram stain and assess factors associated with progression to and resolution of bacterial vaginosis (BV) at weekly intervals. A cohort of 255 sexually experienced, postmenarcheal women provided self-collected vaginal swabs to assess vaginal microbiota by Nugent score criteria at weekly visits for up to 2 years, contributing 16,757 sequential observations. Absolute differences in Nugent scores (0-10) and transition probabilities of vaginal microbiota states classified by Nugent score into normal (0-3), intermediate (4-6), and BV (7-10) between visits were estimated. Allowing each woman to serve as her own control, weekly time-varying factors associated with progression from normal microbiota to BV and resolution of BV to normal microbiota were estimated using conditional logistic regression. The distribution of absolute differences in Nugent scores was fairly symmetric with a mode of 0 (no change) and a standard deviation of 2.64. Transition probabilities showed that weekly persistence was highest for the normal (76.1%) and BV (73.6%) states, whereas intermediate states had similar probabilities of progression (36.6%), resolution (36.0%), and persistence (27.4%). Weekly fluctuation between normal and BV states was associated with menstrual cycle phase, recency of sex, treatment for vaginal symptoms, pregnancy, and prior Nugent score. Weekly changes in vaginal microbiota were common in this population. Clinical and behavioral characteristics were associated with vaginal microbiota transitioning, which may be used to inform future studies and clinical management of BV.
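
    As an illustration of the transition-probability analysis described above, the following sketch estimates a three-state weekly transition matrix (normal, intermediate, BV) from per-woman sequences of Nugent categories; the sequences shown are hypothetical, not study data.

```python
# Sketch: estimating weekly transition probabilities between Nugent-score states.
# States: 0 = normal (0-3), 1 = intermediate (4-6), 2 = BV (7-10).
# The weekly sequences below are hypothetical.
import numpy as np

sequences = [
    [0, 0, 1, 2, 2, 1, 0],   # one woman's weekly states
    [2, 2, 2, 1, 0, 0, 0],   # another woman's weekly states
]

counts = np.zeros((3, 3))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1            # count observed week-to-week transitions

# Row-normalize to get P(state next week | state this week)
P = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P, 3))
```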

  10. A diagnostic approach in Alzheimer's disease using three-dimensional stereotactic surface projections of Fluorine-18-FDG PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minoshima, S.; Frey, K.A.; Koeppe, R.A.

    1995-07-01

    To improve the diagnostic performance of PET as an aid in evaluating patients suspected of having Alzheimer's disease, the authors developed a fully automated method which generates comprehensive image presentations and objective diagnostic indices. Fluorine-18-fluorodeoxyglucose PET image sets were collected from 37 patients with probable Alzheimer's disease (including questionable and mild dementia), 22 normal subjects and 5 patients with cerebrovascular disease. Following stereotactic anatomic standardization, metabolic activity on an individual's PET image set was extracted to a set of predefined surface pixels (three-dimensional stereotactic surface projection, 3D-SSP), which was used in the subsequent analysis. A normal database was created by averaging extracted datasets of the normal subjects. Patients' datasets were compared individually with the normal database by calculating a Z-score on a pixel-by-pixel basis and were displayed in 3D-SSP views for visual inspections. Diagnostic indices were then generated based on averaged Z-scores for the association cortices. Patterns and severities of metabolic reduction in patients with probable Alzheimer's disease were seen in the standard 3D-SSP views of extracted raw data and statistical Z-scores. When discriminating patients with probable Alzheimer's disease from normal subjects, diagnostic indices of the parietal association cortex and unilaterally averaged parietal-temporal-frontal cortex showed sensitivities of 95% and 97%, respectively, with a specificity of 100%. Neither index yielded false-positive results for cerebrovascular disease. 3D-SSP enables quantitative data extraction and reliable localization of metabolic abnormalities by means of stereotactic coordinates. The proposed method is a promising approach for interpreting functional brain PET scans. 45 refs., 5 figs.
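
    A minimal sketch of the pixel-by-pixel Z-score comparison against a normal database is given below; the array shapes, random data, and cortical mask are assumptions for illustration, not the 3D-SSP implementation itself.

```python
# Sketch: pixel-by-pixel Z-scores of a patient's surface-projected values against
# a normal database. Arrays, shapes, and the mask are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
normals = rng.random((22, 15000))   # 22 normal subjects x surface pixels
patient = rng.random(15000)         # one patient's extracted surface values

mean_n = normals.mean(axis=0)
sd_n = normals.std(axis=0, ddof=1)

# Defined so that positive Z indicates metabolism below the normal mean
z = (mean_n - patient) / sd_n

# A simple diagnostic index: average Z over a (hypothetical) association-cortex mask
parietal_mask = np.zeros(15000, dtype=bool)
parietal_mask[:2000] = True
index_parietal = z[parietal_mask].mean()
print(round(index_parietal, 3))
```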

  11. Aberrant rhythmic expression of cryptochrome2 regulates the radiosensitivity of rat gliomas.

    PubMed

    Fan, Wang; Caiyan, Li; Ling, Zhu; Jiayun, Zhao

    2017-09-29

    In this study, we investigated the role of the clock regulatory protein cryptochrome 2 (Cry2) in determining the radiosensitivity of C6 glioma cells in a rat model. We observed that Cry2 mRNA and protein levels showed aberrant rhythmic periodicity of 8 h in glioma tissues, compared to 24 h in normal brain tissue. Cry2 mRNA and protein levels did not respond to irradiation in normal tissues, but both were increased at the ZT4 (low Cry2) and ZT8 (high Cry2) time points in gliomas. Immunohistochemical staining of PCNA and TUNEL assays demonstrated that high Cry2 expression in glioma tissues was associated with increased cell proliferation and decreased apoptosis. Western blot analysis showed that glioma cell fate was independent of p53, but was probably dependent on p73, which was more highly expressed at ZT4 (low Cry2) than at ZT8 (high Cry2). Levels of both p53 and p73 were unaffected by irradiation in normal brain tissues. These findings suggest that aberrant rhythmic expression of Cry2 influences radiosensitivity in rat gliomas.

  12. Estimation of distribution overlap of urn models.

    PubMed

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in [Formula: see text] draws from another distribution. We show our estimator of dissimilarity to be a [Formula: see text]-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of [Formula: see text]. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over [Formula: see text], we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
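
    The sketch below is a plain Monte Carlo illustration of the dissimilarity parameter, the probability that one draw from P is absent from n draws from Q, together with its exact value for known distributions. It is not the paper's U-statistic estimator; the distributions and n are hypothetical.

```python
# Illustration only (not the paper's U-statistic estimator): Monte Carlo estimate
# of the dissimilarity -- the probability that a single draw from P is not
# observed among n draws from Q -- compared with the exact value sum_i p_i(1-q_i)^n.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([0.6, 0.3, 0.1, 0.0])   # hypothetical distribution P over 4 categories
Q = np.array([0.1, 0.1, 0.4, 0.4])   # hypothetical distribution Q, same categories
n, trials = 10, 100_000

hits = 0
for _ in range(trials):
    x = rng.choice(len(P), p=P)                  # one draw from P
    sample_q = rng.choice(len(Q), size=n, p=Q)   # n draws from Q
    hits += x not in sample_q

print("Monte Carlo estimate:", hits / trials)
print("exact dissimilarity :", np.sum(P * (1 - Q) ** n))
```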

  13. Semiparametric temporal process regression of survival-out-of-hospital.

    PubMed

    Zhan, Tianyu; Schaubel, Douglas E

    2018-05-23

    The recurrent/terminal event data structure has undergone considerable methodological development in the last 10-15 years. An example of the data structure that has arisen with increasing frequency involves the recurrent event being hospitalization and the terminal event being death. We consider the response Survival-Out-of-Hospital, defined as a temporal process (indicator function) taking the value 1 when the subject is currently alive and not hospitalized, and 0 otherwise. Survival-Out-of-Hospital is a useful alternative strategy for the analysis of hospitalization/survival in the chronic disease setting, with the response variate representing a refinement to survival time through the incorporation of an objective quality-of-life component. The semiparametric model we consider assumes multiplicative covariate effects and leaves unspecified the baseline probability of being alive-and-out-of-hospital. Using zero-mean estimating equations, the proposed regression parameter estimator can be computed without estimating the unspecified baseline probability process, although baseline probabilities can subsequently be estimated for any time point within the support of the censoring distribution. We demonstrate that the regression parameter estimator is asymptotically normal, and that the baseline probability function estimator converges to a Gaussian process. Simulation studies are performed to show that our estimating procedures have satisfactory finite-sample performance. The proposed methods are applied to the Dialysis Outcomes and Practice Patterns Study (DOPPS), an international end-stage renal disease study.

  14. Carotid artery intima-media thickness measurement in children with normal and increased body mass index: a comparison of three techniques.

    PubMed

    El Jalbout, Ramy; Cloutier, Guy; Cardinal, Marie-Hélène Roy; Henderson, Mélanie; Lapierre, Chantale; Soulez, Gilles; Dubois, Josée

    2018-05-09

    Common carotid artery intima-media thickness is a marker of subclinical atherosclerosis. In children, increased intima-media thickness is associated with obesity and the risk of cardiovascular events in adulthood. To compare intima-media thickness measurements using B-mode ultrasound, radiofrequency (RF) echo tracking, and RF speckle probability distribution in children with normal and increased body mass index (BMI). We prospectively measured intima-media thickness in 120 children randomly selected from two groups of a longitudinal cohort: normal BMI and increased BMI, defined by BMI ≥85th percentile for age and gender. We followed the Mannheim recommendations. We used M'Ath-Std for automated B-mode imaging, M-line processing of RF signal amplitude for RF echo tracking, and RF signal segmentation and averaging using probability distributions defining image speckle. Statistical analysis included Wilcoxon and Mann-Whitney tests, and Pearson correlation coefficient and intra-class correlation coefficient (ICC). Children were 10-13 years old (mean: 11.7 years); 61% were boys. The mean age was 11.4 years (range: 10.0-13.1 years) for the normal BMI group and 12.0 years (range: 10.1-13.5 years) for the increased BMI group. The normal BMI group included 58% boys and the increased BMI group 63% boys. The RF echo tracking method was successful in 79 children, as opposed to 114 for the B-mode method and all 120 for the probability distribution method. Techniques were weakly correlated: ICC=0.34 (95% confidence interval [CI]: 0.27-0.39). Intima-media thickness was significantly higher in the increased BMI group than in the normal BMI group using the RF techniques, and the difference was borderline for the B-mode technique. Mean differences between weight groups were: B-mode, 0.02 mm (95% CI: 0.00 to 0.04), P=0.05; RF echo tracking, 0.03 mm (95% CI: 0.01 to 0.05), P=0.01; and RF speckle probability distribution, 0.03 mm (95% CI: 0.01 to 0.05), P=0.002. Though the techniques are not interchangeable, all showed increased intima-media thickness in children with increased BMI. The RF echo tracking method had the lowest success rate in calculating intima-media thickness. For patient follow-up and cohort comparisons, the same technique should be used throughout.

  15. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  16. Computer program determines exact two-sided tolerance limits for normal distributions

    NASA Technical Reports Server (NTRS)

    Friedman, H. A.; Webb, S. R.

    1968-01-01

    Computer program determines by numerical integration the exact statistical two-sided tolerance limits, when the proportion between the limits is at least a specified number. The program is limited to situations in which the underlying probability distribution for the population sampled is the normal distribution with unknown mean and variance.
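
    The program computes exact limits by numerical integration; as a rough stand-in, the sketch below uses Howe's closed-form approximation to the two-sided tolerance factor, which is an approximation rather than the exact method described, applied to a hypothetical sample.

```python
# Sketch: approximate two-sided normal tolerance limits (Howe's approximation),
# not the exact numerical-integration procedure described in the abstract.
import numpy as np
from scipy.stats import norm, chi2

def tolerance_factor(n, coverage=0.90, confidence=0.95):
    """Approximate k so that xbar +/- k*s covers `coverage` of the population
    with the stated confidence (Howe's approximation)."""
    nu = n - 1
    z = norm.ppf((1 + coverage) / 2)
    chi2_low = chi2.ppf(1 - confidence, nu)   # lower-tail chi-square quantile
    return z * np.sqrt(nu * (1 + 1 / n) / chi2_low)

x = np.array([9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 9.7, 10.3])  # hypothetical sample
k = tolerance_factor(len(x))
lo, hi = x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)
print(f"k = {k:.3f}, tolerance limits = ({lo:.3f}, {hi:.3f})")
```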

  17. How to Introduce Historically the Normal Distribution in Engineering Education: A Classroom Experiment

    ERIC Educational Resources Information Center

    Blanco, Monica; Ginovart, Marta

    2010-01-01

    Little has been explored with regard to introducing historical aspects in the undergraduate statistics classroom in engineering studies. This article focuses on the design, implementation and assessment of a specific activity concerning the introduction of the normal probability curve and related aspects from a historical dimension. Following a…

  18. Laplacian normalization and random walk on heterogeneous networks for disease-gene prioritization.

    PubMed

    Zhao, Zhi-Qin; Han, Guo-Sheng; Yu, Zu-Guo; Li, Jinyan

    2015-08-01

    Random walk on heterogeneous networks is a recently emerging approach to effective disease gene prioritization. Laplacian normalization is a technique capable of normalizing the weight of edges in a network. We use this technique to normalize the gene matrix and the phenotype matrix before the construction of the heterogeneous network, and also use this idea to define the transition matrices of the heterogeneous network. Our method has remarkably better performance than the existing methods for recovering known gene-phenotype relationships. The Shannon information entropy of the distribution of the transition probabilities in our networks is found to be smaller than in the networks constructed by the existing methods, implying that a higher number of top-ranked genes can be verified as disease genes. In fact, the most probable gene-phenotype relationships ranked within the top 3 or top 5 in our gene lists can be confirmed by the OMIM database for many cases. Our algorithms have shown remarkably superior performance over the state-of-the-art algorithms for recovering gene-phenotype relationships. All Matlab codes are available upon email request. Copyright © 2015 Elsevier Ltd. All rights reserved.
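
    A minimal sketch of the two ingredients named above, symmetric Laplacian normalization of a weighted adjacency matrix and a random walk with restart, is shown below on a toy matrix; it is not the authors' exact construction of the heterogeneous gene-phenotype network.

```python
# Sketch: Laplacian (symmetric) normalization of a weighted adjacency matrix and a
# random walk with restart on it. Toy 4-node network, not the authors' heterogeneous
# gene-phenotype network.
import numpy as np

W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 0.],
              [0., 1., 0., 0.]])          # hypothetical weighted adjacency matrix

d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
S = D_inv_sqrt @ W @ D_inv_sqrt           # Laplacian-normalized weights

def random_walk_with_restart(S, seed, r=0.7, tol=1e-10):
    """Iterate p <- (1 - r) * S @ p + r * p0 until convergence."""
    p0 = np.zeros(S.shape[0])
    p0[seed] = 1.0
    p = p0.copy()
    while True:
        p_new = (1 - r) * S @ p + r * p0
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

scores = random_walk_with_restart(S, seed=0)  # ranking scores relative to node 0
print(np.round(scores, 4))
```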

  19. Improving CSF biomarker accuracy in predicting prevalent and incident Alzheimer disease

    PubMed Central

    Fagan, A.M.; Williams, M.M.; Ghoshal, N.; Aeschleman, M.; Grant, E.A.; Marcus, D.S.; Mintun, M.A.; Holtzman, D.M.; Morris, J.C.

    2011-01-01

    Objective: To investigate factors, including cognitive and brain reserve, which may independently predict prevalent and incident dementia of the Alzheimer type (DAT) and to determine whether inclusion of identified factors increases the predictive accuracy of the CSF biomarkers Aβ42, tau, ptau181, tau/Aβ42, and ptau181/Aβ42. Methods: Logistic regression identified variables that predicted prevalent DAT when considered together with each CSF biomarker in a cross-sectional sample of 201 participants with normal cognition and 46 with DAT. The area under the receiver operating characteristic curve (AUC) from the resulting model was compared with the AUC generated using the biomarker alone. In a second sample with normal cognition at baseline and longitudinal data available (n = 213), Cox proportional hazards models identified variables that predicted incident DAT together with each biomarker, and the models' concordance probability estimate (CPE) was compared with the CPE generated using the biomarker alone. Results: APOE genotype including an ε4 allele, male gender, and smaller normalized whole brain volumes (nWBV) were cross-sectionally associated with DAT when considered together with every biomarker. In the longitudinal sample (mean follow-up = 3.2 years), 14 participants (6.6%) developed DAT. Older age predicted a faster time to DAT in every model, and greater education predicted a slower time in 4 of 5 models. Inclusion of ancillary variables resulted in better cross-sectional prediction of DAT for all biomarkers (p < 0.0021), and better longitudinal prediction for 4 of 5 biomarkers (p < 0.0022). Conclusions: The predictive accuracy of CSF biomarkers is improved by including age, education, and nWBV in analyses. PMID:21228296

  20. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled with a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices or statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution appears nonetheless as the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
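
    A minimal sketch of the two-component generator described above, a first-order two-state Markov chain for wet/dry occurrence combined with a mixed-exponential amount model, is given below; all parameter values are hypothetical, not fitted to the Quebec stations.

```python
# Sketch: two-component daily precipitation generator -- first-order two-state
# Markov chain for occurrence plus a mixed-exponential amount model.
# All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
p_wd = 0.30                            # P(wet today | dry yesterday)
p_ww = 0.65                            # P(wet today | wet yesterday)
alpha, beta1, beta2 = 0.6, 2.0, 12.0   # mixing weight and the two mean amounts [mm]

def simulate(n_days):
    wet, amounts = False, []
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        if wet:
            mean = beta1 if rng.random() < alpha else beta2
            amounts.append(rng.exponential(mean))
        else:
            amounts.append(0.0)
    return np.array(amounts)

precip = simulate(365)
print(f"wet-day fraction: {(precip > 0).mean():.2f}, "
      f"mean wet-day amount: {precip[precip > 0].mean():.1f} mm")
```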

  1. A Cellular Automaton model for pedestrian counterflow with swapping

    NASA Astrophysics Data System (ADS)

    Tao, Y. Z.; Dong, L. Y.

    2017-06-01

    In this paper, we propose a new floor field Cellular Automaton (CA) model that considers the swapping behavior of pedestrians. Neighboring pedestrians moving in opposite directions swap positions with a probability determined by the linear density of the pedestrian flow. The swapping, which happens simultaneously with the normal movement, is introduced to eliminate gridlock in low-density regions. Numerical results show that the fundamental diagram is in good agreement with measured data. The model is then applied to investigate counterflow, and four typical states are found: free flow, lane, intermediate and congestion states. More attention is paid to the intermediate state, in which lane formation and local congestion switch in an irregular manner. The swapping plays a vital role in reducing gridlock. Furthermore, the influence of the corridor size and individuals' eyesight on counterflow is discussed in detail.

  2. Biological effects and equivalent doses in radiotherapy: A software solution

    PubMed Central

    Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline

    2013-01-01

    Background The limits of TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim We, therefore, propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions The results are obtained from an algorithm that minimizes an ad-hoc cost function, and are then compared to equivalent doses computed using standard calculators in seven French radiotherapy centers. PMID:24936319
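
    For orientation, the sketch below computes the standard linear-quadratic biologically effective dose (BED) and 2-Gy equivalent dose (EQD2); it is a simplified stand-in, not Astrahan's full linear-quadratic-linear formulation or the Lyman NTCP model used by the interface.

```python
# Sketch: standard linear-quadratic BED and EQD2 (a simplified stand-in, not the
# linear-quadratic-linear model of Astrahan or the Lyman NTCP model).
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2-Gy fractions: EQD2 = BED / (1 + 2 / (alpha/beta))."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1 + 2.0 / alpha_beta)

# Example: 10 x 3 Gy scheme, late-responding tissue with alpha/beta = 3 Gy
print(bed(10, 3.0, 3.0))   # 60.0 (Gy_3)
print(eqd2(10, 3.0, 3.0))  # 36.0 Gy
```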

  3. Probability based models for estimation of wildfire risk

    Treesearch

    Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit

    2004-01-01

    We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...

  4. A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca

    2014-02-15

    Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points in improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.

  5. The Statistical Value of Raw Fluorescence Signal in Luminex xMAP Based Multiplex Immunoassays

    PubMed Central

    Breen, Edmond J.; Tan, Woei; Khan, Alamgir

    2016-01-01

    Tissue samples (plasma, saliva, serum or urine) from 169 patients classified as either normal or having one of seven possible diseases are analysed across three 96-well plates for the presence of 37 analytes using cytokine inflammation multiplexed immunoassay panels. Censoring of the concentration data caused problems for analysis of the low-abundance analytes. Using fluorescence analysis instead of concentration-based analysis allowed these low-abundance analytes to be analysed. Mixed-effects analysis of the resulting fluorescence and concentration responses reveals that the combination of censoring and mapping the fluorescence responses to concentration values through a 5PL curve changed the observed analyte concentrations. Simulation verifies this by showing a dependence of the observed analyte concentration levels on the mean fluorescence response and its distribution. Departures from normality in the fluorescence responses can lead to differences in concentration estimates and unreliable probabilities for treatment effects. It is seen that when fluorescence responses are normally distributed, fluorescence-based t-tests have greater statistical power than concentration-based t-tests. We add evidence that the fluorescence response, unlike concentration values, does not require censoring, and we show with respect to differential analysis of the fluorescence responses that background correction is not required. PMID:27243383
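
    Since the analysis hinges on how fluorescence is mapped to concentration through a 5PL curve, the sketch below writes out a generic 5-parameter logistic and its inverse; the parameter values are hypothetical, not fitted standard-curve parameters from the study.

```python
# Sketch: generic 5-parameter logistic (5PL) standard curve and its inverse, as used
# to map fluorescence to concentration. Parameter values are hypothetical.
def fivePL(x, a, b, c, d, g):
    """Response at concentration x: a = response at zero, d = response at infinity."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def fivePL_inverse(y, a, b, c, d, g):
    """Concentration giving response y (valid strictly between the asymptotes)."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

a, b, c, d, g = 50.0, 1.2, 150.0, 30000.0, 0.8   # hypothetical fitted parameters
conc = 100.0
mfi = fivePL(conc, a, b, c, d, g)
print(mfi, fivePL_inverse(mfi, a, b, c, d, g))   # round-trips back to ~100
```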

  6. Similarity based false-positive reduction for breast cancer using radiographic and pathologic imaging features

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Samala, Ravi K.; Zhang, Jianying; Qian, Wei

    2010-03-01

    Mammography reading by radiologists and breast tissue image interpretation by pathologists often lead to high False Positive (FP) rates. Similarly, current Computer Aided Diagnosis (CADx) methods tend to concentrate on sensitivity, thus increasing the FP rate. A novel similarity-based method is introduced here to decrease the FP rate in the diagnosis of microcalcifications. The method employs Principal Component Analysis (PCA) and similarity metrics to achieve this goal. The training and testing sets are divided into generalized (Normal and Abnormal) and more specific (Abnormal, Normal, Benign) classes. The performance of the method as a standalone classification system is evaluated in both cases (general and specific). In another approach, the probability of each case belonging to a particular class is calculated. If the probabilities are too close to classify, the augmented CADx system can be instructed to perform a detailed analysis of such cases. For normal cases with high probability, no further processing is necessary, thus reducing the computation time. Hence, this method can be employed in cascade with CADx to reduce the FP rate and also avoid unnecessary computation time. Using this methodology, false positive rates of 8% and 11% are achieved for mammography and cellular images, respectively.

  7. Anticipating the severity of the fire season in Northern Portugal using statistical models based on meteorological indices of fire danger

    NASA Astrophysics Data System (ADS)

    Nunes, Sílvia A.; DaCamara, Carlos C.; Turkman, Kamil F.; Ermida, Sofia L.; Calado, Teresa J.

    2017-04-01

    As in other regions of Mediterranean Europe, climate and weather are major drivers of fire activity in Portugal. The aim of the present study is to assess the role played by meteorological factors in the inter-annual variability of burned area over a region of Portugal characterized by large fire activity. Monthly cumulated values of burned area in August are obtained from the fire database of ICNF, the Portuguese authority for forests. The role of meteorological factors is characterized by means of the Daily Severity Rating (DSR), an index of meteorological fire danger derived from meteorological fields of the ECMWF Interim Reanalysis. The study area is characterized by the predominance of forest, with high percentages of maritime pine and eucalyptus, two species with high flammability in summer. The time series of recorded burned area in August during 1980-2011 is highly correlated (correlation coefficient of 0.93) with the one for the whole of Portugal. First, a normal distribution model is fitted to the 32-year sample of decimal logarithms of monthly burned area. The model is improved by introducing two covariates: (1) the top-down meteorological factor (DSRtd), which consists of daily cumulated values of DSR from April 1 to July 31 and may be viewed as the cumulated stress on vegetation due to meteorological conditions during the pre-fire season; (2) the bottom-up factor (DSRbu), which consists of the square root of the mean of the squared daily deviations (restricted to days with positive departures of DSR from the corresponding long-term mean) and may be viewed as the contribution of days characterized by extreme weather conditions favoring the onset and spreading of wildfires. Three different statistical models are then developed: the "climate anomaly" model, using DSRtd as covariate; the "weather anomaly" model, using DSRbu as covariate; and the "combined" model, using both variables as covariates. These models are used to define background fire danger, fire weather danger and combined fire danger, respectively quantifying the contribution of DSRtd, DSRbu and both covariates to increasing or decreasing the probability of having extremely high/low values of burned area in August. Using the information obtained with the "combined" model, it is possible to calculate the minimum/maximum value of DSRbu required for a given year to be modelled as severe/weak. The corresponding probability is then computed using a normal distribution fitted to the DSRbu series; if the probability is below 20%, the year is considered as not belonging to that classification. This classification correctly identifies 34 out of the 36 years studied. These results can be of great use to forest managers and firefighters when deciding on the best fire prevention measures and where to allocate resources.

  8. Universal characteristics of fractal fluctuations in prime number distribution

    NASA Astrophysics Data System (ADS)

    Selvam, A. M.

    2014-11-01

    The frequency of occurrence of prime numbers at unit number spacing intervals exhibits self-similar fractal fluctuations concomitant with inverse power law form for power spectrum generic to dynamical systems in nature such as fluid flows, stock market fluctuations and population dynamics. The physics of long-range correlations exhibited by fractals is not yet identified. A recently developed general systems theory visualizes the eddy continuum underlying fractals to result from the growth of large eddies as the integrated mean of enclosed small scale eddies, thereby generating a hierarchy of eddy circulations or an inter-connected network with associated long-range correlations. The model predictions are as follows: (1) The probability distribution and power spectrum of fractals follow the same inverse power law which is a function of the golden mean. The predicted inverse power law distribution is very close to the statistical normal distribution for fluctuations within two standard deviations from the mean of the distribution. (2) Fractals signify quantum-like chaos since variance spectrum represents probability density distribution, a characteristic of quantum systems such as electron or photon. (3) Fractal fluctuations of frequency distribution of prime numbers signify spontaneous organization of underlying continuum number field into the ordered pattern of the quasiperiodic Penrose tiling pattern. The model predictions are in agreement with the probability distributions and power spectra for different sets of frequency of occurrence of prime numbers at unit number interval for successive 1000 numbers. Prime numbers in the first 10 million numbers were used for the study.

  9. Diagnostic Performance and Utility of Quantitative EEG Analyses in Delirium: Confirmatory Results From a Large Retrospective Case-Control Study.

    PubMed

    Fleischmann, Robert; Tränkner, Steffi; Bathe-Peters, Rouven; Rönnefarth, Maria; Schmidt, Sein; Schreiber, Stephan J; Brandt, Stephan A

    2018-03-01

    The lack of objective disease markers is a major cause of misdiagnosis and nonstandardized approaches in delirium. Recent studies conducted in well-selected patients and confined study environments suggest that quantitative electroencephalography (qEEG) can provide such markers. We hypothesize that qEEG helps remedy diagnostic uncertainty not only in well-defined study cohorts but also in a heterogeneous hospital population. In this retrospective case-control study, EEG power spectra of delirious patients and age-/gender-matched controls (n = 31 and n = 345, respectively) were fitted in a linear model to test their performance as binary classifiers. We subsequently evaluated the diagnostic performance of the best classifiers in control samples with normal EEGs (n = 534) and real-world samples including pathologic findings (n = 4294). Test reliability was estimated through split-half analyses. We found that the combination of spectral power at F3-P4 at 2 Hz (area under the curve [AUC] = .994) and C3-O1 at 19 Hz (AUC = .993) provided a sensitivity of 100% and a specificity of 99% to identify delirious patients among normal controls. These classifiers also yielded a false positive rate as low as 5% and increased the pretest probability of being delirious by 57% in an unselected real-world sample. Split-half reliabilities were .98 and .99, respectively. This retrospective study yielded preliminary evidence that qEEG provides excellent diagnostic performance to identify delirious patients even outside confined study environments. It furthermore revealed reduced beta power as a novel specific finding in delirium and that a normal EEG excludes delirium. Prospective studies including parameters of pretest probability and delirium severity are required to elaborate on these promising findings.

  10. Financial derivative pricing under probability operator via Esscher transfomation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achi, Godswill U., E-mail: achigods@yahoo.com

    2014-10-24

    The problem of pricing contingent claims has been extensively studied for non-Gaussian models, and in particular, the Black-Scholes formula has been derived for the NIG asset pricing model. This approach was first developed in insurance pricing [9], where the original distortion function was defined in terms of the normal distribution, and was later studied [6] by comparing standard Black-Scholes contingent pricing and distortion-based contingent pricing. In this paper, we aim at using distortion operators based on the Cauchy distribution under a simple transformation to price contingent claims. We also show that we can recover the Black-Scholes formula using this distribution. Similarly, in a financial market in which the asset price is represented by a stochastic differential equation with respect to Brownian motion, a price mechanism based on the characteristic Esscher measure can generate approximately arbitrage-free financial derivative prices. The price representation derived involves the probability Esscher measure and the Esscher martingale measure, and under a new complex-valued measure φ(u) evaluated at the characteristic exponents φ_x(u) of X_t we recover the Black-Scholes formula for financial derivative prices.

  11. Empirical likelihood method for non-ignorable missing data problems.

    PubMed

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness, in which the missingness of a response depends on its own value, is the most difficult missing data problem. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available beyond fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of real AIDS trial data shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on observed data only is biased.

  12. A hydrodynamic study of a slow nova outburst. [computerized simulation of thermonuclear runaway in white dwarf envelope

    NASA Technical Reports Server (NTRS)

    Sparks, W. M.; Starrfield, S.; Truran, J. W.

    1978-01-01

    The paper reports use of a Lagrangian implicit hydrodynamics computer code incorporating a full nuclear-reaction network to follow a thermonuclear runaway in the hydrogen-rich envelope of a 1.25 solar-mass white dwarf. In this evolutionary sequence the envelope was assumed to be of normal (solar) composition and the resulting outburst closely resembles that of the slow nova HR Del. In contrast, previous CNO-enhanced models resemble fast nova outbursts. The slow-nova model ejects material by radiation pressure when the high luminosity of the rekindled hydrogen shell source exceeds the local Eddington luminosity of the outer layers. This is in contrast to the fast nova outburst where ejection is caused by the decay of the beta(+)-unstable nuclei. Nevertheless, radiation pressure probably plays a major role in ejecting material from the fast nova remnants. Therefore, the sequence from slow to fast novae can be interpreted as a sequence of white dwarfs with increasing amounts of enhanced CNO nuclei in their hydrogen envelopes, although other parameters such as the white-dwarf mass and accretion rate probably contribute to the observed variation between novae.

  13. Stochastic approach to the derivation of emission limits for wastewater treatment plants.

    PubMed

    Stransky, D; Kabelkova, I; Bares, V

    2009-01-01

    A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation, with input data defined by probability density distributions, and is solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and in the WWTP effluent follow a log-normal probability distribution. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after an assumed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the calculated WWTP emission limits would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
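
    A minimal sketch of the Monte Carlo mixing calculation with log-normal inputs follows; all numerical values (discharges, concentrations, coefficients of variation) are hypothetical, not the study catchment's data.

```python
# Sketch: Monte Carlo solution of the mixing equation with log-normal inputs.
# All numerical values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def lognormal(mean, cv, size):
    """Sample a log-normal variable specified by its arithmetic mean and CV."""
    sigma2 = np.log(1 + cv ** 2)
    return rng.lognormal(np.log(mean) - sigma2 / 2, np.sqrt(sigma2), size)

Q_creek = lognormal(mean=0.20, cv=0.50, size=N)   # creek discharge [m3/s]
C_creek = lognormal(mean=0.13, cv=0.42, size=N)   # upstream P_tot [mg/l]
Q_wwtp  = lognormal(mean=0.05, cv=0.20, size=N)   # effluent discharge [m3/s]
C_wwtp  = lognormal(mean=1.00, cv=0.30, size=N)   # effluent P_tot [mg/l]

C_mix = (Q_creek * C_creek + Q_wwtp * C_wwtp) / (Q_creek + Q_wwtp)
print("90th percentile downstream:", round(np.percentile(C_mix, 90), 3), "mg/l")
print("P(C_mix > 0.2 mg/l) =", round((C_mix > 0.2).mean(), 3))
```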

  14. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.

  15. Short communication: cheminformatics analysis to identify predictors of antiviral drug penetration into the female genital tract.

    PubMed

    Thompson, Corbin G; Sedykh, Alexander; Nicol, Melanie R; Muratov, Eugene; Fourches, Denis; Tropsha, Alexander; Kashuba, Angela D M

    2014-11-01

    The exposure of oral antiretroviral (ARV) drugs in the female genital tract (FGT) is variable and almost unpredictable. Identifying an efficient method to find compounds with high tissue penetration would streamline the development of regimens for both HIV preexposure prophylaxis and viral reservoir targeting. Here we describe the cheminformatics investigation of diverse drugs with known FGT penetration using cluster analysis and quantitative structure-activity relationships (QSAR) modeling. A literature search over the 1950-2012 period identified 58 compounds (including 21 ARVs and representing 13 drug classes) associated with their actual concentration data for cervical or vaginal tissue, or cervicovaginal fluid. Cluster analysis revealed significant trends in the penetrative ability for certain chemotypes. QSAR models to predict genital tract concentrations normalized to blood plasma concentrations were developed with two machine learning techniques utilizing drugs' molecular descriptors and pharmacokinetic parameters as inputs. The QSAR model with the highest predictive accuracy had R(2)test=0.47. High volume of distribution, high MRP1 substrate probability, and low MRP4 substrate probability were associated with FGT concentrations ≥1.5-fold plasma concentrations. However, due to the limited FGT data available, prediction performances of all models were low. Despite this limitation, we were able to support our findings by correctly predicting the penetration class of rilpivirine and dolutegravir. With more data to enrich the models, we believe these methods could potentially enhance the current approach of clinical testing.

  16. Lung lobe segmentation based on statistical atlas and graph cuts

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a novel method that extracts lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding the treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes. Precise segmentation and recognition of lung lobes are indispensable tasks in computer-aided diagnosis and computer-aided surgery systems. Many methods for lung lobe segmentation have been proposed; however, they only target normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To extract lung lobes in abnormal cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six chest CT image sets, including COPD cases. The Jaccard index was 79.1%.

  17. Reconstruction of time-varying tidal flat topography using optical remote sensing imageries

    NASA Astrophysics Data System (ADS)

    Tseng, Kuo-Hsin; Kuo, Chung-Yen; Lin, Tang-Huang; Huang, Zhi-Cheng; Lin, Yu-Ching; Liao, Wen-Hung; Chen, Chi-Farn

    2017-09-01

    Tidal flats (TFs) occupy approximately 7% of the total coastal shelf area worldwide. However, TFs are unavailable in most global digital elevation models (DEMs) due to the water-impermeable nature of existing remote sensing approaches (e.g., radar used for WorldDEM™ and the Shuttle Radar Topography Mission DEM, and optical stereo-pairs used for the ASTER Global Digital Elevation Map Version 2). This problem can be circumvented by using remote sensing imagery to observe land exposure at different tidal heights during each revisit. This work exploits Landsat-4/-5/-7/-8 Thematic Mapper (TM)/Enhanced TM Plus/Operational Land Imager imagery to reconstruct the topography of a TF, namely the Hsiang-Shan Wetland in Taiwan, to unveil its formation and temporal changes since the 1980s. We first classify water areas by applying the modified normalized difference water index to each Landsat image and normalize the frequency of water exposure to create an inundation probability map. This map is then scaled by tidal amplitudes extracted from the DTU10 tide model to convert the probabilities into actual elevations. After building the DEM of the intertidal zone, a water level-area curve is established, and the accuracy of the DEM is validated against sea level (SL) at the time of each Landsat snapshot. A 22-year (1992-2013) dataset composed of 227 Landsat scenes is analyzed and compared with tide gauge data. The root-mean-square difference in SL reaches 48 cm with a correlation coefficient of 0.93, indicating that the present technique is useful for constructing accurate coastal DEMs and that the products can be utilized for estimating instantaneous SL. This study shows the possibility of exploring the evolution of intertidal zones using an archive of optical remote sensing imagery. The technique developed in the present study potentially helps in quantifying SL from the start of the optical remote sensing era.
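
    The sketch below illustrates the core idea of turning per-pixel water-exposure frequency into elevation through the distribution of tide heights at overpass times; the reflectance stacks, MNDWI threshold, and tide heights are hypothetical placeholders (the study itself uses DTU10 tidal amplitudes).

```python
# Sketch: per-pixel inundation probability from classified scenes, mapped to
# elevation via the tide-height record at overpass times. All arrays, the MNDWI
# threshold, and the tide heights are hypothetical (the study uses DTU10).
import numpy as np

rng = np.random.default_rng(5)
n_scenes, ny, nx = 227, 100, 100
green = rng.random((n_scenes, ny, nx))                # hypothetical reflectance stacks
swir = rng.random((n_scenes, ny, nx))
tide_height = rng.uniform(-2.0, 2.0, n_scenes)        # tide at each overpass [m]

mndwi = (green - swir) / (green + swir + 1e-9)        # modified NDWI per scene
is_water = mndwi > 0.0                                # hypothetical water threshold

p_inundated = is_water.mean(axis=0)                   # fraction of scenes a pixel is wet

# A pixel that is wet in a fraction p of the scenes sits roughly at the tide level
# exceeded with probability p, i.e. the (1 - p) quantile of the tide-height record.
elevation = np.quantile(tide_height, (1.0 - p_inundated).ravel()).reshape(ny, nx)
```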

  18. Intentional avoidance of the esophagus using intensity modulated radiation therapy to reduce dysphagia after palliative thoracic radiation.

    PubMed

    Granton, Patrick V; Palma, David A; Louie, Alexander V

    2017-01-26

    Palliative thoracic radiotherapy is an effective technique to alleviate symptoms of disease burden in advanced-stage lung cancer patients. Previous randomized controlled studies demonstrated a survival benefit in patients with good performance status at radiation doses of 35 Gy10 or greater, but with an increased incidence of esophagitis. The objective of this planning study was to assess the potential impact of esophageal-sparing IMRT (ES-IMRT) compared to the current standard of care using parallel-opposed pair (POP) beams. In this study, 15 patients with lung cancer treated to a dose of 30 Gy in 10 fractions between August 2015 and January 2016 were identified. Radiation treatment plans were optimized using ES-IMRT by limiting the maximum esophageal point dose to 24 Gy. Using published Lyman-Kutcher-Burman normal tissue complication probability (LKB-NTCP) models, both plans were evaluated for the likelihood of esophagitis (≥ grade 2) and pneumonitis (≥ grade 2). Using ES-IMRT, the median esophageal and lung mean doses were reduced from 16 Gy and 8 Gy to 7 Gy and 7 Gy, respectively. Using the LKB models, the theoretical probability of symptomatic esophagitis and pneumonitis was reduced from 13% to 2%, and from 5% to 3%, respectively. The median normalized total dose (NTDmean), accounting for fraction size, for the GTV and PTV of the clinically approved POP plans and the ES-IMRT plans was similar. Advanced radiotherapy techniques such as ES-IMRT may have clinical utility in reducing treatment-related toxicity in advanced lung cancer patients. Our data suggest that the rate of esophagitis can be reduced without compromising local control.
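
    For reference, the sketch below evaluates a Lyman-Kutcher-Burman NTCP from a differential dose-volume histogram via the generalized EUD; the DVH and the parameters (n, m, TD50) are hypothetical, not the published esophagitis or pneumonitis models used in the study.

```python
# Sketch: Lyman-Kutcher-Burman NTCP from a differential DVH via generalized EUD.
# The DVH bins and the parameters (n, m, TD50) are hypothetical values.
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses, volumes, n, m, td50):
    """doses: bin doses [Gy]; volumes: fractional organ volumes (summing to 1)."""
    eud = np.sum(volumes * doses ** (1.0 / n)) ** n   # generalized equivalent uniform dose
    t = (eud - td50) / (m * td50)
    return norm.cdf(t)

doses = np.array([5.0, 10.0, 15.0, 20.0, 24.0])       # hypothetical DVH bin doses
volumes = np.array([0.40, 0.25, 0.20, 0.10, 0.05])    # fractional volumes per bin
print(f"NTCP = {lkb_ntcp(doses, volumes, n=0.44, m=0.32, td50=47.0):.3f}")
```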

  19. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, delta-u and delta-T, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small and larger scale components of the increment probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increment probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data. Using this technique, we show that higher-order moments can be estimated accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions on the diverse dataset. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This approach provides a method of characterizing increment fields with only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including the wind energy and optical wave propagation fields.
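
    A minimal sketch of fitting the NIG distribution to an increment series by maximum likelihood is shown below using scipy's norminvgauss; the data are synthetic draws, not the wind tunnel or tower measurements analyzed in the work.

```python
# Sketch: maximum-likelihood fit of a normal inverse Gaussian (NIG) distribution to
# an increment series, with a simple goodness-of-fit check. Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
increments = stats.norminvgauss.rvs(a=1.5, b=0.3, loc=0.0, scale=1.0,
                                    size=5000, random_state=rng)

# MLE of the four NIG parameters (tail weight a, asymmetry b, location, scale)
a_hat, b_hat, loc_hat, scale_hat = stats.norminvgauss.fit(increments)

# Goodness of fit: Kolmogorov-Smirnov statistic against the fitted NIG
ks = stats.kstest(increments, 'norminvgauss', args=(a_hat, b_hat, loc_hat, scale_hat))
print(np.round([a_hat, b_hat, loc_hat, scale_hat], 3), "KS:", round(ks.statistic, 4))
```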

  20. Neutrino mass priors for cosmology from random matrices

    NASA Astrophysics Data System (ADS)

    Long, Andrew J.; Raveri, Marco; Hu, Wayne; Dodelson, Scott

    2018-02-01

    Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σ mν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π (Σ mν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix Mν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over Mν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σ mν that we interpret as a Bayesian prior probability π (Σ mν). Assuming a basis-invariant probability distribution on Mν, also known as the anarchy hypothesis, we find that π (Σ mν) peaks close to the smallest Σ mν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π (Σ mν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. We present fitting functions for π (Σ mν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
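
    The sketch below is only a rough illustration of the anarchy idea, sampling complex symmetric (Majorana-type) mass matrices with Gaussian entries and histogramming the implied sum of masses; the overall mass scale is arbitrary and the conditioning on the measured splittings used in the paper is omitted.

```python
# Rough illustration only (not the paper's derivation): draw complex symmetric
# Majorana-type matrices with Gaussian entries -- a simple basis-invariant "anarchy"
# ensemble -- and histogram the implied sum of masses. The mass scale is arbitrary
# and conditioning on the measured mass splittings is omitted.
import numpy as np

rng = np.random.default_rng(3)
n_samples = 50_000
sums = np.empty(n_samples)
for i in range(n_samples):
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    M = (A + A.T) / 2                                   # complex symmetric mass matrix
    sums[i] = np.linalg.svd(M, compute_uv=False).sum()  # singular values = masses

hist, edges = np.histogram(sums, bins=60, density=True)
print("distribution peaks near (arbitrary units):", round(edges[np.argmax(hist)], 2))
```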
