Science.gov

Sample records for normal probability model

  1. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
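
    For readers who want to see what such a workflow looks like in practice, the following is a minimal sketch (not the authors' code), assuming a binary xerostomia outcome y and a matrix X of candidate dose-volume and clinical predictors, both hypothetical; it scores an L1-penalised (LASSO-type) logistic NTCP model by repeated cross-validated AUC using scikit-learn.

        # Illustrative only: LASSO-type NTCP modeling with repeated cross-validation.
        # X (n_patients x n_features) and y (0/1 complication outcome) are assumed inputs.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def lasso_ntcp_auc(X, y, C=0.1, n_splits=5, n_repeats=20, seed=1):
            """Mean and spread of cross-validated AUC for an L1-penalised logistic NTCP model."""
            model = make_pipeline(
                StandardScaler(),
                LogisticRegression(penalty="l1", solver="liblinear", C=C),
            )
            cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                         random_state=seed)
            scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
            return scores.mean(), scores.std()

        # Synthetic data standing in for real dose-volume and clinical predictors:
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 12))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=100) > 0).astype(int)
        print(lasso_ntcp_auc(X, y))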

  2. Improving normal tissue complication probability models: the need to adopt a "data-pooling" culture.

    PubMed

    Deasy, Joseph O; Bentzen, Søren M; Jackson, Andrew; Ten Haken, Randall K; Yorke, Ellen D; Constine, Louis S; Sharma, Ashish; Marks, Lawrence B

    2010-03-01

    Clinical studies of the dependence of normal tissue response on dose-volume factors are often confusingly inconsistent, as the QUANTEC reviews demonstrate. A key opportunity to accelerate progress is to begin storing high-quality datasets in repositories. Using available technology, multiple repositories could be conveniently queried, without divulging protected health information, to identify relevant sources of data for further analysis. After obtaining institutional approvals, data could then be pooled, greatly enhancing the capability to construct predictive models that are more widely applicable and better powered to accurately identify key predictive factors (whether dosimetric, image-based, clinical, socioeconomic, or biological). Data pooling has already been carried out effectively in a few normal tissue complication probability studies and should become a common strategy. PMID:20171511

  5. Multivariate Normal Tissue Complication Probability Modeling of Heart Valve Dysfunction in Hodgkin Lymphoma Survivors

    SciTech Connect

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2013-10-01

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radio-induced heart toxicity.

  6. Rectal bleeding, fecal incontinence, and high stool frequency after conformal radiotherapy for prostate cancer: Normal tissue complication probability modeling

    SciTech Connect

    Peeters, Stephanie; Hoogeman, Mischa S.; Heemsbergen, Wilma D.; Hart, Augustinus; Koper, Peter C.M.; Lebesque, Joos V. E-mail: j.lebesque@nki.nl

    2006-09-01

    Purpose: To analyze whether inclusion of predisposing clinical features in the Lyman-Kutcher-Burman (LKB) normal tissue complication probability (NTCP) model improves the estimation of late gastrointestinal toxicity. Methods and Materials: This study includes 468 prostate cancer patients participating in a randomized trial comparing 68 with 78 Gy. We fitted the probability of developing late toxicity within 3 years (rectal bleeding, high stool frequency, and fecal incontinence) with the original, and a modified LKB model, in which a clinical feature (e.g., history of abdominal surgery) was taken into account by fitting subset specific TD50s. The ratio of these TD50s is the dose-modifying factor for that clinical feature. Dose distributions of anorectal (bleeding and frequency) and anal wall (fecal incontinence) were used. Results: The modified LKB model gave significantly better fits than the original LKB model. Patients with a history of abdominal surgery had a lower tolerance to radiation than did patients without previous surgery, with a dose-modifying factor of 1.1 for bleeding and of 2.5 for fecal incontinence. The dose-response curve for bleeding was approximately two times steeper than that for frequency and three times steeper than that for fecal incontinence. Conclusions: Inclusion of predisposing clinical features significantly improved the estimation of the NTCP. For patients with a history of abdominal surgery, more severe dose constraints should therefore be used during treatment plan optimization.
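
    For reference, the Lyman-Kutcher-Burman model fitted here (and in several of the records below) has the standard closed form shown next, with fitted parameters TD50, m, and n and DVH bins (v_i, D_i); in the modified version described above, TD50 is fitted separately per patient subset, so the dose-modifying factor of a clinical feature is the ratio of the subset-specific TD50 values.

        NTCP = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\,dx,
        \qquad t = \frac{\mathrm{gEUD} - TD_{50}}{m\,TD_{50}},
        \qquad \mathrm{gEUD} = \Big( \sum_i v_i D_i^{1/n} \Big)^{n}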

  7. Normal Tissue Complication Probability Modeling of Radiation-Induced Hypothyroidism After Head-and-Neck Radiation Therapy

    SciTech Connect

    Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan

    2013-02-01

    Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue
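
    The conversion to 2 Gy/fraction equivalent doses mentioned above is the usual linear-quadratic rescaling; for a DVH bin receiving total dose D delivered in fractions of size d, with the quoted α/β = 3 Gy,

        \mathrm{EQD}_{2} = D\,\frac{d + \alpha/\beta}{2\ \mathrm{Gy} + \alpha/\beta}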

  8. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    SciTech Connect

    Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.

    2012-03-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, rectal bleeding fits had the highest AUC (0.77), whereas it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions: Comparable

  9. Prediction of radiation-induced liver disease by Lyman normal-tissue complication probability model in three-dimensional conformal radiation therapy for primary liver carcinoma

    SciTech Connect

    Xu ZhiYong; Liang Shixiong; Zhu Ji; Zhu Xiaodong; Zhao Jiandong; Lu Haijie; Yang Yunli; Chen Long; Wang Anyu; Fu Xiaolong; Jiang Guoliang. E-mail: jianggl@21cn.com

    2006-05-01

    Purpose: To describe the probability of RILD by application of the Lyman-Kutcher-Burman normal-tissue complication probability (NTCP) model for primary liver carcinoma (PLC) treated with hypofractionated three-dimensional conformal radiotherapy (3D-CRT). Methods and Materials: A total of 109 PLC patients treated by 3D-CRT were followed for RILD. Of these patients, 93 were in liver cirrhosis of Child-Pugh Grade A, and 16 were in Child-Pugh Grade B. The Michigan NTCP model was used to predict the probability of RILD, and then the modified Lyman NTCP model was generated for Child-Pugh A and Child-Pugh B patients by maximum-likelihood analysis. Results: Of all patients, 17 developed RILD, of whom 8 were of Child-Pugh Grade A and 9 were of Child-Pugh Grade B. The Michigan model underestimated the RILD probability for PLC patients. The modified n, m, and TD50(1) were 1.1, 0.28, and 40.5 Gy and 0.7, 0.43, and 23 Gy for patients with Child-Pugh A and B, respectively, which yielded better estimations of RILD probability. The hepatic tolerable doses (TD5) would be a mean dose to normal liver (MDTNL) of 21 Gy and 6 Gy, respectively, for Child-Pugh A and B patients. Conclusions: The Michigan model was probably not suitable for predicting RILD in PLC patients. A modified Lyman NTCP model for RILD was recommended.

  10. Normal tissue complication probability model parameter estimation for xerostomia in head and neck cancer patients based on scintigraphy and quality of life assessments

    PubMed Central

    2012-01-01

    Background With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman–Kutcher–Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Methods Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson’s chi-squared test, Nagelkerke’s R2, the area under the receiver operating characteristic curve, and the Hosmer–Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson’s chi-squared test was used to test the goodness of fit and association. Results Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in 50% probability of a complication (TD50) and the slope of the dose–response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with a NPV of 100%, using either the QoL or SEF dataset. Conclusions Our study shows the agreement between the NTCP
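
    As a rough worked example (not taken from the paper), inserting the SEF-based fit into the LKB model with n = 1 gives, for a hypothetical mean parotid dose of 40 Gy,

        t = \frac{40 - 43.6}{0.18 \times 43.6} \approx -0.46,
        \qquad \mathrm{NTCP} = \Phi(t) \approx 0.32,

    i.e. roughly a one-in-three predicted risk of grade 3+ xerostomia at that mean dose under this particular fit.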

  11. Predicting Grade 3 Acute Diarrhea During Radiation Therapy for Rectal Cancer Using a Cutoff-Dose Logistic Regression Normal Tissue Complication Probability Model

    SciTech Connect

    Robertson, John M.; Soehn, Matthias; Yan Di

    2010-05-01

    Purpose: Understanding the dose-volume relationship of small bowel irradiation and severe acute diarrhea may help reduce the incidence of this side effect during adjuvant treatment for rectal cancer. Methods and Materials: Consecutive patients treated curatively for rectal cancer were reviewed, and the maximum grade of acute diarrhea was determined. The small bowel was outlined on the treatment planning CT scan, and a dose-volume histogram was calculated for the initial pelvic treatment (45 Gy). Logistic regression models were fitted for varying cutoff-dose levels from 5 to 45 Gy in 5-Gy increments. The model with the highest LogLikelihood was used to develop a cutoff-dose normal tissue complication probability (NTCP) model. Results: There were a total of 152 patients (48% preoperative, 47% postoperative, 5% other), predominantly treated prone (95%) with a three-field technique (94%) and a protracted venous infusion of 5-fluorouracil (78%). Acute Grade 3 diarrhea occurred in 21%. The largest LogLikelihood was found for the cutoff-dose logistic regression model with 15 Gy as the cutoff-dose, although the models for 20 Gy and 25 Gy had similar significance. According to this model, highly significant correlations (p <0.001) between small bowel volumes receiving at least 15 Gy and toxicity exist in the considered patient population. Similar findings applied to both the preoperatively (p = 0.001) and postoperatively irradiated groups (p = 0.001). Conclusion: The incidence of Grade 3 diarrhea was significantly correlated with the volume of small bowel receiving at least 15 Gy using a cutoff-dose NTCP model.
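
    A hedged sketch of the cutoff-dose scan described above, assuming a cumulative small-bowel DVH per patient and a binary grade 3 diarrhea outcome (variable names are illustrative, not the authors'); one logistic regression of toxicity on the volume receiving at least D Gy is fitted for each candidate cutoff dose D, and the cutoff with the largest log-likelihood is kept.

        # Illustrative cutoff-dose NTCP scan (not the authors' implementation).
        # dvh_volumes[i, j] = small-bowel volume (cc) of patient i receiving >= dose_bins[j] Gy
        import numpy as np
        import statsmodels.api as sm

        def best_cutoff_dose(dose_bins, dvh_volumes, toxicity, cutoffs=range(5, 46, 5)):
            """Return the cutoff dose (Gy) and fitted model maximising the log-likelihood."""
            fits = {}
            for d in cutoffs:
                j = np.searchsorted(dose_bins, d)      # first DVH bin at or above d Gy
                v_d = dvh_volumes[:, j]                # volume receiving >= d Gy
                fits[d] = sm.Logit(toxicity, sm.add_constant(v_d)).fit(disp=0)
            best = max(fits, key=lambda d: fits[d].llf)
            return best, fits[best]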

  12. Impact of Chemotherapy on Normal Tissue Complication Probability Models of Acute Hematologic Toxicity in Patients Receiving Pelvic Intensity Modulated Radiation Therapy

    SciTech Connect

    Bazan, Jose G.; Luxton, Gary; Kozak, Margaret M.; Anderson, Eric M.; Hancock, Steven L.; Kapp, Daniel S.; Kidd, Elizabeth A.; Koong, Albert C.; Chang, Daniel T.

    2013-12-01

    Purpose: To determine how chemotherapy agents affect radiation dose parameters that correlate with acute hematologic toxicity (HT) in patients treated with pelvic intensity modulated radiation therapy (P-IMRT) and concurrent chemotherapy. Methods and Materials: We assessed HT in 141 patients who received P-IMRT for anal, gynecologic, rectal, or prostate cancers, 95 of whom received concurrent chemotherapy. Patients were separated into 4 groups: mitomycin (MMC) + 5-fluorouracil (5FU, 37 of 141), platinum ± 5FU (Cis, 32 of 141), 5FU (26 of 141), and P-IMRT alone (46 of 141). The pelvic bone was contoured as a surrogate for pelvic bone marrow (PBM) and divided into subsites: ilium, lower pelvis, and lumbosacral spine (LSS). The volumes of each region receiving 5-40 Gy were calculated. The endpoint for HT was grade ≥3 (HT3+) leukopenia, neutropenia or thrombocytopenia. Normal tissue complication probability was calculated using the Lyman-Kutcher-Burman model. Logistic regression was used to analyze association between HT3+ and dosimetric parameters. Results: Twenty-six patients experienced HT3+: 10 of 37 (27%) MMC, 14 of 32 (44%) Cis, 2 of 26 (8%) 5FU, and 0 of 46 P-IMRT. PBM dosimetric parameters were correlated with HT3+ in the MMC group but not in the Cis group. LSS dosimetric parameters were well correlated with HT3+ in both the MMC and Cis groups. Constrained optimization (0 < n ≤ 1) of the LKB model resulted in n = 1, m = 0.11, TD50 = 31 Gy for LSS in the MMC group and n = 1, m = 0.27, TD50 = 35 Gy for LSS in the Cis group. Conclusions: The incidence of HT3+ depends on type of chemotherapy received. Patients receiving P-IMRT ± 5FU have better bone marrow tolerance than those receiving irradiation concurrent with either Cis or MMC. Treatment with MMC has a lower TD50 and more steeply rising normal tissue complication probability curve compared with treatment with Cis. Dose tolerance of PBM and the LSS subsite may be lower for

  13. Normal Tissue Complication Probability Modeling of Acute Hematologic Toxicity in Patients Treated With Intensity-Modulated Radiation Therapy for Squamous Cell Carcinoma of the Anal Canal

    SciTech Connect

    Bazan, Jose G.; Luxton, Gary; Mok, Edward C.; Koong, Albert C.; Chang, Daniel T.

    2012-11-01

    Purpose: To identify dosimetric parameters that correlate with acute hematologic toxicity (HT) in patients with squamous cell carcinoma of the anal canal treated with definitive chemoradiotherapy (CRT). Methods and Materials: We analyzed 33 patients receiving CRT. Pelvic bone (PBM) was contoured for each patient and divided into subsites: ilium, lower pelvis (LP), and lumbosacral spine (LSS). The volume of each region receiving at least 5, 10, 15, 20, 30, and 40 Gy was calculated. Endpoints included grade ≥3 HT (HT3+) and hematologic event (HE), defined as any grade ≥2 HT with a modification in chemotherapy dose. Normal tissue complication probability (NTCP) was evaluated with the Lyman-Kutcher-Burman (LKB) model. Logistic regression was used to test associations between HT and dosimetric/clinical parameters. Results: Nine patients experienced HT3+ and 15 patients experienced HE. Constrained optimization of the LKB model for HT3+ yielded the parameters m = 0.175, n = 1, and TD50 = 32 Gy. With this model, mean PBM doses of 25 Gy, 27.5 Gy, and 31 Gy result in a 10%, 20%, and 40% risk of HT3+, respectively. Compared with patients with mean PBM dose of <30 Gy, patients with mean PBM dose ≥30 Gy had a 14-fold increase in the odds of developing HT3+ (p = 0.005). Several low-dose radiation parameters (i.e., PBM-V10) were associated with the development of HT3+ and HE. No association was found with the ilium, LP, or clinical factors. Conclusions: LKB modeling confirms the expectation that PBM acts like a parallel organ, implying that the mean dose to the organ is a useful predictor for toxicity. Low-dose radiation to the PBM was also associated with clinically significant HT. Keeping the mean PBM dose <22.5 Gy and <25 Gy is associated with a 5% and 10% risk of HT, respectively.

  14. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and the correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  15. Optimizing the parameters of the Lyman-Kutcher-Burman, Källman, and Logit+EUD models for the rectum - a comparison between normal tissue complication probability and clinical data

    NASA Astrophysics Data System (ADS)

    Trojková, Darina; Judas, Libor; Trojek, Tomáš

    2014-11-01

    Minimizing the late rectal toxicity of prostate cancer patients is a very important and widely discussed topic. Normal tissue complication probability (NTCP) models can be used to evaluate competing treatment plans. In our work, the parameters of the Lyman-Kutcher-Burman (LKB), Källman, and Logit+EUD models are optimized by minimizing the Brier score for a group of 302 prostate cancer patients. The NTCP values are calculated and compared with the values obtained using previously published parameter values. χ² statistics were calculated as a check of the goodness of the optimization.
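
    A minimal sketch of this kind of Brier-score optimisation (a generic illustration, not the authors' code), assuming each patient is summarised by an effective (gEUD-type) rectal dose and a binary late-toxicity outcome, and fitting the LKB parameters TD50 and m for a fixed n with scipy:

        # Illustrative Brier-score fit of LKB parameters (TD50, m); n is held fixed upstream.
        # eff_dose[i] is the effective (gEUD) dose of patient i; outcome[i] is 0/1 toxicity.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def lkb_ntcp(eff_dose, td50, m):
            return norm.cdf((eff_dose - td50) / (m * td50))

        def brier_score(params, eff_dose, outcome):
            td50, m = params
            if td50 <= 0 or m <= 0:
                return np.inf                      # keep the search in the valid region
            return np.mean((lkb_ntcp(eff_dose, td50, m) - outcome) ** 2)

        def fit_lkb(eff_dose, outcome, start=(60.0, 0.2)):
            res = minimize(brier_score, start, args=(eff_dose, outcome),
                           method="Nelder-Mead")
            return res.x                           # (TD50, m) minimising the Brier score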

  16. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy

    PubMed Central

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-01-01

    Background and Purpose Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Material and Methods Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. Results The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFCstandard had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. Conclusions The RFCstandard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. PMID:27240717

  17. DISJUNCTIVE NORMAL SHAPE MODELS

    PubMed Central

    Ramesh, Nisha; Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2016-01-01

    A novel implicit parametric shape model is proposed for segmentation and analysis of medical images. Functions representing the shape of an object can be approximated as a union of N polytopes. Each polytope is obtained by the intersection of M half-spaces. The shape function can be approximated as a disjunction of conjunctions, using the disjunctive normal form. The shape model is initialized using seed points defined by the user. We define a cost function based on the Chan-Vese energy functional. The model is differentiable, hence, gradient based optimization algorithms are used to find the model parameters. PMID:27403233

  18. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
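
    A small Python sketch of the same quantities (rectangular and conditional bivariate normal probabilities), computed with scipy rather than the original routines; the means, standard deviations, and correlation are assumed inputs.

        # Illustrative bivariate-normal probabilities (not the original program).
        import numpy as np
        from scipy.stats import multivariate_normal, norm

        def rectangular_prob(mu, sigma, rho, a, b):
            """P(a1 < X < b1, a2 < Y < b2) by inclusion-exclusion on the joint CDF."""
            cov = [[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                   [rho * sigma[0] * sigma[1], sigma[1] ** 2]]
            F = multivariate_normal(mean=mu, cov=cov).cdf
            return (F([b[0], b[1]]) - F([a[0], b[1]])
                    - F([b[0], a[1]]) + F([a[0], a[1]]))

        def conditional_prob(mu, sigma, rho, x, c):
            """P(Y < c | X = x); the conditional distribution is again normal."""
            m = mu[1] + rho * sigma[1] / sigma[0] * (x - mu[0])
            s = sigma[1] * np.sqrt(1.0 - rho ** 2)
            return norm.cdf(c, loc=m, scale=s)

        print(rectangular_prob([0, 0], [1, 1], 0.5, a=[-1, -1], b=[1, 1]))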

  19. Site occupancy models with heterogeneous detection probabilities

    USGS Publications Warehouse

    Royle, J. Andrew

    2006-01-01

    Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
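
    The integrated likelihood referred to above can be written, for site i with y_i detections in J_i visits, occupancy probability ψ, and mixing density g(p | θ) for the detection probability, as the zero-inflated binomial mixture

        L(\psi, \theta) = \prod_{i=1}^{N} \left[ \psi \int_{0}^{1} \binom{J_i}{y_i} p^{y_i} (1-p)^{J_i - y_i} \, g(p \mid \theta)\, dp \; + \; (1-\psi)\, \mathbf{1}(y_i = 0) \right]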

  20. Computational Modelling and Simulation Fostering New Approaches in Learning Probability

    ERIC Educational Resources Information Center

    Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid

    2006-01-01

    Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…

  1. Approximating Multivariate Normal Orthant Probabilities. ONR Technical Report. [Biometric Lab Report No. 90-1.]

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n × n" correlation matrix of the χ_i and the standardized multivariate normal density…

  2. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  3. A Quantum Probability Model of Causal Reasoning

    PubMed Central

    Trueblood, Jennifer S.; Busemeyer, Jerome R.

    2012-01-01

    People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects thus proving to be a viable new candidate for modeling human judgment. PMID:22593747

  4. Molecular clouds have power-law probability distribution functions (not log-normal)

    NASA Astrophysics Data System (ADS)

    Alves, Joao; Lombardi, Marco; Lada, Charles

    2015-08-01

    We investigate the shape of the probability distribution of column densities (PDF) in molecular clouds. Through the use of low-noise, extinction-calibrated Planck-Herschel emission data for eight molecular clouds, we demonstrate that, contrary to common belief, the PDFs of molecular clouds are not described well by log-normal functions, but are instead power laws with exponents close to two and with breaks between AK ≃ 0.1 and 0.2 mag, so close to the CO self-shielding limit and not far from the transition between molecular and atomic gas. Additionally, we argue that the intrinsic functional form of the PDF cannot be securely determined below AK ≃ 0.1 mag, limiting our ability to investigate more complex models for the shape of the cloud PDF.

  5. Extrapolation of Normal Tissue Complication Probability for Different Fractionations in Liver Irradiation

    SciTech Connect

    Tai An; Erickson, Beth; Li, X. Allen

    2009-05-01

    Purpose: The ability to predict normal tissue complication probability (NTCP) is essential for NTCP-based treatment planning. The purpose of this work is to estimate the Lyman NTCP model parameters for liver irradiation from published clinical data of different fractionation regimens. A new expression of normalized total dose (NTD) is proposed to convert NTCP data between different treatment schemes. Method and Materials: The NTCP data of radiation-induced liver disease (RILD) from external beam radiation therapy for primary liver cancer patients were selected for analysis. The data were collected from 4 institutions for tumor sizes in the range of 8-10 cm. The dose per fraction ranged from 1.5 Gy to 6 Gy. A modified linear-quadratic model with two components corresponding to radiosensitive and radioresistant cells in the normal liver tissue was proposed to understand the new NTD formalism. Results: There are five parameters in the model: TD50, m, n, α/β and f. With two parameters n and α/β fixed to be 1.0 and 2.0 Gy, respectively, the extracted parameters from the fitting are TD50(1) = 40.3 ± 8.4 Gy, m = 0.36 ± 0.09, f = 0.156 ± 0.074 Gy and TD50(1) = 23.9 ± 5.3 Gy, m = 0.41 ± 0.15, f = 0.0 ± 0.04 Gy for patients with liver cirrhosis scores of Child-Pugh A and Child-Pugh B, respectively. The fitting results showed that the liver cirrhosis score significantly affects fractional dose dependence of NTD. Conclusion: The Lyman parameters generated presently and the new form of NTD may be used to predict NTCP for treatment planning of innovative liver irradiation with different fractionations, such as hypofractionated stereotactic body radiation therapy.

  6. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    SciTech Connect

    Caron, D. S.; Browne, E.; Norman, E. B.

    2009-08-21

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.

  7. Social Science and the Bayesian Probability Explanation Model

    NASA Astrophysics Data System (ADS)

    Yin, Jie; Zhao, Lei

    2014-03-01

    C. G. Hempel, one of the logical empiricists, built his probability explanation model on the empiricist view of probability; this model encountered many difficulties in scientific explanation that Hempel found hard to defend. Based on Bayesian probability theory and the subjectivist view of probability, the Bayesian probability model instead provides an approach to explanation grounded in subjective probability. On the one hand, this probability model establishes the epistemological status of the subject in the social sciences; on the other hand, it provides a feasible explanation model for social scientific explanation, which has important methodological significance.

  8. Radiobiological Impact of Reduced Margins and Treatment Technique for Prostate Cancer in Terms of Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP)

    SciTech Connect

    Jensen, Ingelise; Carl, Jesper; Lund, Bente; Larsen, Erik H.; Nielsen, Jane

    2011-07-01

    Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications.
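
    The Poisson/linear-quadratic TCP calculation mentioned above is conventionally written (in a generic form that may differ in detail from the exact parameterisation used in the study) as

        TCP = \exp\!\big[ -N_0 \, e^{-\alpha D - \beta d D} \big]

    where N_0 is the initial clonogen number, D the total dose to the target, d the dose per fraction, and α, β the linear-quadratic parameters; for a non-uniform dose distribution, the product of such terms over DVH bins is used.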

  9. Model estimates hurricane wind speed probabilities

    NASA Astrophysics Data System (ADS)

    Murnane, Richard J.; Barton, Chris; Collins, Eric; Donnelly, Jeffrey; Elsner, James; Emanuel, Kerry; Ginis, Isaac; Howard, Susan; Landsea, Chris; Liu, Kam-biu; Malmquist, David; McKay, Megan; Michaels, Anthony; Nelson, Norm; O'Brien, James; Scott, David; Webb, Thompson, III

    In the United States, intense hurricanes (category 3, 4, and 5 on the Saffir/Simpson scale) with winds greater than 50 m s⁻¹ have caused more damage than any other natural disaster [Pielke and Pielke, 1997]. Accurate estimates of wind speed exceedance probabilities (WSEP) due to intense hurricanes are therefore of great interest to (re)insurers, emergency planners, government officials, and populations in vulnerable coastal areas. The historical record of U.S. hurricane landfall is relatively complete only from about 1900, and most model estimates of WSEP are derived from this record. During the 1899-1998 period, only two category-5 and 16 category-4 hurricanes made landfall in the United States. The historical record therefore provides only a limited sample of the most intense hurricanes.

  10. Datamining approaches for modeling tumor control probability

    PubMed Central

    Naqa, Issam El; Deasy, Joseph O.; Mu, Yi; Huang, Ellen; Hope, Andrew J.; Lindsay, Patricia E.; Apte, Aditya; Alaly, James; Bradley, Jeffrey D.

    2016-01-01

    Background Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Material and methods Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Results Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs = 0.68 on leave-one-out testing compared to logistic regression (rs = 0.4), Poisson-based TCP (rs = 0.33), and cell kill equivalent uniform dose model (rs = 0.17). Conclusions The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications. PMID:20192878

  11. Probability of regenerating a normal limb after bite injury in the Mexican axolotl (Ambystoma mexicanum)

    PubMed Central

    Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L.

    2014-01-01

    Abstract Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls (Ambystoma mexicanum) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary‐housed males and group‐housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury probably explains these limb defects, and not abnormal development, because limbs with normal anatomy regenerated after performing rostral amputations. We infer that only 43% of AGSC larvae will present four anatomically normal looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury. PMID:25745564

  12. Classical probability model for Bell inequality

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2014-04-01

    We show that by taking into account randomness of realization of experimental contexts it is possible to construct common Kolmogorov space for data collected for these contexts, although they can be incompatible. We call such a construction "Kolmogorovization" of contextuality. This construction of common probability space is applied to Bell's inequality. It is well known that its violation is a consequence of collecting statistical data in a few incompatible experiments. In experiments performed in quantum optics contexts are determined by selections of pairs of angles (θi,θ'j) fixing orientations of polarization beam splitters. Opposite to the common opinion, we show that statistical data corresponding to measurements of polarizations of photons in the singlet state, e.g., in the form of correlations, can be described in the classical probabilistic framework. The crucial point is that in constructing the common probability space one has to take into account not only randomness of the source (as Bell did), but also randomness of context-realizations (in particular, realizations of pairs of angles (θi, θ'j)). One may (but need not) say that randomness of "free will" has to be accounted for.

  13. Regularized Finite Mixture Models for Probability Trajectories

    ERIC Educational Resources Information Center

    Shedden, Kerby; Zucker, Robert A.

    2008-01-01

    Finite mixture models are widely used in the analysis of growth trajectory data to discover subgroups of individuals exhibiting similar patterns of behavior over time. In practice, trajectories are usually modeled as polynomials, which may fail to capture important features of the longitudinal pattern. Focusing on dichotomous response measures, we…

  14. Other probable cases in the subquark model

    NASA Astrophysics Data System (ADS)

    Li, Tie-Zhong

    1982-05-01

    Besides flavor, color, subcolor, generation, etc., there might be other, as yet unknown, quantum numbers for subquarks, and the statistics that subquarks obey might not be of the Fermi type. With these factors in consideration, we re-study the Casalbuoni-Gatto model and obtain some different results.

  15. Normal probabilities for Cape Kennedy wind components: Monthly reference periods for all flight azimuths. Altitudes 0 to 70 kilometers

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    This document replaces Cape Kennedy empirical wind component statistics which are presently being used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as an adequate statistical model to represent component winds at Cape Kennedy. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. Results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Cape Kennedy, Florida.

  16. Normal probabilities for Vandenberg AFB wind components - monthly reference periods for all flight azimuths, 0- to 70-km altitudes

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1975-01-01

    Vandenberg Air Force Base (AFB), California, wind component statistics are presented to be used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as a statistical model to represent component winds at Vandenberg AFB. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. The results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Vandenberg AFB.

  17. Probability of Future Observations Exceeding One-Sided, Normal, Upper Tolerance Limits

    SciTech Connect

    Edwards, Timothy S.

    2014-10-29

    Normal tolerance limits are frequently used in dynamic environments specifications of aerospace systems as a method to account for aleatory variability in the environments. Upper tolerance limits, when used in this way, are computed from records of the environment and used to enforce conservatism in the specification by describing upper extreme values the environment may take in the future. Components and systems are designed to withstand these extreme loads to ensure they do not fail under normal use conditions. The degree of conservatism in the upper tolerance limits is controlled by specifying the coverage and confidence level (usually written in “coverage/confidence” form). Moreover, in high-consequence systems it is common to specify tolerance limits at 95% or 99% coverage and confidence at the 50% or 90% level. Despite the ubiquity of upper tolerance limits in the aerospace community, analysts and decision-makers frequently misinterpret their meaning. The misinterpretation extends into the standards that govern much of the acceptance and qualification of commercial and government aerospace systems. As a result, the risk of a future observation of the environment exceeding the upper tolerance limit is sometimes significantly underestimated by decision makers. This note explains the meaning of upper tolerance limits and a related measure, the upper prediction limit. So, the objective of this work is to clarify the probability of exceeding these limits in flight so that decision-makers can better understand the risk associated with exceeding design and test levels during flight and balance the cost of design and development with that of mission failure.
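
    A hedged numerical illustration of the point being made (arbitrary sample size, coverage, and confidence, not values from the report): the one-sided normal upper tolerance factor is commonly computed from a noncentral t quantile, and a short simulation then shows the long-run fraction of future observations exceeding the resulting limit.

        # Illustrative coverage/confidence tolerance limit and the chance that a single
        # future observation exceeds it, assuming a normally distributed environment.
        import numpy as np
        from scipy.stats import norm, nct

        def upper_tolerance_factor(n, coverage=0.95, confidence=0.50):
            """One-sided k such that xbar + k*s bounds `coverage` of the population
            with probability `confidence` (standard noncentral-t construction)."""
            delta = norm.ppf(coverage) * np.sqrt(n)
            return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

        rng = np.random.default_rng(42)
        n, trials = 10, 20000
        k = upper_tolerance_factor(n)
        exceed = 0
        for _ in range(trials):
            sample = rng.normal(size=n)                  # past measurements of the environment
            limit = sample.mean() + k * sample.std(ddof=1)
            exceed += rng.normal() > limit               # one future observation
        print(k, exceed / trials)  # note: the exceedance rate need not equal 1 - coverage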

  19. Probability density function modeling for sub-powered interconnects

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Amaricǎi, Alexandru

    2016-06-01

    This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit for the relationship between switching delay and the probability of correct switching in sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.

  20. Normalization of Gravitational Acceleration Models

    NASA Technical Reports Server (NTRS)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
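
    For context, one common full-normalization convention for the associated Legendre functions used in geopotential models scales each degree-l, order-m term as follows (a standard convention, not a formula specific to the Pines, Lear, or Gottlieb algorithms):

        \bar{P}_{lm}(\sin\phi) = \sqrt{\frac{(2 - \delta_{0m})\,(2l+1)\,(l-m)!}{(l+m)!}} \; P_{lm}(\sin\phi)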

  1. Gendist: An R Package for Generated Probability Distribution Models

    PubMed Central

    Abu Bakar, Shaiful Anuar; Nadarajah, Saralees; ABSL Kamarul Adzhar, Zahrul Azmir; Mohamed, Ibrahim

    2016-01-01

    In this paper, we introduce the R package gendist that computes the probability density function, the cumulative distribution function, the quantile function and generates random values for several generated probability distribution models including the mixture model, the composite model, the folded model, the skewed symmetric model and the arc tan model. These models are extensively used in the literature and the R functions provided here are flexible enough to accommodate various univariate distributions found in other R packages. We also show its applications in graphing, estimation, simulation and risk measurements. PMID:27272043

  2. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  3. Review of Literature for Model Assisted Probability of Detection

    SciTech Connect

    Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.; Anderson, Michael T.

    2014-09-30

This is a draft technical letter report for an NRC client documenting a literature review of model assisted probability of detection (MAPOD) for potential application to nuclear power plant components for improvement of field NDE performance estimation.

  4. The Normalization Model of Attention

    PubMed Central

    Reynolds, John H.; Heeger, David J.

    2009-01-01

    Attention has been found to have a wide variety of effects on the responses of neurons in visual cortex. We describe a model of attention that exhibits each of these different forms of attentional modulation, depending on the stimulus conditions and the spread (or selectivity) of the attention field in the model. The model helps reconcile proposals that have been taken to represent alternative theories of attention. We argue that the variety and complexity of the results reported in the literature emerge from the variety of empirical protocols that were used, such that the results observed in any one experiment depended on the stimulus conditions and the subject’s attentional strategy, a notion that we define precisely in terms of the attention field in the model, but that has not typically been completely under experimental control. PMID:19186161

  5. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. PMID:26010201
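
    A minimal Monte Carlo sketch of the failure probability defined here, under assumed dynamics (a drifted Brownian risk process and a linearly increasing critical level chosen only for illustration); the paper itself derives explicit expressions rather than simulating.

    ```python
    # Estimate P(risk process crosses a time-dependent critical level before time T).
    import numpy as np

    rng = np.random.default_rng(1)
    T, n_steps, n_paths = 10.0, 1000, 20000
    t = np.linspace(0.0, T, n_steps + 1)
    critical = 5.0 + 0.2 * t                      # hypothetical time-dependent risk level

    # hypothetical risk process: drifted Brownian motion starting at 0
    dt = T / n_steps
    increments = 0.3 * dt + 0.8 * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)

    failed = (paths >= critical).any(axis=1)      # crossed the level before T?
    print("estimated finite-time failure probability:", failed.mean())
    ```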

  6. Validation of Normal Tissue Complication Probability Predictions in Individual Patient: Late Rectal Toxicity

    SciTech Connect

    Semenenko, Vladimir A.; Tarima, Sergey S.; Devisetty, Kiran; Pelizzari, Charles A.; Liauw, Stanley L.

    2013-03-15

    Purpose: To perform validation of risk predictions for late rectal toxicity (LRT) in prostate cancer obtained using a new approach to synthesize published normal tissue complication data. Methods and Materials: A published study survey was performed to identify the dose-response relationships for LRT derived from nonoverlapping patient populations. To avoid mixing models based on different symptoms, the emphasis was placed on rectal bleeding. The selected models were used to compute the risk estimates of grade 2+ and grade 3+ LRT for an independent validation cohort composed of 269 prostate cancer patients with known toxicity outcomes. Risk estimates from single studies were combined to produce consolidated risk estimates. An agreement between the actuarial toxicity incidence 3 years after radiation therapy completion and single-study or consolidated risk estimates was evaluated using the concordance correlation coefficient. Goodness of fit for the consolidated risk estimates was assessed using the Hosmer-Lemeshow test. Results: A total of 16 studies of grade 2+ and 5 studies of grade 3+ LRT met the inclusion criteria. The consolidated risk estimates of grade 2+ and 3+ LRT were constructed using 3 studies each. For grade 2+ LRT, the concordance correlation coefficient for the consolidated risk estimates was 0.537 compared with 0.431 for the best-fit single study. For grade 3+ LRT, the concordance correlation coefficient for the consolidated risk estimates was 0.477 compared with 0.448 for the best-fit single study. No evidence was found for a lack of fit for the consolidated risk estimates using the Hosmer-Lemeshow test (P=.531 and P=.397 for grade 2+ and 3+ LRT, respectively). Conclusions: In a large cohort of prostate cancer patients, selected sets of consolidated risk estimates were found to be more accurate predictors of LRT than risk estimates derived from any single study.

  7. UV Multi-scatter Propagation Model of Point Probability Method

    NASA Astrophysics Data System (ADS)

    Lu, Bai; Zhensen, Wu; Haiying, Li

Based on the Monte Carlo multi-scatter propagation model, an improved geometric model is proposed. The model is refined by using the point probability method. The multiple-scatter and single-scatter propagation models are compared in terms of calculation time and relative error. The effects of complex weather, obstacles, and transmitters and receivers at different heights are discussed. It is shown that although the single-scatter propagation model can be evaluated easily by standard numerical integration, it cannot describe the general non-line-of-sight propagation problem, while the improved point probability multi-scatter Monte Carlo model can be applied to more general cases.

  8. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. PMID:25363706

  9. A Skew-Normal Mixture Regression Model

    ERIC Educational Resources Information Center

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  10. Linearity of Quantum Probability Measure and Hardy's Model

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo; Oh, C. H.; Zhang, Chengjie

    2014-01-01

We re-examine the d = 4 hidden-variables model for a system of two spin-1/2 particles in view of the concrete model of Hardy, who analyzed the criterion of entanglement without referring to inequality. The basis of our analysis is the linearity of the probability measure related to the Born probability interpretation, which excludes noncontextual hidden-variables models in d ≥ 3. To be specific, we note the inconsistency of the noncontextual hidden-variables model in d = 4 with the linearity of the quantum mechanical probability measure in the sense ⟨ψ|a·σ ⊗ b·σ|ψ⟩ + ⟨ψ|a·σ ⊗ b′·σ|ψ⟩ = ⟨ψ|a·σ ⊗ (b + b′)·σ|ψ⟩ for noncollinear b and b′. It is then shown that Hardy's model in d = 4 does not lead to a unique mathematical expression in the demonstration of the discrepancy of local realism (the hidden-variables model) with entanglement and thus his proof is incomplete. We identify the origin of this nonuniqueness with the nonuniqueness of translating quantum mechanical expressions into expressions in the hidden-variables model, which results from the failure of the above linearity of the probability measure. In contrast, if the linearity of the probability measure is strictly imposed, which is tantamount to asking that the noncontextual hidden-variables model in d = 4 gives the Clauser-Horne-Shimony-Holt (CHSH) inequality |⟨B⟩| ≤ 2 uniquely, it is shown that the hidden-variables model can describe only separable quantum mechanical states; this conclusion is in perfect agreement with the so-called Gisin's theorem, which states that |⟨B⟩| ≤ 2 implies separable states.

  11. Gap probability - Measurements and models of a pecan orchard

    NASA Technical Reports Server (NTRS)

    Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI

    1992-01-01

Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs of 50° by 135° view angle made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown level parameters include the shape of the crown envelope and spacing of crowns; leaf level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.

  12. Simulation modeling of the probability of magmatic disruption of the potential Yucca Mountain Site

    SciTech Connect

    Crowe, B.M.; Perry, F.V.; Valentine, G.A.; Wallmann, P.C.; Kossik, R.

    1993-11-01

The first phase of risk simulation modeling was completed for the probability of magmatic disruption of a potential repository at Yucca Mountain. E1, the recurrence rate of volcanic events, is modeled using bounds from active basaltic volcanic fields and midpoint estimates of E1. The cumulative probability curves for E1 are generated by simulation modeling using a form of a triangular distribution. The 50% estimates are about 5 to 8 × 10⁻⁶ events yr⁻¹. The simulation modeling shows that the cumulative probability distribution for E1 is more sensitive to the probability bounds than to the midpoint estimates. E2 (the disruption probability) is modeled through risk simulation using a normal distribution and midpoint estimates from multiple alternative stochastic and structural models. The 50% estimate of E2 is 4.3 × 10⁻³. The probability of magmatic disruption of the potential Yucca Mountain site is 2.5 × 10⁻⁸ yr⁻¹. This median estimate decreases to 9.6 × 10⁻⁹ yr⁻¹ if E1 is modified for the structural models used to define E2. The Repository Integration Program was tested to compare releases of a simulated repository (without volcanic events) to releases from time histories which may include volcanic disruptive events. Results show that the performance modeling can be used for sensitivity studies of volcanic effects.

  13. Model-independent trend of α-preformation probability

    NASA Astrophysics Data System (ADS)

    Qian, YiBin; Ren, ZhongZhou

    2013-08-01

The α-preformation probability is directly deduced from experimental α decay energies and half-lives in an analytical way without any modified parameters. Several other model-deduced results are used for comparison with those of the present study. The key role played by shell effects in the α-preformation process is indicated in all these cases. In detail, the α-preformation factors of different theoretical extractions are found to have similar behavior for a given isotopic chain, implying a model-independent varying trend of the preformation probability of the α particle. In addition, the formation probability of the heavier particle in cluster radioactivity is also obtained, and this confirms the relationship between the cluster preformation factor and the product of the cluster and daughter proton numbers.

  14. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
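
    A schematic reading of the feature construction (not the authors' exact pipeline): build a horizontal difference array, threshold it, and take the elements of the resulting Markov transition probability matrix as SVM features. The input array here is random stand-in data, not JPEG coefficients.

    ```python
    # Transition probability matrix of a thresholded horizontal difference array.
    import numpy as np

    def transition_matrix(arr2d, T=3):
        diff = arr2d[:, :-1] - arr2d[:, 1:]            # horizontal difference array
        diff = np.clip(diff, -T, T)                    # thresholding to 2T+1 states
        src, dst = diff[:, :-1].ravel() + T, diff[:, 1:].ravel() + T
        counts = np.zeros((2 * T + 1, 2 * T + 1))
        np.add.at(counts, (src, dst), 1)               # count state-to-state transitions
        row_sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    rng = np.random.default_rng(0)
    fake_coeffs = rng.integers(-20, 21, size=(64, 64))   # stand-in for a JPEG 2-D array
    features = transition_matrix(fake_coeffs).ravel()     # (2T+1)^2 features for the SVM
    print(features.shape)
    ```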

  15. Investigation of an empirical probability measure based test for multivariate normality

    SciTech Connect

    Booker, J.M.; Johnson, M.E.; Beckman, R.J.

    1984-01-01

Foutz (1980) derived a goodness of fit test for a hypothesis specifying a continuous, p-variate distribution. The test statistic is both distribution-free and independent of p. In adapting the Foutz test for multivariate normality, we consider using χ² and rescaled beta variates in constructing statistically equivalent blocks. The Foutz test is compared to other multivariate normality tests developed by Hawkins (1981) and Malkovich and Afifi (1973). The set of alternative distributions tested includes Pearson type II and type VII, Johnson translations, Plackett, and distributions arising from Khintchine's theorem. Univariate alternatives from the general class developed by Johnson et al. (1980) were also used. An empirical study confirms that the test statistic is independent of p even when parameters are estimated. In general, the Foutz test is less conservative under the null hypothesis but has poorer power under most alternatives than the other tests.

  16. Modeling highway travel time distribution with conditional probability models

    SciTech Connect

    Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling; Han, Lee

    2014-01-01

Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provides a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstates network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. The major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distribution as new data or information is added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP 21).
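
    To make the convolution idea concrete, a toy sketch under an independence assumption (the study itself uses link-to-link conditional probabilities precisely because successive links are correlated); the probability mass functions below are made up for illustration.

    ```python
    # Route travel time distribution as the convolution of two link distributions.
    import numpy as np

    dt = 1.0                                           # bin width in minutes
    p_link1 = np.array([0.0, 0.1, 0.3, 0.4, 0.2])      # hypothetical travel time pmf, link 1
    p_link2 = np.array([0.0, 0.2, 0.5, 0.2, 0.1])      # hypothetical travel time pmf, link 2

    p_route = np.convolve(p_link1, p_link2)            # pmf of the sum of the two travel times
    times = np.arange(p_route.size) * dt
    print(dict(zip(times, p_route.round(3))))
    ```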

  17. Probability theory for 3-layer remote sensing radiative transfer model: univariate case.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2012-04-23

    A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. PMID:22535093
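
    For orientation only, SciPy's Johnson SU implementation can be used to see how the family's two shape parameters span a wide range of skewness and kurtosis, in the spirit of the moment-based fit described above; the parameter values below are arbitrary and unrelated to the radiative transfer model.

    ```python
    # Skewness and kurtosis reachable by the Johnson SU family.
    from scipy.stats import johnsonsu, norm

    for a, b in [(0.0, 1.0), (-2.0, 1.5), (3.0, 2.0)]:
        mean, var, skew, kurt = johnsonsu(a, b).stats(moments="mvsk")
        print(f"a={a:+.1f}, b={b:.1f}: mean={float(mean):+.3f}, var={float(var):.3f}, "
              f"skew={float(skew):+.3f}, excess kurtosis={float(kurt):.3f}")

    # For large b the SU shape approaches a normal distribution (skew and excess kurtosis -> 0)
    print(johnsonsu(0.0, 50.0).stats(moments="sk"), norm().stats(moments="sk"))
    ```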

  18. Distributed estimation and joint probabilities estimation by entropy model

    NASA Astrophysics Data System (ADS)

    Fassinut-Mombot, B.; Zribi, M.; Choquel, J. B.

    2001-05-01

This paper proposes the use of the Entropy Model for distributed estimation systems. The Entropy Model is an entropic technique based on the minimization of conditional entropy and developed for the Multi-Source/Sensor Information Fusion (MSIF) problem. We address the problem of distributed estimation from independent observations involving multiple sources, i.e., the problem of estimating or selecting one of several identity declarations, or hypotheses, concerning an observed object. Two problems are considered in the Entropy Model. In order to fuse observations using the Entropy Model, it is necessary to know or estimate the conditional probabilities and, equivalently, the joint probabilities. A common practice for estimating probability distributions from data when nothing is known (no a priori knowledge) is to prefer distributions that are as uniform as possible, that is, have maximal entropy. Next, the problem of combining (or "fusing") observations relating to identity hypotheses and selecting the most appropriate hypothesis about the object's identity is addressed. Much future work remains, but the results indicate that the Entropy Model is a promising technique for distributed estimation.

  19. Fixation probability in a two-locus intersexual selection model.

    PubMed

    Durand, Guillermo; Lessard, Sabin

    2016-06-01

    We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. PMID:27059474

  20. A propagation model of computer virus with nonlinear vaccination probability

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi

    2014-01-01

    This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
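
    The sketch below is only a generic compartment model with a saturating vaccination probability, written to illustrate the kind of dynamics discussed; the functional forms and rates are assumptions, not the model in the paper.

    ```python
    # Toy susceptible-infected-vaccinated dynamics with a nonlinear vaccination probability.
    import numpy as np
    from scipy.integrate import solve_ivp

    beta, gamma, delta = 0.4, 0.1, 0.05      # infection, cure, immunity-loss rates (assumed)

    def vacc_prob(I, p_max=0.3, k=5.0):
        # vaccination probability saturates nonlinearly with the infected fraction (assumed form)
        return p_max * I / (I + 1.0 / k)

    def rhs(t, y):
        S, I, V = y
        return [-beta * S * I + gamma * I + delta * V - vacc_prob(I) * S,
                beta * S * I - gamma * I,
                vacc_prob(I) * S - delta * V]

    sol = solve_ivp(rhs, (0, 200), [0.99, 0.01, 0.0])
    print("final fractions (S, I, V):", np.round(sol.y[:, -1], 3))
    ```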

  1. Distinction between early normal intrauterine pregnancies and pathological pregnancies by means of a logistic model.

    PubMed

    Thorburn, J; Bryman, I; Hahlin, M

    1992-01-01

    The probability of an unclear very early pregnancy being a normal intrauterine pregnancy was estimated using a logistic model. Five diagnostic measures of prognostic value were identified in the model: (i) daily change in human chorionic gonadotrophin (HCG), (ii) results of transvaginal ultrasound, (iii) vaginal bleeding, (iv) serum progesterone level and (v) risk score for ectopic pregnancy. With the use of this model, the probability of a normal intrauterine pregnancy has been estimated as 96.7%. PMID:1551947
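
    The published coefficients are not reproduced here; the sketch only shows, with hypothetical numbers, how a five-predictor logistic model converts the diagnostic measures into a probability of a normal intrauterine pregnancy.

    ```python
    # Logistic model: P = 1 / (1 + exp(-(beta0 + beta . x))).
    import numpy as np

    def predicted_probability(x, beta0, beta):
        return 1.0 / (1.0 + np.exp(-(beta0 + np.dot(beta, x))))

    # hypothetical coefficients and predictor values (daily HCG change, ultrasound finding,
    # vaginal bleeding, serum progesterone, ectopic risk score), for illustration only
    beta0, beta = -1.0, np.array([2.0, 1.5, -1.2, 0.03, -0.8])
    x = np.array([0.6, 1.0, 0.0, 45.0, 0.0])
    print(f"P(normal intrauterine pregnancy) = {predicted_probability(x, beta0, beta):.2f}")
    ```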

  2. A model to assess dust explosion occurrence probability.

    PubMed

    Hassan, Junaid; Khan, Faisal; Amyotte, Paul; Ferdous, Refaul

    2014-03-15

Dust handling poses a potential explosion hazard in many industrial facilities. The consequences of a dust explosion are often severe and similar to a gas explosion; however, its occurrence is conditional on the presence of five elements: combustible dust, ignition source, oxidant, mixing and confinement. Dust explosion researchers have conducted experiments to study the characteristics of these elements and generate data on explosibility. These experiments are often costly, but the generated data have significant scope for estimating the probability of a dust explosion occurrence. This paper attempts to use existing information (experimental data) to develop a predictive model to assess the probability of a dust explosion occurrence in a given environment. The proposed model considers six key parameters of a dust explosion: dust particle diameter (PD), minimum ignition energy (MIE), minimum explosible concentration (MEC), minimum ignition temperature (MIT), limiting oxygen concentration (LOC) and explosion pressure (Pmax). A conditional probabilistic approach has been developed and embedded in the proposed model to generate a nomograph for assessing dust explosion occurrence. The generated nomograph provides a quick assessment technique to map the occurrence probability of a dust explosion for a given environment defined with the six parameters.

  3. Opinion dynamics model with weighted influence: Exit probability and dynamics

    NASA Astrophysics Data System (ADS)

    Biswas, Soham; Sinha, Suman; Sen, Parongama

    2013-08-01

    We introduce a stochastic model of binary opinion dynamics in which the opinions are determined by the size of the neighboring domains. The exit probability here shows a step function behavior, indicating the existence of a separatrix distinguishing two different regions of basin of attraction. This behavior, in one dimension, is in contrast to other well known opinion dynamics models where no such behavior has been observed so far. The coarsening study of the model also yields novel exponent values. A lower value of persistence exponent is obtained in the present model, which involves stochastic dynamics, when compared to that in a similar type of model with deterministic dynamics. This apparently counterintuitive result is justified using further analysis. Based on these results, it is concluded that the proposed model belongs to a unique dynamical class.

  4. Quantum Probability -- A New Direction for Modeling in Cognitive Science

    NASA Astrophysics Data System (ADS)

    Roy, Sisir

    2014-07-01

Human cognition and its appropriate modeling remain a puzzling issue in research. Cognition depends on how the brain behaves at that particular instant and identifies and responds to a signal among the myriad noises that are present in the surroundings (called external noise) as well as in the neurons themselves (called internal noise). Thus it is not surprising to assume that this functionality involves various uncertainties, possibly a mixture of aleatory and epistemic uncertainties. It is also possible that a complicated pathway consisting of both types of uncertainties in continuum plays a major role in human cognition. For more than 200 years, mathematicians and philosophers have been using probability theory to describe human cognition. Recently, in several experiments with human subjects, violations of traditional probability theory have been clearly revealed in plenty of cases. A literature survey clearly suggests that classical probability theory fails to model human cognition beyond a certain limit. While the Bayesian approach may seem to be a promising candidate for this problem, the complete success story of Bayesian methodology is yet to be written. The major problem seems to be the presence of epistemic uncertainty and its effect on cognition at any given time. Moreover, the stochasticity in the model arises from the unknown path or trajectory (the definite state of mind at each time point) a person is following. To this end, a generalized version of probability theory borrowing ideas from quantum mechanics may be a plausible approach. A superposition state in quantum theory permits a person to be in an indefinite state at each point of time. Such an indefinite state allows all the states to have the potential to be expressed at each moment. Thus a superposition state appears to be better able to represent the uncertainty, ambiguity or conflict experienced by a person at any moment, demonstrating that mental states follow quantum mechanics during perception and

  5. A Normalization Model of Multisensory Integration

    PubMed Central

    Ohshiro, Tomokazu; Angelaki, Dora E.; DeAngelis, Gregory C.

    2011-01-01

    Responses of neurons that integrate multiple sensory inputs are traditionally characterized in terms of a set of empirical principles. However, a simple computational framework that accounts for these empirical features of multisensory integration has not been established. We propose that divisive normalization, acting at the stage of multisensory integration, can account for many of the empirical principles of multisensory integration exhibited by single neurons, such as the principle of inverse effectiveness and the spatial principle. This model, which employs a simple functional operation (normalization) for which there is considerable experimental support, also accounts for the recent observation that the mathematical rule by which multisensory neurons combine their inputs changes with cue reliability. The normalization model, which makes a strong testable prediction regarding cross-modal suppression, may therefore provide a simple unifying computational account of the key features of multisensory integration by neurons. PMID:21552274
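
    A much-simplified sketch of divisive normalization (an interpretation for illustration, not the authors' equations): the combined drive is divided by pooled activity plus a semi-saturation constant, which qualitatively yields inverse effectiveness (relatively larger multisensory enhancement for weaker inputs). All parameter values are assumptions.

    ```python
    # Schematic divisive normalization for a two-cue (e.g. visual/vestibular) neuron.
    import numpy as np

    def normalized_response(e_visual, e_vestibular, d_visual=1.0, d_vestibular=1.0,
                            n=2.0, alpha=1.0, pooled=None):
        drive = (d_visual * e_visual + d_vestibular * e_vestibular) ** n
        pooled = drive if pooled is None else pooled      # stand-in for population pooling
        return drive / (alpha ** n + np.mean(pooled))

    for strength in (0.2, 1.0, 5.0):                      # weak, medium, strong inputs
        uni = normalized_response(strength, 0.0)          # unisensory response
        multi = normalized_response(strength, strength)   # bimodal response
        print(f"input={strength:4.1f}  combined / (sum of unisensory) = {multi / (2 * uni):.2f}")
    ```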

  6. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

Sagar Island, sitting on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-square test (χ²), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of getting an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
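
    A hedged sketch of this fit-and-rank workflow with synthetic data (not the Sagar Island series), using three of the candidate families and only the K-S statistic; distribution choices, parameters and return levels here are illustrative.

    ```python
    # Fit candidate distributions to annual maximum daily rainfall, rank by K-S statistic,
    # and read return levels off the best fit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    mdr = rng.gumbel(loc=80.0, scale=25.0, size=29)        # hypothetical 29-year annual MDR (mm)

    candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "weibull": stats.weibull_min}
    results = {}
    for name, dist in candidates.items():
        params = dist.fit(mdr)
        ks_stat, _ = stats.kstest(mdr, dist.cdf, args=params)
        results[name] = (ks_stat, params)

    best = min(results, key=lambda k: results[k][0])
    ks, params = results[best]
    print("best fit:", best, "KS statistic:", round(ks, 3))
    for T in (2, 5, 10, 20, 25):                            # return periods in years
        print(f"{T:>2}-yr return level: {candidates[best].ppf(1 - 1/T, *params):.0f} mm")
    ```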

  7. Improving Conceptual Models Using AEM Data and Probability Distributions

    NASA Astrophysics Data System (ADS)

    Davis, A. C.; Munday, T. J.; Christensen, N. B.

    2012-12-01

With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with measured data within a given error bound. We have no idea whether the final model realised by the inversion satisfies the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, thus the minimum model has all layer parameters distributed about their mean parameter value with well-defined variance. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" This can be calculated through use of the erf or erfc functions. The covariance values of the inversion become marginalised in the integration: only the

  8. Defining prior probabilities for hydrologic model structures in UK catchments

    NASA Astrophysics Data System (ADS)

    Clements, Michiel; Pianosi, Francesca; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Booij, Martijn

    2014-05-01

The selection of a model structure is an essential part of the hydrological modelling process. Recently, flexible modelling frameworks have been proposed in which hybrid model structures can be obtained by mixing together components from a suite of existing hydrological models. When sufficient and reliable data are available, this framework can be successfully utilised to identify the most appropriate structure, and associated optimal parameters, for a given catchment by maximizing the different models' ability to reproduce the desired range of flow behaviour. In this study, we use a flexible modelling framework to address a rather different question: can the most appropriate model structure be inferred a priori (i.e., without using flow observations) from catchment characteristics like topography, geology, land use, and climate? Furthermore, and more generally, can we define a priori probabilities of different model structures as a function of catchment characteristics? To address these questions, we propose a two-step methodology and demonstrate it by application to a national database of meteo-hydrological data and catchment characteristics for 89 catchments across the UK. In the first step, each catchment is associated with its most appropriate model structure. We consider six possible structures obtained by combining two soil moisture accounting components widely used in the UK (Penman and PDM) and three different flow routing modules (linear, parallel, leaky). We measure the suitability of a model structure by the probability of finding behavioural parameterizations for that model structure when applied to the catchment under study. In the second step, we use regression analysis to establish a relation between selected model structures and the catchment characteristics. Specifically, we apply Classification And Regression Trees (CART) and show that three catchment characteristics, the Base Flow Index, the Runoff Coefficient and the mean Drainage Path Slope, can be used
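
    For the second step, a sketch of what the CART analysis might look like on a synthetic catchment table; the feature names follow the three characteristics mentioned above, but the data and the six structure labels are invented for illustration only.

    ```python
    # Classification tree relating catchment characteristics to a selected model structure.
    import numpy as np
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(3)
    n = 89
    catchments = pd.DataFrame({
        "base_flow_index": rng.uniform(0.1, 0.9, n),
        "runoff_coefficient": rng.uniform(0.1, 0.8, n),
        "mean_drainage_path_slope": rng.uniform(0.5, 25.0, n),
    })
    # hypothetical labels: one of six structures (2 soil moisture x 3 routing components)
    structures = rng.choice(["penman-linear", "penman-parallel", "penman-leaky",
                             "pdm-linear", "pdm-parallel", "pdm-leaky"], size=n)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(catchments, structures)
    print(export_text(tree, feature_names=list(catchments.columns)))
    ```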

  9. Predictions of Geospace Drivers By the Probability Distribution Function Model

    NASA Astrophysics Data System (ADS)

    Bussy-Virat, C.; Ridley, A. J.

    2014-12-01

Geospace drivers like the solar wind speed, interplanetary magnetic field (IMF), and solar irradiance have a strong influence on the density of the thermosphere and the near-Earth space environment. This has important consequences for the drag on satellites that are in low orbit and therefore on their position. One of the basic problems with space weather prediction is that these drivers can only be measured about one hour before they affect the environment. In order to allow for adequate planning for some members of the commercial, military, or civilian communities, reliable long-term space weather forecasts are needed. This study presents a model for predicting geospace drivers up to five days in advance. The model uses the same general technique to predict the solar wind speed, the three components of the IMF, and the solar irradiance F10.7. For instance, it uses probability distribution functions (PDFs) to relate the current solar wind speed and slope to the future solar wind speed, as well as the solar wind speed to the solar wind speed one solar rotation in the future. The PDF Model has been compared to other models for predictions of the speed. It has been found to be better than using the current solar wind speed (i.e., persistence), and better than the Wang-Sheeley-Arge Model for prediction horizons of 24 hours. Once the drivers are predicted and the uncertainties on the drivers are specified, the density in the thermosphere can be derived using various models of the thermosphere, such as the Global Ionosphere Thermosphere Model. In addition, uncertainties on the densities can be estimated, based on ensembles of simulations. From the density and uncertainty predictions, satellite positions, as well as the uncertainty in those positions, can be estimated. These can assist operators in determining the probability of collisions between objects in low Earth orbit.

  10. Estimating transition probabilities among everglades wetland communities using multistate models

    USGS Publications Warehouse

    Hotaling, A.S.; Martin, J.; Kitchens, W.M.

    2009-01-01

In this study we were able to provide the first estimates of transition probabilities of wet prairie and slough vegetative communities in Water Conservation Area 3A (WCA3A) of the Florida Everglades and to identify the hydrologic variables that determine these transitions. These estimates can be used in management models aimed at restoring proportions of wet prairie and slough habitats to historical levels in the Everglades. To determine what was driving the transitions between wet prairie and slough communities we evaluated three hypotheses: seasonality, impoundment, and wet and dry year cycles using likelihood-based multistate models to determine the main driver of wet prairie conversion in WCA3A. The most parsimonious model included the effect of wet and dry year cycles on vegetative community conversions. Several ecologists have noted wet prairie conversion in southern WCA3A but these are the first estimates of transition probabilities among these community types. In addition to being useful for management of the Everglades, we believe that our framework can be used to address management questions in other ecosystems. © 2009 The Society of Wetland Scientists.

  11. Three-dimensional heart dose reconstruction to estimate normal tissue complication probability after breast irradiation using portal dosimetry

    SciTech Connect

    Louwe, R. J. W.; Wendling, M.; Herk, M. B. van; Mijnheer, B. J.

    2007-04-15

Irradiation of the heart is one of the major concerns during radiotherapy of breast cancer. Three-dimensional (3D) treatment planning would therefore be useful but cannot always be performed for left-sided breast treatments, because CT data may not be available. However, even if 3D dose calculations are available and an estimate of the normal tissue damage can be made, uncertainties in patient positioning may significantly influence the heart dose during treatment. Therefore, 3D reconstruction of the actual heart dose during breast cancer treatment using electronic portal imaging device (EPID) dosimetry has been investigated. A previously described method to reconstruct the dose in the patient from treatment portal images at the radiological midsurface was used in combination with a simple geometrical model of the irradiated heart volume to enable calculation of dose-volume histograms (DVHs), to independently verify this aspect of the treatment without using 3D data from a planning CT scan. To investigate the accuracy of our method, the DVHs obtained with full 3D treatment planning system (TPS) calculations and those obtained after resampling the TPS dose in the radiological midsurface were compared for fifteen breast cancer patients for whom CT data were available. In addition, EPID dosimetry as well as 3D dose calculations using our TPS, film dosimetry, and ionization chamber measurements were performed in an anthropomorphic phantom. It was found that the dose reconstructed using EPID dosimetry and the dose calculated with the TPS agreed within 1.5% in the lung/heart region. The dose-volume histograms obtained with EPID dosimetry were used to estimate the normal tissue complication probability (NTCP) for late excess cardiac mortality. Although the accuracy of these NTCP calculations might be limited due to the uncertainty in the NTCP model, in combination with our portal dosimetry approach it allows incorporation of the actual heart dose. For the anthropomorphic

  12. Recent Advances in Model-Assisted Probability of Detection

    NASA Technical Reports Server (NTRS)

    Thompson, R. Bruce; Brasche, Lisa J.; Lindgren, Eric; Swindell, Paul; Winfree, William P.

    2009-01-01

    The increased role played by probability of detection (POD) in structural integrity programs, combined with the significant time and cost associated with the purely empirical determination of POD, provides motivation for alternate means to estimate this important metric of NDE techniques. One approach to make the process of POD estimation more efficient is to complement limited empirical experiments with information from physics-based models of the inspection process or controlled laboratory experiments. The Model-Assisted Probability of Detection (MAPOD) Working Group was formed by the Air Force Research Laboratory, the FAA Technical Center, and NASA to explore these possibilities. Since the 2004 inception of the MAPOD Working Group, 11 meetings have been held in conjunction with major NDE conferences. This paper will review the accomplishments of this group, which includes over 90 members from around the world. Included will be a discussion of strategies developed to combine physics-based and empirical understanding, draft protocols that have been developed to guide application of the strategies, and demonstrations that have been or are being carried out in a number of countries. The talk will conclude with a discussion of future directions, which will include documentation of benefits via case studies, development of formal protocols for engineering practice, as well as a number of specific technical issues.

  13. Normal brain ageing: models and mechanisms

    PubMed Central

    Toescu, Emil C

    2005-01-01

    Normal ageing is associated with a degree of decline in a number of cognitive functions. Apart from the issues raised by the current attempts to expand the lifespan, understanding the mechanisms and the detailed metabolic interactions involved in the process of normal neuronal ageing continues to be a challenge. One model, supported by a significant amount of experimental evidence, views the cellular ageing as a metabolic state characterized by an altered function of the metabolic triad: mitochondria–reactive oxygen species (ROS)–intracellular Ca2+. The perturbation in the relationship between the members of this metabolic triad generate a state of decreased homeostatic reserve, in which the aged neurons could maintain adequate function during normal activity, as demonstrated by the fact that normal ageing is not associated with widespread neuronal loss, but become increasingly vulnerable to the effects of excessive metabolic loads, usually associated with trauma, ischaemia or neurodegenerative processes. This review will concentrate on some of the evidence showing altered mitochondrial function with ageing and also discuss some of the functional consequences that would result from such events, such as alterations in mitochondrial Ca2+ homeostasis, ATP production and generation of ROS. PMID:16321805

  14. Probability of detection models for eddy current NDE methods

    SciTech Connect

    Rajesh, S.N.

    1993-04-30

The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low and complex defect shapes can be handled very easily. The results are also operator independent.

  15. Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, F. E.; Malamud, B. D.

    2012-04-01

During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m². This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to have an average density of 1 landslide km⁻², i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen, in a 'T' shape configuration, with one road 1 x 4000 cells (5 m x 20 km) in a 'T' formation with another road 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km² region, the number of road blocks per iteration, NBL, ranges from 0 to 7. The average blockage area for the 500 iterations (ĀBL) is about 3000 m
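
    A toy version of a single iteration (greatly simplified: square landslides and one straight road), keeping the three-parameter inverse gamma for landslide area mentioned above; the shape and scale values are placeholders, not the fitted ones.

    ```python
    # One Monte Carlo iteration: drop landslides on a grid and count road blockages.
    import numpy as np
    from scipy.stats import invgamma

    rng = np.random.default_rng(42)
    cell = 5.0                                        # m per cell
    grid = 4000                                       # 4000 x 4000 cells (20 km x 20 km)
    road_col = grid // 2                              # a single 5 m wide north-south road

    # landslide areas (m^2) from an inverse gamma; shape/scale here are illustrative only
    areas = invgamma.rvs(a=1.4, loc=0.0, scale=1000.0, size=400, random_state=rng)
    sides = np.sqrt(areas) / cell                     # side of an equivalent square, in cells

    x = rng.uniform(0, grid, size=400)                # random landslide centre columns
    blocked = np.abs(x - road_col) <= sides / 2.0     # does the square overlap the road column?
    print("number of road blockages in this iteration:", int(blocked.sum()))
    ```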

  16. A non linear multiple regression approach for inferring the probability distribution of hydrological model errors

    NASA Astrophysics Data System (ADS)

    Montanari, A.

    2006-12-01

This contribution introduces a statistically based approach for uncertainty assessment in hydrological modeling, in an optimality context. Indeed, in several real world applications, there is the need for the user to select a model that is deemed to be the best possible choice according to a given goodness of fit criterion. In this case, it is extremely important to assess the model uncertainty, intended as the range around the model output within which the measured hydrological variable is expected to fall with a given probability. This indication allows the user to quantify the risk associated with a decision that is based on the model response. The technique proposed here is carried out by inferring the probability distribution of the hydrological model error through a non linear multiple regression approach, depending on an arbitrary number of selected conditioning variables. These may include the current and previous model output as well as internal state variables of the model. The purpose is to indirectly relate the model error to the sources of uncertainty, through the conditioning variables. The method can be applied to any model of arbitrary complexity, including distributed approaches. The probability distribution of the model error is derived in the Gaussian space, through a meta-Gaussian approach. The normal quantile transform is applied in order to make the marginal probability distribution of the model error and the conditioning variables Gaussian. Then the above marginal probability distributions are related through the multivariate Gaussian distribution, whose parameters are estimated via multiple regression. Application of the inverse of the normal quantile transform allows the user to derive the confidence limits of the model output for an assigned significance level. The proposed technique is valid under statistical assumptions, that are essentially those conditioning the validity of the multiple regression in the Gaussian space. Statistical tests
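
    A compact sketch of the normal quantile transform step described above: empirical ranks are mapped to standard normal scores, after which the meta-Gaussian machinery (multivariate Gaussian plus multiple regression) can operate in the Gaussian space; the error sample below is synthetic.

    ```python
    # Normal quantile transform (NQT) of a skewed model-error sample.
    import numpy as np
    from scipy.stats import norm, rankdata, skew

    def nqt(x):
        """Map a sample to standard normal scores via the Weibull plotting position."""
        ranks = rankdata(x)
        return norm.ppf(ranks / (len(x) + 1.0))

    rng = np.random.default_rng(0)
    model_error = rng.gamma(shape=2.0, scale=1.5, size=500) - 3.0   # hypothetical skewed errors
    z = nqt(model_error)
    print("skewness before:", round(float(skew(model_error)), 2),
          "after:", round(float(skew(z)), 2))
    ```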

  17. Modeling pore corrosion in normally open gold- plated copper connectors.

    SciTech Connect

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert

    2008-09-01

The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H₂S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  18. Biomechanical modelling of normal pressure hydrocephalus.

    PubMed

    Dutta-Roy, Tonmoy; Wittek, Adam; Miller, Karol

    2008-07-19

This study investigates the mechanics of normal pressure hydrocephalus (NPH) growth using a computational approach. We created a generic 3-D brain mesh of a healthy human brain and modelled the brain parenchyma as single phase and biphasic continuum. In our model, hyperelastic constitutive law and finite deformation theory described deformations within the brain parenchyma. We used a value of 155.77 Pa for the shear modulus (μ) of the brain parenchyma. Additionally, in our model, contact boundary definitions constrained the brain outer surface inside the skull. We used transmantle pressure difference to load the model. Fully nonlinear, implicit finite element procedures in the time domain were used to obtain the deformations of the ventricles and the brain. To the best of our knowledge, this was the first 3-D, fully nonlinear model investigating NPH growth mechanics. Clinicians generally accept that at most 1 mm of Hg transmantle pressure difference (133.416 Pa) is associated with the condition of NPH. Our computations showed that transmantle pressure difference of 1 mm of Hg (133.416 Pa) did not produce NPH for either single phase or biphasic model of the brain parenchyma. A minimum transmantle pressure difference of 1.764 mm of Hg (235.44 Pa) was required to produce the clinical condition of NPH. This suggested that the hypothesis of a purely mechanical basis for NPH growth needs to be revised. We also showed that under equal transmantle pressure difference load, there were no significant differences between the computed ventricular volumes for biphasic and incompressible/nearly incompressible single phase model of the brain parenchyma. As a result, there was no major advantage gained by using a biphasic model for the brain parenchyma. We propose that for modelling NPH, nearly incompressible single phase model of the brain parenchyma was adequate. Single phase treatment of the brain parenchyma simplified the mathematical description of the NPH model and resulted in

  19. Aerosol Behavior Log-Normal Distribution Model.

    Energy Science and Technology Software Center (ESTSC)

    2001-10-22

HAARM3, an acronym for Heterogeneous Aerosol Agglomeration Revised Model 3, is the third program in the HAARM series developed to predict the time-dependent behavior of radioactive aerosols under postulated LMFBR accident conditions. HAARM3 was developed to include mechanisms of aerosol growth and removal which had not been accounted for in the earlier models. In addition, experimental measurements obtained on sodium oxide aerosols have been incorporated in the code. As in HAARM2, containment gas temperature, pressure, and temperature gradients normal to interior surfaces are permitted to vary with time. The effects of reduced density on sodium oxide agglomerate behavior and of nonspherical shape of particles on aerosol behavior mechanisms are taken into account, and aerosol agglomeration due to turbulent air motion is considered. Also included is a capability to calculate aerosol concentration attenuation factors and to restart problems requiring long computing times.

  20. Low-probability flood risk modeling for New York City.

    PubMed

    Aerts, Jeroen C J H; Lin, Ning; Botzen, Wouter; Emanuel, Kerry; de Moel, Hans

    2013-05-01

The devastating impact of Hurricane Sandy (2012) again showed that New York City (NYC) is one of the most vulnerable cities to coastal flooding around the globe. The low-lying areas in NYC can be flooded by nor'easter storms and North Atlantic hurricanes. The few studies that have estimated potential flood damage for NYC base their damage estimates on only a single, or a few, possible flood events. The objective of this study is to assess the full distribution of hurricane flood risk in NYC. This is done by calculating potential flood damage with a flood damage model that uses many possible storms and surge heights as input. These storms are representative of the low-probability/high-impact flood hazard faced by the city. Exceedance probability-loss curves are constructed under different assumptions about the severity of flood damage. The estimated flood damage to buildings for NYC is between US$59 and 129 million per year. The damage caused by a 1/100-year storm surge is within a range of US$2 bn-5 bn, while this is between US$5 bn and 11 bn for a 1/500-year storm surge. An analysis of flood risk in each of the five boroughs of NYC finds that Brooklyn and Queens are the most vulnerable to flooding. This study examines several uncertainties in the various steps of the risk analysis, which resulted in variations in flood damage estimations. These uncertainties include: the interpolation of flood depths; the use of different flood damage curves; and the influence of the spectra of characteristics of the simulated hurricanes.
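
    To make the exceedance probability-loss curve concrete, a small sketch of how such a curve is integrated into an expected annual damage figure; the probability-loss pairs are invented and do not reproduce the study's estimates.

    ```python
    # Expected annual damage as the area under an exceedance probability-loss curve.
    import numpy as np

    # hypothetical (exceedance probability, loss in US$ bn) pairs for storm surge events
    exceedance_prob = np.array([1/10, 1/50, 1/100, 1/500, 1/1000])
    loss_bn         = np.array([0.1,  1.0,  3.5,   8.0,   12.0])

    # trapezoidal integration of loss with respect to exceedance probability
    p, L = exceedance_prob[::-1], loss_bn[::-1]
    ead_bn = np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(p))
    print(f"expected annual damage: about US${ead_bn * 1000:.0f} million/year")
    ```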

  1. Modelling non-normal data: The relationship between the skew-normal factor model and the quadratic factor model.

    PubMed

    Smits, Iris A M; Timmerman, Marieke E; Stegeman, Alwin

    2016-05-01

    Maximum likelihood estimation of the linear factor model for continuous items assumes normally distributed item scores. We consider deviations from normality by means of a skew-normally distributed factor model or a quadratic factor model. We show that the item distributions under a skew-normal factor are equivalent to those under a quadratic model up to third-order moments. The reverse only holds if the quadratic loadings are equal to each other and within certain bounds. We illustrate that observed data which follow any skew-normal factor model can be so well approximated with the quadratic factor model that the models are empirically indistinguishable, and that the reverse does not hold in general. The choice between the two models to account for deviations of normality is illustrated by an empirical example from clinical psychology. PMID:26566696

  2. A Probability Model of Accuracy in Deception Detection Experiments.

    ERIC Educational Resources Information Center

    Park, Hee Sun; Levine, Timothy R.

    2001-01-01

    Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…

  3. Estimation of State Transition Probabilities: A Neural Network Model

    NASA Astrophysics Data System (ADS)

    Saito, Hiroshi; Takiyama, Ken; Okada, Masato

    2015-12-01

    Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
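
    The abstract describes learning a state transition probability from a state prediction error with an appropriate learning rate. A minimal sketch of that general idea (the simple delta-rule update below is an illustrative assumption, not necessarily the authors' exact rule):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_states = 3
    true_P = np.array([[0.1, 0.6, 0.3],
                       [0.5, 0.2, 0.3],
                       [0.3, 0.3, 0.4]])   # ground-truth transition matrix

    P_hat = np.full((n_states, n_states), 1.0 / n_states)  # initial estimate
    alpha = 0.05                                           # learning rate

    s = 0
    for _ in range(20000):
        s_next = rng.choice(n_states, p=true_P[s])
        # State prediction error: observed one-hot outcome minus predicted probabilities.
        delta = np.eye(n_states)[s_next] - P_hat[s]
        P_hat[s] += alpha * delta          # each row stays on the probability simplex
        s = s_next

    print(np.round(P_hat, 2))   # converges toward true_P for a suitable alpha
    ```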

  4. Probability distributed time delays: integrating spatial effects into temporal models

    PubMed Central

    2010-01-01

    Background In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly stressed when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially-resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results We present a method that incorporates spatial information by means of tailored, probability distributed time-delays. These distributions can be directly obtained from a single in silico experiment or a suitable set of in vitro experiments and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational costs and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDE) that can be used in scenarios of high molecular concentrations and low noise propagation. Conclusions Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational costs. This is of particular importance given that time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially-resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity in time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated

  5. Modelling probabilities of heavy precipitation by regional approaches

    NASA Astrophysics Data System (ADS)

    Gaal, L.; Kysely, J.

    2009-09-01

    Extreme precipitation events are associated with large negative consequences for human society, mainly as they may trigger floods and landslides. The recent series of flash floods in central Europe (affecting several isolated areas) on June 24-28, 2009, the worst one over several decades in the Czech Republic as to the number of persons killed and the extent of damage to buildings and infrastructure, is an example. Estimates of growth curves and design values (corresponding e.g. to 50-yr and 100-yr return periods) of precipitation amounts, together with their uncertainty, are important in hydrological modelling and other applications. The interest in high quantiles of precipitation distributions is also related to possible climate change effects, as climate model simulations tend to project increased severity of precipitation extremes in a warmer climate. The present study compares - in terms of Monte Carlo simulation experiments - several methods to modelling probabilities of precipitation extremes that make use of ‘regional approaches’: the estimation of distributions of extremes takes into account data in a ‘region’ (‘pooling group’), in which one may assume that the distributions at individual sites are identical apart from a site-specific scaling factor (the condition is referred to as ‘regional homogeneity’). In other words, all data in a region - often weighted in some way - are taken into account when estimating the probability distribution of extremes at a given site. The advantage is that sampling variations in the estimates of model parameters and high quantiles are to a large extent reduced compared to the single-site analysis. We focus on the ‘region-of-influence’ (ROI) method which is based on the identification of unique pooling groups (forming the database for the estimation) for each site under study. The similarity of sites is evaluated in terms of a set of site attributes related to the distributions of extremes. The issue of

  7. Probability Distribution Functions of freak-waves: nonlinear vs linear model

    NASA Astrophysics Data System (ADS)

    Kachulin, Dmitriy; Dyachenko, Alexander; Zakharov, Vladimir

    2015-04-01

    There is no doubt that estimating the probability of a freak wave appearing at the ocean surface has practical meaning. Among the different mechanisms of this phenomenon, linear dispersion and modulational instability are generally recognized. For the linear equation of water waves, the Probability Distribution Function (PDF) can be calculated analytically: it is nothing but the normal Gaussian distribution for surface elevation, or the Rayleigh distribution for absolute values of elevation. For nonlinear waves one can expect something different. In this report we consider and compare these two mechanisms for various levels of nonlinearity. We present results of numerical experiments on the calculation of Probability Distribution Functions for surface elevations of water waves for both nonlinear and linear models. Both models demonstrate a Rayleigh distribution of surface elevations; however, the dispersion of the PDF for the nonlinear case is much larger than for the linear case. This work was supported by the Grant "Wave turbulence: theory, numerical simulation, experiment" #14-22-00174 of the Russian Science Foundation. Numerical simulations were performed at the Informational Computational Center of the Novosibirsk State University.

  8. The Sequential Probability Ratio Test and Binary Item Response Models

    ERIC Educational Resources Information Center

    Nydick, Steven W.

    2014-01-01

    The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…

  9. Marijuana odor perception: studies modeled from probable cause cases.

    PubMed

    Doty, Richard L; Wudarski, Thomas; Marshall, David A; Hastings, Lloyd

    2004-04-01

    The 4th Amendment of the United States Constitution protects American citizens against unreasonable search and seizure without probable cause. Although law enforcement officials routinely rely solely on the sense of smell to justify probable cause when entering vehicles and dwellings to search for illicit drugs, the accuracy of their perception in this regard has rarely been questioned and, to our knowledge, never tested. In this paper, we present data from two empirical studies based upon actual legal cases in which the odor of marijuana was used as probable cause for search. In the first, we simulated a situation in which, during a routine traffic stop, the odor of packaged marijuana located in the trunk of an automobile was said to be detected through the driver's window. In the second, we investigated a report that marijuana odor was discernable from a considerable distance from the chimney effluence of diesel exhaust emanating from an illicit California grow room. Our findings suggest that the odor of marijuana was not reliably discernable by persons with an excellent sense of smell in either case. These studies are the first to examine the ability of humans to detect marijuana in simulated real-life situations encountered by law enforcement officials, and are particularly relevant to the issue of probable cause. PMID:15141780

  10. Technology-Enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…

  11. Valve, normally open, titanium: Pyronetics Model 1425

    NASA Technical Reports Server (NTRS)

    Avalos, E.

    1972-01-01

    An operating test series was applied to two explosive actuated, normally open, titanium valves. There were no failures. Tests included: proof pressure and external leakage test, gross leak test, post actuation leakage test, and burst pressure test.

  12. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
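
    For readers unfamiliar with the LKB formalism referenced above, here is a minimal sketch of the standard Lyman-Kutcher-Burman NTCP calculation in its generic form (the toy DVH and parameter values below are placeholders, not the values fitted in the study):

    ```python
    import numpy as np
    from scipy.stats import norm

    def gEUD(doses_gy, volumes, n):
        """Generalized EUD from a differential DVH: (sum v_i * D_i^(1/n))^n."""
        v = np.asarray(volumes, dtype=float)
        v = v / v.sum()                      # normalize partial volumes
        return (v * np.asarray(doses_gy) ** (1.0 / n)).sum() ** n

    def lkb_ntcp(doses_gy, volumes, n, m, td50):
        """Lyman-Kutcher-Burman NTCP: Phi((gEUD - TD50) / (m * TD50))."""
        eud = gEUD(doses_gy, volumes, n)
        t = (eud - td50) / (m * td50)
        return norm.cdf(t)

    # Toy differential DVH and placeholder parameters (not from the paper).
    doses = [10.0, 20.0, 30.0]      # Gy
    vols = [0.5, 0.3, 0.2]          # fractional volumes
    print(lkb_ntcp(doses, vols, n=0.05, m=0.175, td50=66.5))
    ```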

  13. Simplifying Probability Elicitation and Uncertainty Modeling in Bayesian Networks

    SciTech Connect

    Paulson, Patrick R; Carroll, Thomas E; Sivaraman, Chitra; Neorr, Peter A; Unwin, Stephen D; Hossain, Shamina S

    2011-04-16

    In this paper we contribute two methods that simplify the demands of knowledge elicitation for particular types of Bayesian networks. The first method simplifies the task of providing probabilities when the states that a random variable takes can be described by a new, fully ordered state set in which a state implies all the preceding states. The second method leverages the Dempster-Shafer theory of evidence to provide a way for the expert to express the degree of ignorance that they feel about the estimates being provided.

  14. Identification of Patient Benefit From Proton Therapy for Advanced Head and Neck Cancer Patients Based on Individual and Subgroup Normal Tissue Complication Probability Analysis

    SciTech Connect

    Jakobi, Annika; Bandurska-Luque, Anna; Stützer, Kristin; Haase, Robert; Löck, Steffen; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniela; and others

    2015-08-01

    Purpose: The purpose of this study was to determine, by treatment plan comparison along with normal tissue complication probability (NTCP) modeling, whether a subpopulation of patients with head and neck squamous cell carcinoma (HNSCC) could be identified that would gain substantial benefit from proton therapy in terms of NTCP. Methods and Materials: For 45 HNSCC patients, intensity modulated radiation therapy (IMRT) was compared to intensity modulated proton therapy (IMPT). Physical dose distributions were evaluated as well as the resulting NTCP values, using modern models for acute mucositis, xerostomia, aspiration, dysphagia, laryngeal edema, and trismus. Patient subgroups were defined based on primary tumor location. Results: Generally, IMPT reduced the NTCP values while keeping similar target coverage for all patients. Subgroup analyses revealed a higher individual reduction of swallowing-related side effects by IMPT for patients with tumors in the upper head and neck area, whereas the risk reduction of acute mucositis was more pronounced in patients with tumors in the larynx region. More patients with tumors in the upper head and neck area had a reduction in NTCP of more than 10%. Conclusions: Subgrouping can help to identify patients who may benefit more than others from the use of IMPT and, thus, can be a useful tool for a preselection of patients in the clinic where there are limited PT resources. Because the individual benefit differs within a subgroup, the relative merits should additionally be evaluated by individual treatment plan comparisons.

  15. Photon recollision probability in heterogeneous forest canopies: Compatibility with a hybrid GO model

    NASA Astrophysics Data System (ADS)

    Mõttus, Matti; Stenberg, Pauline; Rautiainen, Miina

    2007-02-01

    Photon recollision probability, or the probability by which a photon scattered from a phytoelement in the canopy will interact within the canopy again, has previously been shown to approximate well the fractions of radiation scattered and absorbed by homogeneous plant covers. To test the applicability of the recollision probability theory to more complicated canopy structures, a set of modeled stands was generated using allometric relations for Scots pine trees growing in central Finland. A hybrid geometric-optical model (FRT, or the Kuusk-Nilson model) was used to simulate the reflectance and transmittance of the modeled forests consisting of ellipsoidal tree crowns and, on the basis of the simulations, the recollision probability (p) was calculated for the canopies. As the recollision probability theory assumes energy conservation, a method to check and ensure energy conservation in the model was first developed. The method enabled matching the geometric-optical and two-stream submodels of the hybrid FRT model, and more importantly, allowed calculation of the recollision probability from model output. Next, to assess the effect of canopy structure on the recollision probability, the obtained p-values were compared to those calculated for structureless (homogeneous) canopies with similar effective LAI using a simple two-stream radiation transfer model. Canopy structure was shown to increase the recollision probability, implying that structured canopies absorb more efficiently the radiation interacting with the canopy, and it also changed the escape probabilities for different scattering orders. Most importantly, the study demonstrated that the concept of recollision probability is coherent with physically based canopy reflectance models which use the classical radiative transfer theory. Furthermore, it was shown that as a first approximation, the recollision probability can be considered to be independent of wavelength. Finally, different algorithms for

  16. Probability Weighting Functions Derived from Hyperbolic Time Discounting: Psychophysical Models and Their Individual Level Testing

    PubMed Central

    Takemura, Kazuhisa; Murakami, Hajime

    2016-01-01

    A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 − k log p)^(−1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed. PMID:27303338
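
    As a quick numerical illustration of the weighting function quoted above, w(p) = (1 − k log p)^(−1); the value of k below is an arbitrary choice for illustration, not an estimate from the study:

    ```python
    import numpy as np

    def w(p, k=1.0):
        """Probability weighting function w(p) = 1 / (1 - k * log(p)), 0 < p <= 1."""
        p = np.asarray(p, dtype=float)
        return 1.0 / (1.0 - k * np.log(p))

    probs = np.array([0.01, 0.1, 0.3, 0.5, 0.9, 1.0])
    for p, wp in zip(probs, w(probs, k=1.0)):
        print(f"p = {p:4.2f}  ->  w(p) = {wp:.3f}")   # w(1) = 1; small p are overweighted relative to k*|log p|
    ```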

  17. Probability Weighting Functions Derived from Hyperbolic Time Discounting: Psychophysical Models and Their Individual Level Testing.

    PubMed

    Takemura, Kazuhisa; Murakami, Hajime

    2016-01-01

    A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulized the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed. PMID:27303338

  18. Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz

    An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…

  19. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  20. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2007-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  1. Modeling Outcomes from Probability Tasks: Sixth Graders Reasoning Together

    ERIC Educational Resources Information Center

    Alston, Alice; Maher, Carolyn

    2003-01-01

    This report considers the reasoning of sixth grade students as they explore problem tasks concerning the fairness of dice games. The particular focus is the students' interactions, verbal and non-verbal, as they build and justify representations that extend their basic understanding of number combinations in order to model the outcome set of a…

  2. Some aspects of statistical modeling of human-error probability

    SciTech Connect

    Prairie, R. R.

    1982-01-01

    Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several on-going efforts in the US and elsewhere with the purpose of modeling human error such that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. An effort that is described here uses the HRA (event tree) to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of his fault tree to which human error could contribute. He then solicits the HF analyst to do an HRA on this element.

  3. Physical model assisted probability of detection in nondestructive evaluation

    SciTech Connect

    Li, M.; Meeker, W. Q.; Thompson, R. B.

    2011-06-23

    Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.

  4. Physical Model Assisted Probability of Detection in Nondestructive Evaluation

    NASA Astrophysics Data System (ADS)

    Li, M.; Meeker, W. Q.; Thompson, R. B.

    2011-06-01

    Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
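
    The physical model-assisted analyses described above build on the widely used "signal response versus flaw size" regression framework for probability of detection (POD). A generic sketch of that framework (the regression coefficients, noise level, and decision threshold below are illustrative assumptions, not values from the paper):

    ```python
    import numpy as np
    from scipy.stats import norm

    # Generic signal-response POD model (illustrative values only):
    # log signal = beta0 + beta1 * log flaw size + Gaussian noise.
    beta0, beta1 = 0.2, 1.1   # assumed regression coefficients
    sigma = 0.35              # assumed residual scatter of the log signal
    a_dec = 1.0               # assumed log decision threshold

    def pod(log_a):
        """POD(a) = P(log signal exceeds the decision threshold | flaw size a)."""
        mean_signal = beta0 + beta1 * np.asarray(log_a)
        return norm.sf(a_dec, loc=mean_signal, scale=sigma)

    for a in [0.3, 0.6, 0.9, 1.2, 1.5]:
        print(f"log flaw size {a:.1f}: POD = {pod(a):.2f}")
    ```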

  5. Probabilistic Independence Networks for Hidden Markov Probability Models

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic; Heckerman, David; Jordan, Michael I

    1996-01-01

    In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs.
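
    As context for the claim that the forward-backward recursion is a special case of general PIN inference, here is a minimal sketch of the scaled forward pass itself (the transition, emission, and initial distributions are toy values):

    ```python
    import numpy as np

    # Toy two-state HMM (illustrative parameters).
    A = np.array([[0.7, 0.3],     # state transition matrix
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],     # emission probabilities per state
                  [0.2, 0.8]])
    pi = np.array([0.5, 0.5])     # initial state distribution

    def forward_loglik(obs):
        """Log-likelihood of an observation sequence via the scaled forward recursion."""
        alpha = pi * B[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]   # predict, then weight by the emission
            loglik += np.log(alpha.sum())
            alpha /= alpha.sum()            # rescale to avoid underflow
        return loglik

    print(forward_loglik([0, 0, 1, 0, 1]))
    ```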

  6. A simulation model for estimating probabilities of defects in welds

    SciTech Connect

    Chapman, O.J.V.; Khaleel, M.A.; Simonen, F.A.

    1996-12-01

    In recent work for the US Nuclear Regulatory Commission in collaboration with Battelle Pacific Northwest National Laboratory, Rolls-Royce and Associates, Ltd., has adapted an existing model for piping welds to address welds in reactor pressure vessels. This paper describes the flaw estimation methodology as it applies to flaws in reactor pressure vessel welds (but not flaws in base metal or flaws associated with the cladding process). Details of the associated computer software (RR-PRODIGAL) are provided. The approach uses expert elicitation and mathematical modeling to simulate the steps in manufacturing a weld and the errors that lead to different types of weld defects. The defects that may initiate in weld beads include center cracks, lack of fusion, slag, pores with tails, and cracks in heat affected zones. Various welding processes are addressed including submerged metal arc welding. The model simulates the effects of both radiographic and dye penetrant surface inspections. Output from the simulation gives occurrence frequencies for defects as a function of both flaw size and flaw location (surface connected and buried flaws). Numerical results are presented to show the effects of submerged metal arc versus manual metal arc weld processes.

  7. A Comparison Of The Mycin Model For Reasoning Under Uncertainty To A Probability Based Model

    NASA Astrophysics Data System (ADS)

    Neapolitan, Richard E.

    1986-03-01

    Rule-based expert systems are those in which a certain number of IF-THEN rules are assumed to hold. Based on the verity of some assertions, the rules deduce new conclusions. In many cases, neither the rules nor the assertions are known with certainty. The system must then be able to obtain a measure of partial belief in the conclusion based upon measures of partial belief in the assertions and the rule. A problem arises when two or more rules (items of evidence) argue for the same conclusion. As has been proven, certain assumptions concerning the independence of the two items of evidence are necessary before the certainties can be combined. In the current paper, it is shown how the well-known MYCIN model combines the certainties from two items of evidence. The validity of the model is then proven based on the model's assumptions of independence of evidence. The assumptions are that the evidence must be independent in the whole space, in the space of the conclusion, and in the space of the complement of the conclusion. Next a probability-based model is described and compared to the MYCIN model. It is proven that the probabilistic assumptions for this model are weaker (independence is necessary only in the space of the conclusion and the space of the complement of the conclusion), and therefore more appealing. An example is given to show how the added assumption in the MYCIN model is, in fact, the most restrictive assumption. It is also proven that, when two rules argue for the same conclusion, the combinatoric method in a MYCIN version of the probability-based model yields a higher combined certainty than that in the MYCIN model. It is finally concluded that the probability-based model, in light of the comparison, is the better choice.
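
    To make the comparison concrete, here is a sketch of the two combination rules being contrasted: the MYCIN certainty-factor rule for two positive pieces of evidence, and a probabilistic (odds-multiplication) update that assumes independence only given the conclusion and given its complement. This is a generic illustration, not the paper's exact notation:

    ```python
    def mycin_combine(cf1, cf2):
        """MYCIN combination of two positive certainty factors for the same conclusion."""
        return cf1 + cf2 * (1.0 - cf1)

    def bayes_combine(prior, l1, l2):
        """Posterior P(H | e1, e2) assuming e1, e2 independent given H and given not-H.
        l1, l2 are likelihood ratios P(e_i | H) / P(e_i | not H)."""
        prior_odds = prior / (1.0 - prior)
        post_odds = prior_odds * l1 * l2
        return post_odds / (1.0 + post_odds)

    print(mycin_combine(0.6, 0.5))              # 0.8
    print(bayes_combine(0.3, l1=4.0, l2=3.0))   # ~0.84
    ```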

  8. A simplified model for the assessment of the impact probability of fragments.

    PubMed

    Gubinelli, Gianfilippo; Zanelli, Severino; Cozzani, Valerio

    2004-12-31

    A model was developed for the assessment of fragment impact probability on a target vessel, following the collapse and fragmentation of a primary vessel due to internal pressure. The model provides the probability of impact of a fragment with defined shape, mass and initial velocity on a target of a known shape and at a given position with respect to the source point. The model is based on the ballistic analysis of the fragment trajectory and on the determination of impact probabilities by the analysis of initial direction of fragment flight. The model was validated using available literature data. PMID:15601611

  9. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a -Dst > 880 nT (greater than Carrington) but a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.

  10. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42,2.41] times per century; a 100-yr magnetic storm is identified as having a −Dst≥880 nT (greater than Carrington) but a wide 95% confidence interval of [490,1187] nT.
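
    A generic sketch of the kind of calculation described in these two records: fit a log-normal distribution to storm maxima by maximum likelihood and convert the tail probability into an occurrence rate per century. The synthetic data and catalogue length below are placeholders, not the -Dst catalogue used in the study:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Placeholder "storm maxima" (nT): synthetic, log-normally distributed values.
    dst_maxima = rng.lognormal(mean=np.log(120.0), sigma=0.8, size=300)
    events_per_century = len(dst_maxima) / 56.0 * 100.0   # assumed 56-yr catalogue

    # Maximum-likelihood log-normal fit (location fixed at zero).
    shape, loc, scale = stats.lognorm.fit(dst_maxima, floc=0)

    # Rate of storms exceeding 850 nT = tail probability times event rate.
    p_tail = stats.lognorm.sf(850.0, shape, loc=loc, scale=scale)
    print(f"~{p_tail * events_per_century:.2f} storms > 850 nT per century")
    ```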

  11. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
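
    One way to reproduce the flavor of this comparison numerically is to average the conditional rupture probability over the equilibrium (stationary renewal) distribution of elapsed time since the last event. The lognormal recurrence model and parameter values below are illustrative assumptions, not the paper's choices:

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative renewal model: lognormal recurrence intervals (not the paper's parameters).
    mean_ri = 200.0                      # mean recurrence interval, years
    aperiodicity = 0.5                   # coefficient of variation
    sigma = np.sqrt(np.log(1.0 + aperiodicity**2))
    mu = np.log(mean_ri) - 0.5 * sigma**2
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))

    T = 50.0                             # forecast duration, years

    # Time-independent Poisson approximation.
    p_poisson = 1.0 - np.exp(-T / mean_ri)

    # Renewal probability with the date of the last event unknown: average the
    # conditional probability of an event in the next T years over the equilibrium
    # distribution of elapsed time, whose density is (1 - F(t)) / mean_ri; the
    # average reduces to the integral of [F(t + T) - F(t)] dt / mean_ri.
    t = np.linspace(0.0, 10.0 * mean_ri, 20001)
    p_renewal = np.trapz(dist.cdf(t + T) - dist.cdf(t), t) / mean_ri

    print(f"Poisson: {p_poisson:.3f}   renewal, last event unknown: {p_renewal:.3f}")
    ```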

  12. Emptiness Formation Probability of the Six-Vertex Model and the Sixth Painlevé Equation

    NASA Astrophysics Data System (ADS)

    Kitaev, A. V.; Pronko, A. G.

    2016-07-01

    We show that the emptiness formation probability of the six-vertex model with domain wall boundary conditions at its free-fermion point is a τ-function of the sixth Painlevé equation. Using this fact we derive asymptotics of the emptiness formation probability in the thermodynamic limit.

  13. Cold and hot cognition: quantum probability theory and realistic psychological modeling.

    PubMed

    Corr, Philip J

    2013-06-01

    Typically, human decision making is emotionally "hot" and does not conform to "cold" classical probability (CP) theory. As quantum probability (QP) theory emphasises order, context, superposition states, and nonlinear dynamic effects, one of its major strengths may be its power to unify formal modeling and realistic psychological theory (e.g., information uncertainty, anxiety, and indecision, as seen in the Prisoner's Dilemma). PMID:23673029

  14. Inferring Conjunctive Probabilities from Noisy Samples: Evidence for the Configural Weighted Average Model

    ERIC Educational Resources Information Center

    Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan

    2014-01-01

    Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…

  15. Discrete Latent Markov Models for Normally Distributed Response Data

    ERIC Educational Resources Information Center

    Schmittmann, Verena D.; Dolan, Conor V.; van der Maas, Han L. J.; Neale, Michael C.

    2005-01-01

    Van de Pol and Langeheine (1990) presented a general framework for Markov modeling of repeatedly measured discrete data. We discuss analogical single indicator models for normally distributed responses. In contrast to discrete models, which have been studied extensively, analogical continuous response models have hardly been considered. These…

  16. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323

  17. Application of the response probability density function technique to biodynamic models.

    PubMed

    Hershey, R L; Higgins, T H

    1978-01-01

    A method has been developed, which we call the "response probability density function technique," which has applications in predicting the probability of injury in a wide range of biodynamic situations. The method, which was developed in connection with sonic boom damage prediction, utilized the probability density function of the excitation force and the probability density function of the sensitivity of the material being acted upon. The method is especially simple to use when both these probability density functions are lognormal. Studies thus far have shown that the stresses from sonic booms, as well as the strengths of glass and mortars, are distributed lognormally. Some biodynamic processes also have lognormal distributions and are, therefore, amenable to modeling by this technique. In particular, this paper discusses the application of the response probability density function technique to the analysis of the thoracic response to air blast and the prediction of skull fracture from head impact. PMID:623590
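
    When both the excitation and the sensitivity are lognormal, as the abstract notes, the "response probability density function technique" reduces to a classic stress-strength interference calculation with a closed form. A generic sketch (the parameter values are invented for illustration, not taken from the paper):

    ```python
    import numpy as np
    from scipy.stats import norm

    # Lognormal excitation (stress) and lognormal sensitivity (strength):
    # illustrative log-space parameters, not values from the paper.
    mu_stress, sigma_stress = np.log(2.0), 0.40      # e.g. imposed load
    mu_strength, sigma_strength = np.log(6.0), 0.30  # e.g. failure threshold

    # P(stress > strength): in log space the difference of two normals is normal,
    # so the exceedance probability has a closed form.
    z = (mu_stress - mu_strength) / np.hypot(sigma_stress, sigma_strength)
    print(f"Probability of damage/injury: {norm.cdf(z):.4f}")
    ```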

  18. Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George

    2012-01-01

    Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…

  19. Normal Tissue Complication Probability Analysis of Acute Gastrointestinal Toxicity in Cervical Cancer Patients Undergoing Intensity Modulated Radiation Therapy and Concurrent Cisplatin

    SciTech Connect

    Simpson, Daniel R.; Song, William Y.; Moiseenko, Vitali; Rose, Brent S.; Yashar, Catheryn M.; Mundt, Arno J.; Mell, Loren K.

    2012-05-01

    Purpose: To test the hypothesis that increased bowel radiation dose is associated with acute gastrointestinal (GI) toxicity in cervical cancer patients undergoing concurrent chemotherapy and intensity-modulated radiation therapy (IMRT), using a previously derived normal tissue complication probability (NTCP) model. Methods: Fifty patients with Stage I-III cervical cancer undergoing IMRT and concurrent weekly cisplatin were analyzed. Acute GI toxicity was graded using the Radiation Therapy Oncology Group scale, excluding upper GI events. A logistic model was used to test correlations between acute GI toxicity and bowel dosimetric parameters. The primary objective was to test the association between Grade ≥2 GI toxicity and the volume of bowel receiving ≥45 Gy (V45) using the logistic model. Results: Twenty-three patients (46%) had Grade ≥2 GI toxicity. The mean (SD) V45 was 143 mL (99). The mean V45 values for patients with and without Grade ≥2 GI toxicity were 176 vs. 115 mL, respectively. Twenty patients (40%) had V45 >150 mL. The proportion of patients with Grade ≥2 GI toxicity with and without V45 >150 mL was 65% vs. 33% (p = 0.03). Logistic model parameter estimates V50 and γ were 161 mL (95% confidence interval [CI] 60-399) and 0.31 (95% CI 0.04-0.63), respectively. On multivariable logistic regression, increased V45 was associated with an increased odds of Grade ≥2 GI toxicity (odds ratio 2.19 per 100 mL, 95% CI 1.04-4.63, p = 0.04). Conclusions: Our results support the hypothesis that increasing bowel V45 is correlated with increased GI toxicity in cervical cancer patients undergoing IMRT and concurrent cisplatin. Reducing bowel V45 could reduce the risk of Grade ≥2 GI toxicity by approximately 50% per 100 mL of bowel spared.
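
    The abstract reports a logistic NTCP model with parameters V50 and γ. One common parameterization of such a volume-response model is the log-logistic form sketched below; this specific functional form and the example volumes are assumptions for illustration, not necessarily the authors' exact equation:

    ```python
    def ntcp_logistic(v45_ml, v50_ml=161.0, gamma=0.31):
        """Log-logistic NTCP as a function of bowel V45 (one common parameterization):
        NTCP = 1 / (1 + (V50 / V)^(4 * gamma))."""
        return 1.0 / (1.0 + (v50_ml / v45_ml) ** (4.0 * gamma))

    for v in (50.0, 115.0, 176.0, 300.0):
        print(f"V45 = {v:5.0f} mL  ->  predicted NTCP = {ntcp_logistic(v):.2f}")
    ```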

  20. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently such as Winter Wren and Acadian Flycatcher had high detection probabilities (about 90%) and species that call infrequently such as Pileated Woodpecker had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.

  1. Skew-normal antedependence models for skewed longitudinal data

    PubMed Central

    Chang, Shu-Ching; Zimmerman, Dale L.

    2016-01-01

    Antedependence models, also known as transition models, have proven to be useful for longitudinal data exhibiting serial correlation, especially when the variances and/or same-lag correlations are time-varying. Statistical inference procedures associated with normal antedependence models are well-developed and have many nice properties, but they are not appropriate for longitudinal data that exhibit considerable skewness. We propose two direct extensions of normal antedependence models to skew-normal antedependence models. The first is obtained by imposing antedependence on a multivariate skew-normal distribution, and the second is a sequential autoregressive model with skew-normal innovations. For both models, necessary and sufficient conditions for pth-order antedependence are established, and likelihood-based estimation and testing procedures for models satisfying those conditions are developed. The procedures are applied to simulated data and to real data from a study of cattle growth. PMID:27279663

  2. A Discrete SIRS Model with Kicked Loss of Immunity and Infection Probability

    NASA Astrophysics Data System (ADS)

    Paladini, F.; Renna, I.; Renna, L.

    2011-03-01

    A discrete-time deterministic epidemic model is proposed with the aim of reproducing the behaviour observed in the incidence of real infectious diseases, such as oscillations and irregularities. For this purpose we introduce, in a naïve discrete-time SIRS model, seasonal variability in the loss of immunity and in the infection probability, modelled by sequences of kicks. Restrictive assumptions are made on the parameters of the models, in order to guarantee that the transitions are determined by true probabilities, so that comparisons with stochastic discrete-time predictions can also be provided. Numerical simulations show that the characteristics of real infectious diseases can be adequately modelled.
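
    A minimal sketch of a discrete-time SIRS iteration with periodically "kicked" infection and immunity-loss probabilities, in the spirit described above (the update equations and all parameter values are illustrative assumptions, not the authors' exact model):

    ```python
    import numpy as np

    N = 10_000                               # population size
    beta0, gamma, delta0 = 0.3, 0.1, 0.02    # base infection, recovery, immunity-loss probabilities
    S, I, R = N - 10, 10, 0

    history = []
    for t in range(520):                     # weekly steps, 10 years
        # Seasonal "kicks": briefly raise infection and immunity-loss probabilities once a year.
        kick = 5.0 if t % 52 == 0 else 1.0
        beta, delta = beta0 * kick, delta0 * kick

        new_inf = min(S, beta * S * I / N)   # expected new infections this step
        new_rec = gamma * I                  # recoveries
        new_sus = delta * R                  # loss of immunity

        S += new_sus - new_inf
        I += new_inf - new_rec
        R += new_rec - new_sus
        history.append(I)

    print(f"peak infected: {max(history):.0f}, final infected: {history[-1]:.0f}")
    ```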

  3. General properties of different models used to predict normal tissue complications due to radiation

    SciTech Connect

    Kuperman, V. Y.

    2008-11-15

    In the current study the author analyzes general properties of three different models used to predict normal tissue complications due to radiation: (1) Surviving fraction of normal cells in the framework of the linear quadratic (LQ) equation for cell kill, (2) the Lyman-Kutcher-Burman (LKB) model for normal tissue complication probability (NTCP), and (3) generalized equivalent uniform dose (gEUD). For all considered cases the author assumes fixed average dose to an organ of interest. The author's goal is to establish whether maximizing dose uniformity in the irradiated normal tissues is radiobiologically beneficial. Assuming that NTCP increases with increasing overall cell kill, it is shown that NTCP in the LQ model is maximized for uniform dose. Conversely, NTCP in the LKB and gEUD models is always smaller for a uniform dose to a normal organ than that for a spatially varying dose if parameter n in these models is small (i.e., n<1). The derived conflicting properties of the considered models indicate the need for more studies before these models can be utilized clinically for plan evaluation and/or optimization of dose distributions. It is suggested that partial-volume irradiation can be used to establish the validity of the considered models.

  4. Normal seasonal variations for atmospheric radon concentration: a sinusoidal model.

    PubMed

    Hayashi, Koseki; Yasuoka, Yumi; Nagahama, Hiroyuki; Muto, Jun; Ishikawa, Tetsuo; Omori, Yasutaka; Suzuki, Toshiyuki; Homma, Yoshimi; Mukai, Takahiro

    2015-01-01

    Anomalous radon readings in air have been reported before earthquake activity. However, careful measurements of atmospheric radon concentrations during a normal period are required to identify anomalous variations in a precursor period. In this study, we obtained radon concentration data for 5 years (2003-2007) that can be considered a normal period and compared them with data from the precursory period of 2008 until March 2011, when the 2011 Tohoku-Oki Earthquake occurred. Then, we established a model for seasonal variation by fitting a sinusoidal model to the radon concentration data during the normal period, considering that the seasonal variation was affected by atmospheric turbulence. By determining the amplitude in the sinusoidal model, the normal variation of the radon concentration can be estimated. Thus, the results of this method can be applied to identify anomalous radon variations before an earthquake. PMID:25464051
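
    A generic sketch of fitting such a sinusoidal seasonal model to a daily radon series by least squares (the synthetic data, annual period, and parameter names are illustrative assumptions, not the station data used in the study):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)

    # Synthetic daily radon series over 5 "normal" years (Bq/m^3), for illustration only.
    t = np.arange(5 * 365)
    radon = 18.0 + 6.0 * np.sin(2 * np.pi * t / 365.0 + 1.2) + rng.normal(0.0, 2.0, size=t.size)

    def seasonal(t, c0, amp, phase):
        """Sinusoidal seasonal model: C(t) = C0 + A * sin(2*pi*t/365 + phase)."""
        return c0 + amp * np.sin(2 * np.pi * t / 365.0 + phase)

    params, _ = curve_fit(seasonal, t, radon, p0=[radon.mean(), radon.std(), 0.0])
    c0, amp, phase = params
    print(f"baseline {c0:.1f}, amplitude {amp:.1f}, phase {phase:.2f} rad")
    # The fitted amplitude bounds the normal seasonal variation; excursions well
    # outside C0 +/- |A| (plus noise) would be flagged as anomalous.
    ```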

  5. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
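
    A sketch of a removal-model likelihood for counts split into the 3-, 2-, and 5-minute intervals mentioned above, assuming a constant per-minute detection (singing) rate; the counts below are made up, and the constant-rate assumption is the simplest version of this class of model rather than the authors' full formulation:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # First-detection counts in the 0-3, 3-5, and 5-10 minute intervals (made-up data).
    counts = np.array([46, 14, 18])
    breaks = np.array([0.0, 3.0, 5.0, 10.0])   # minutes

    def neg_loglik(rate):
        """Multinomial likelihood: P(first detected in interval j) is proportional to
        exp(-rate*t_{j-1}) - exp(-rate*t_j); birds never detected are unobserved."""
        p_interval = np.exp(-rate * breaks[:-1]) - np.exp(-rate * breaks[1:])
        p_detect = 1.0 - np.exp(-rate * breaks[-1])
        # Condition on detection, since only detected birds appear in the counts.
        return -np.sum(counts * np.log(p_interval / p_detect))

    res = minimize_scalar(neg_loglik, bounds=(1e-4, 5.0), method="bounded")
    rate = res.x
    print(f"per-minute detection rate: {rate:.3f}")
    print(f"overall detection probability over 10 min: {1 - np.exp(-rate * 10):.2f}")
    ```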

  6. Probable slow slips in the mid-crust of Hsinchu, northwestern Taiwan: Temporal correlation between normal faulting earthquakes and relative uplift

    NASA Astrophysics Data System (ADS)

    Pu, H. C.; Lin, C. H.

    2016-05-01

    To investigate the seismic behavior of crustal deformation, we deployed a dense seismic network in the Hsinchu area of northwestern Taiwan between 2004 and 2006. Based on abundant local micro-earthquakes recorded by this network, we successfully determined 274 focal mechanisms among ∼1300 seismic events. Interestingly, the dominant energy release repeatedly alternated between strike-slip and normal faulting mechanisms within the two years. In the continuous GPS data, the strike-slip and normal faulting earthquakes were accompanied, respectively, by surface slip along N60°E and by uplift. These phenomena probably resulted from slow uplift in the mid-crust beneath the northwestern Taiwan area. When deep slow uplift was active below 10 km depth along either the boundary fault or a blind fault, the push of the rising material would simultaneously produce normal faulting earthquakes at shallow depths (0-10 km) and slight surface uplift. When the deep slow uplift stopped, strike-slip faulting earthquakes again dominated, as usual, owing to the strong horizontal plate convergence across Taiwan. Since normal faulting earthquakes repeatedly dominated every 6 or 7 months between 2004 and 2006, we conclude that slow slip events in the mid-crust frequently released accumulated tectonic stress in the Hsinchu area.

  7. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
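
    As a rough illustration of the likelihood ingredient described above (not the paper's full Bayesian mixed model), the sketch below evaluates a zero-inflated Beta log-likelihood for scaled values in [0, 1); the data and parameters are hypothetical.

```python
# Minimal sketch (hypothetical parametrization): zero-inflated Beta
# log-likelihood, where an exact zero stands for the effective-zero
# probability of collision after transformation and rescaling.
import numpy as np
from scipy.stats import beta

def zib_loglik(y, pi_zero, a, b):
    """y in [0, 1); exact zeros occur with probability pi_zero."""
    y = np.asarray(y)
    zeros = y == 0.0
    ll = zeros.sum() * np.log(pi_zero)
    ll += np.sum(np.log1p(-pi_zero) + beta.logpdf(y[~zeros], a, b))
    return ll

y = np.array([0.0, 0.0, 0.12, 0.35, 0.41, 0.07])
print(zib_loglik(y, pi_zero=0.3, a=1.5, b=4.0))
```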

  8. A likelihood reformulation method in non-normal random effects models.

    PubMed

    Liu, Lei; Yu, Zhangsheng

    2008-07-20

    In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing a standard normal density, we reformulate the likelihood conditional on the non-normal random effects to that conditional on the normal random effects. Gaussian quadrature technique, conveniently implemented in SAS Proc NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding similar estimates to the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from Clayton copula. Simulations and applications are presented to illustrate our method. PMID:18038445
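
    The reformulation trick can be written compactly. The sketch below uses assumed notation (f for the conditional density of the data, g for the true random-effect density, φ for the standard normal density, and x_k, w_k for Gauss-Hermite nodes and weights); it illustrates the idea rather than reproducing the paper's exact derivation.

```latex
% Multiply and divide by the standard normal density \phi, then apply
% Gauss-Hermite quadrature to the integral against the normal density.
\begin{align*}
L_i(\theta)
  &= \int f(y_i \mid b_i; \theta)\, g(b_i)\, \mathrm{d}b_i
   = \int \underbrace{f(y_i \mid b_i; \theta)\,
        \frac{g(b_i)}{\phi(b_i)}}_{h(b_i)} \,\phi(b_i)\, \mathrm{d}b_i \\
  &\approx \sum_{k=1}^{K} \frac{w_k}{\sqrt{\pi}}\;
        h\!\left(\sqrt{2}\, x_k\right).
\end{align*}
```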

  9. Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice

    NASA Astrophysics Data System (ADS)

    Chen, Haiyan; Zhang, Fuji

    2013-08-01

    In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give the exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with certain given boundary condition, which was found from numerical evidence by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990)], 10.1051/jphys:0199000510110107700 but without a proof.

  10. Series Expansion Method for Asymmetrical Percolation Models with Two Connection Probabilities

    NASA Astrophysics Data System (ADS)

    Inui, Norio; Komatsu, Genichi; Kameoka, Koichi

    2000-01-01

    In order to study the solvability of the percolation model based on Guttmann and Enting's conjecture, the power series for the percolation probability in the form of ∑_n H_n(q) p^n is examined. Although the power series can in principle be obtained by inverting the transfer matrix, it is very hard to compute the inverse of a matrix whose elements are many complex polynomials. We introduce a new series expansion technique which does not require inversion of the transfer matrix. Using the new procedure, we derive the series for the asymmetrical percolation probability, which includes the isotropic percolation probability as a special case.

  11. An empirical model for earthquake probabilities in the San Francisco Bay region, California, 2002-2031

    USGS Publications Warehouse

    Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.

    2003-01-01

    The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥ 6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥ 6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥ 6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the

  12. Sensitivity Analysis and Assessment of Prior Model Probabilities in MLBMA with Application to Unsaturated Fractured Tuff

    SciTech Connect

    Ye, Ming; Neuman, Shlomo P.; Meyer, Philip D.; Pohlmann, Karl

    2005-12-24

    Previous application of Maximum Likelihood Bayesian Model Averaging (MLBMA, Neuman [2002, 2003]) to alternative variogram models of log air permeability data in fractured tuff has demonstrated its effectiveness in quantifying conceptual model uncertainty and enhancing predictive capability [Ye et al., 2004]. A question remained how best to ascribe prior probabilities to competing models. In this paper we examine the extent to which lead statistics of posterior log permeability predictions are sensitive to prior probabilities of seven corresponding variogram models. We then explore the feasibility of quantifying prior model probabilities by (a) maximizing Shannon's entropy H [Shannon, 1948] subject to constraints reflecting a single analyst's (or a group of analysts') prior perception about how plausible each alternative model (or a group of models) is relative to others, and (b) selecting a posteriori the most likely among such maxima corresponding to alternative prior perceptions of various analysts or groups of analysts. Another way to select among alternative prior model probability sets, which however is not guaranteed to yield optimum predictive performance (though it did so in our example) and would therefore not be our preferred option, is a min-max approach according to which one selects a priori the set corresponding to the smallest value of maximum entropy. Whereas maximizing H subject to the prior perception of a single analyst (or group) maximizes the potential for further information gain through conditioning, selecting the smallest among such maxima gives preference to the most informed prior perception among those of several analysts (or groups). We use the same variogram models and log permeability data as Ye et al. [2004] to demonstrate that our proposed approach yields the least amount of posterior entropy (residual uncertainty after conditioning) and enhances predictive model performance as compared to (a) the non-informative neutral case in
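
    The entropy-maximization step can be illustrated with a small numerical sketch. The constraints below are hypothetical stand-ins for an analyst's plausibility judgements, not those used by the authors.

```python
# Minimal sketch (hypothetical constraints): maximize Shannon entropy
# over prior probabilities of 7 candidate models subject to one
# analyst's ordinal plausibility judgements.
import numpy as np
from scipy.optimize import minimize

K = 7
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq",   "fun": lambda p: np.sum(p) - 1.0},
    # assumed perceptions: model 0 at least twice as plausible as
    # model 1; model 6 no more plausible than model 5
    {"type": "ineq", "fun": lambda p: p[0] - 2.0 * p[1]},
    {"type": "ineq", "fun": lambda p: p[5] - p[6]},
]
res = minimize(neg_entropy, np.full(K, 1.0 / K), method="SLSQP",
               bounds=[(1e-6, 1.0)] * K, constraints=constraints)
print(res.x, -res.fun)   # prior probabilities and their entropy
```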

  13. Modeling and simulation of normal and hemiparetic gait

    NASA Astrophysics Data System (ADS)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper focuses on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Constraint forces based on a Rayleigh dissipation function, which accounts for the effect of the tissues on gait, are included. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulation; anthropometric data for children could also be used, provided existing anthropometric tables are consulted. Validation of these models includes simulations of passive dynamic walking on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal gait, especially hemiparetic gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.

  14. Comparison of K-Means Clustering with Linear Probability Model, Linear Discriminant Function, and Logistic Regression for Predicting Two-Group Membership.

    ERIC Educational Resources Information Center

    So, Tak-Shing Harry; Peng, Chao-Ying Joanne

    This study compared the accuracy of predicting two-group membership obtained from K-means clustering with those derived from linear probability modeling, linear discriminant function, and logistic regression under various data properties. Multivariate normally distributed populations were simulated based on combinations of population proportions,…

  15. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    SciTech Connect

    Glosup, J.G.; Axelrod, M.C.

    1994-08-12

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method. The problem involves a probability model for underwater noise due to distant shipping.

  16. Suitable models for face geometry normalization in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sadeghi, Hamid; Raie, Abolghasem A.

    2015-01-01

    Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches. Furthermore, it is a crucial challenge in appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of each facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes geometric variability of facial images and consequently, appearance feature extraction methods can be accurately utilized to represent facial images. Thus, some expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of an effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement compared to several state-of-the-art methods in facial expression recognition. Moreover, utilizing the models of facial expressions that have larger mouth and eye regions gives higher accuracy, owing to the importance of these regions in facial expression.

  17. A neuronal model of vowel normalization and representation.

    PubMed

    Sussman, H M

    1986-05-01

    A speculative neuronal model for vowel normalization and representation is offered. The neurophysiological basis for the premise is the "combination-sensitive" neuron recently documented in the auditory cortex of the mustached bat (N. Suga, W. E. O'Neill, K. Kujirai, and T. Manabe, 1983, Journal of Neurophysiology, 49, 1573-1627). These neurons are specialized to respond to either precise frequency, amplitude, or time differentials between specific harmonic components of the pulse-echo pair comprising the biosonar signal of the bat. Such multiple frequency comparisons lie at the heart of human vowel perception and categorization. A representative vowel normalization algorithm is used to illustrate the operational principles of the neuronal model in accomplishing both normalization and categorization in early infancy. The neurological precursors to a phonemic vocalic system are described based on the neurobiological events characterizing regressive neurogenesis. PMID:3013360

  18. Analysis of a semiclassical model for rotational transition probabilities. [in highly nonequilibrium flow of diatomic molecules

    NASA Technical Reports Server (NTRS)

    Deiwert, G. S.; Yoshikawa, K. K.

    1975-01-01

    A semiclassical model proposed by Pearson and Hansen (1974) for computing collision-induced transition probabilities in diatomic molecules is tested by the direct-simulation Monte Carlo method. Specifically, this model is described by point centers of repulsion for collision dynamics, and the resulting classical trajectories are used in conjunction with the Schroedinger equation for a rigid-rotator harmonic oscillator to compute the rotational energy transition probabilities necessary to evaluate the rotation-translation exchange phenomena. It is assumed that a single, average energy spacing exists between the initial state and possible final states for a given collision.

  19. Fitting the distribution of dry and wet spells with alternative probability models

    NASA Astrophysics Data System (ADS)

    Deni, Sayang Mohd; Jemain, Abdul Aziz

    2009-06-01

    The development of the rainfall occurrence model is greatly important not only for data-generation purposes, but also in providing informative resources for future advancements in water-related sectors, such as water resource management and the hydrological and agricultural sectors. Various kinds of probability models have been introduced for sequences of dry (wet) days by previous researchers in the field. Based on the probability models developed previously, the present study aims to propose three types of mixture distributions, namely, the mixture of two log series distributions (LSD), the mixture of the log series and Poisson distributions (MLPD), and the mixture of the log series and geometric distributions (MLGD), as alternative probability models to describe the distribution of dry (wet) spells in daily rainfall events. In order to test the performance of the proposed new models against the other nine existing probability models, 54 data sets which had been published by several authors were reanalyzed in this study. In addition, new data sets of daily observations from the six selected rainfall stations in Peninsular Malaysia for the period 1975-2004 were used. In determining the best-fitting distribution to describe the observed distribution of dry (wet) spells, a chi-square goodness-of-fit test was considered. The results revealed that the newly proposed MLGD and MLPD models showed a better fit, as more than half of the data sets were successfully fitted for the distribution of dry and wet spells. However, existing models, such as the truncated negative binomial and the modified LSD, were also among the successful probability models for representing the sequence of dry (wet) days in daily rainfall occurrence.
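
    As a small illustration of the model-ranking step (with hypothetical spell counts, and a plain geometric distribution rather than one of the proposed mixtures), the sketch below runs the same kind of chi-square goodness-of-fit test.

```python
# Minimal sketch (hypothetical counts): chi-square goodness of fit of a
# geometric distribution to observed dry-spell lengths.
import numpy as np
from scipy.stats import geom, chisquare

lengths = np.arange(1, 9)                       # spell length (days)
observed = np.array([58, 30, 17, 10, 6, 4, 3, 2])
n = observed.sum()

p_hat = n / np.sum(lengths * observed)          # MLE of geometric p
expected = geom.pmf(lengths, p_hat) * n
expected *= observed.sum() / expected.sum()     # fold the tail back in

stat, pval = chisquare(observed, expected, ddof=1)  # 1 fitted parameter
print(stat, pval)
```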

  20. The Probability of the Collapse of the Thermohaline Circulation in an Intermediate Complexity Model

    NASA Astrophysics Data System (ADS)

    Challenor, P.; Hankin, R.; Marsh, R.

    2005-12-01

    If the thermohaline circulation were to collapse we could see very rapid climate changes, with North West Europe becoming much cooler and widespread impacts across the globe. The risk of such an event has two aspects: the first is the impact of a collapse in the circulation and the second is the probability that it will happen. In this paper we look at the latter problem. In particular we investigate the probability that the thermohaline circulation will collapse by the end of the century. To calculate the probability of thermohaline collapse we use a Monte Carlo method. We simulate from a climate model with uncertain parameters and estimate the probability from the number of times the model collapses compared to the number of runs. We use an intermediate complexity climate model, C-GOLDSTEIN, which includes a 3-d ocean, an energy balance atmosphere and, in the version we use, a parameterised carbon cycle. Although C-GOLDSTEIN runs quickly for a climate model, it is still too slow to allow the thousands of runs needed for the Monte Carlo calculations. We therefore build an emulator of the model. An emulator is a statistical approximation to the full climate model that gives an estimate of the model output and an uncertainty measure. We use a Gaussian process as our emulator. A limited number of model runs are used to build the emulator, which is then used for the simulations. We produce estimates of the probability of the collapse of the thermohaline circulation corresponding to the indicative SRES emission scenarios: A1, A1FI, A1T, A2, B1 and B2.
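
    The emulator-plus-Monte-Carlo idea can be sketched as follows; the "model" being emulated, the design points, and the collapse threshold are all hypothetical stand-ins for C-GOLDSTEIN and its parameters.

```python
# Minimal sketch: fit a Gaussian-process emulator to a few (toy) model
# runs, then Monte Carlo over uncertain parameters and count how often
# the emulated overturning falls below an assumed collapse threshold.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X_design = rng.uniform(0.0, 1.0, size=(40, 3))        # scaled parameters
def toy_model(x):                                      # stand-in for model runs
    return 20.0 - 15.0 * x[:, 0] + 5.0 * x[:, 1] - 8.0 * x[:, 2]
y_design = toy_model(X_design) + rng.normal(0, 0.5, 40)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_design, y_design)

X_mc = rng.uniform(0.0, 1.0, size=(100_000, 3))        # Monte Carlo draws
overturning = gp.predict(X_mc)
print("P(collapse) ~", np.mean(overturning < 5.0))     # assumed threshold
```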

  1. Questioning the Relevance of Model-Based Probability Statements on Extreme Weather and Future Climate

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2007-12-01

    We question the relevance of climate-model based Bayesian (or other) probability statements for decision support and impact assessment on spatial scales less than continental and temporal averages less than seasonal. Scientific assessment of higher resolution space and time scale information is urgently needed, given the commercial availability of "products" at high spatiotemporal resolution, their provision by nationally funded agencies for use both in industry decision making and governmental policy support, and their presentation to the public as matters of fact. Specifically we seek to establish necessary conditions for probability forecasts (projections conditioned on a model structure and a forcing scenario) to be taken seriously as reflecting the probability of future real-world events. We illustrate how risk management can profitably employ imperfect models of complicated chaotic systems, following NASA's study of near-Earth PHOs (Potentially Hazardous Objects). Our climate models will never be perfect; nevertheless, the space and time scales on which they provide decision-support relevant information are expected to improve with the models themselves. Our aim is to establish a set of baselines of internal consistency; these are merely necessary conditions (not sufficient conditions) that physics based state-of-the-art models are expected to pass if their output is to be judged decision support relevant. Probabilistic Similarity is proposed as one goal which can be obtained even when our models are not empirically adequate. In short, probabilistic similarity requires that, given inputs similar to today's empirical observations and observational uncertainties, we expect future models to produce similar forecast distributions. Expert opinion on the space and time scales on which we might reasonably expect probabilistic similarity may prove of much greater utility than expert elicitation of uncertainty in parameter values in a model that is not empirically

  2. Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models

    ERIC Educational Resources Information Center

    Chun, So Yeon; Shapiro, Alexander

    2009-01-01

    The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…

  3. Coupled escape probability for an asymmetric spherical case: Modeling optically thick comets

    SciTech Connect

    Gersch, Alan M.; A'Hearn, Michael F.

    2014-05-20

    We have adapted Coupled Escape Probability, a new exact method of solving radiative transfer problems, for use in asymmetrical spherical situations. Our model is intended specifically for use in modeling optically thick cometary comae, although not limited to such use. This method enables the accurate modeling of comets' spectra even in the potentially optically thick regions nearest the nucleus, such as those seen in Deep Impact observations of 9P/Tempel 1 and EPOXI observations of 103P/Hartley 2.

  4. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.

  5. Logit-normal mixed model for Indian monsoon precipitation

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-09-01

    Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and therefore, recommend use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
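
    A minimal simulation sketch of a logit-normal mixed model (hypothetical stations, covariate, and coefficients, not the Indian monsoon data) is given below to make the random-intercept structure concrete.

```python
# Minimal sketch: simulate a logit-normal mixed model with a
# station-level random intercept on the probability of a rainfall
# exceedance; all values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_stations, n_days = 30, 200
u = rng.normal(0.0, 0.8, n_stations)            # random intercepts
x = rng.normal(0.0, 1.0, (n_stations, n_days))  # a scaled covariate

beta0, beta1 = -2.0, 0.6                        # fixed effects (assumed)
logit_p = beta0 + beta1 * x + u[:, None]
p = 1.0 / (1.0 + np.exp(-logit_p))
y = rng.binomial(1, p)                          # daily exceedance indicator

print("empirical exceedance rate:", y.mean())
```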

  6. Results from probability-based, simplified, off-shore Louisiana CSEM hydrocarbon reservoir modeling

    NASA Astrophysics Data System (ADS)

    Stalnaker, J. L.; Tinley, M.; Gueho, B.

    2009-12-01

    Perhaps the biggest impediment to the commercial application of controlled-source electromagnetic (CSEM) geophysics to marine hydrocarbon exploration is the inefficiency of modeling and data inversion. If an understanding of the typical (in a statistical sense) geometrical and electrical nature of a reservoir can be attained, then it is possible to derive therefrom a simplified yet accurate model of the electromagnetic interactions that produce a measured marine CSEM signal, leading ultimately to efficient modeling and inversion. We have compiled geometric and resistivity measurements from roughly 100 known, producing off-shore Louisiana Gulf of Mexico reservoirs. Recognizing that most reservoirs could be recreated roughly from a sectioned hemi-ellipsoid, we devised a unified, compact reservoir geometry description. Each reservoir was initially fit to the ellipsoid by eye, though we plan in the future to perform a more rigorous least-squares fit. We created, using kernel density estimation, initial probabilistic descriptions of reservoir parameter distributions, with the understanding that additional information would not fundamentally alter our results, but rather increase accuracy. From the probabilistic description, we designed an approximate model consisting of orthogonally oriented current segments distributed across the ellipsoid--enough to define the shape, yet few enough to be resolved during inversion. The moment and length of the currents are mapped to geometry and resistivity of the ellipsoid. The probability density functions (pdfs) derived from reservoir statistics serve as a workbench. We first use the pdfs in a Monte Carlo simulation designed to assess the detectability of off-shore Louisiana reservoirs using magnitude versus offset (MVO) anomalies. From the pdfs, many reservoir instances are generated (using rejection sampling) and each normalized MVO response is calculated. The response strength is summarized by numerically computing MVO power, and that
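
    As a sketch of the probabilistic-description step (made-up measurements, and scipy's built-in KDE resampling in place of the rejection sampling described above), the example below builds a kernel density estimate of two reservoir parameters and draws reservoir instances from it.

```python
# Minimal sketch (made-up measurements, not the compiled Gulf of Mexico
# data): kernel density estimation of two reservoir parameters and
# Monte Carlo sampling of reservoir instances from the estimated density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
thickness   = rng.lognormal(mean=3.0, sigma=0.4, size=100)   # m
resistivity = rng.lognormal(mean=3.5, sigma=0.6, size=100)   # ohm-m

kde = gaussian_kde(np.vstack([thickness, resistivity]))
samples = kde.resample(10_000)          # 10,000 reservoir instances
print(samples.shape, samples.mean(axis=1))
```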

  7. Active Cigarette Smoking in Cognitively-Normal Elders and Probable Alzheimer's Disease is Associated with Elevated Cerebrospinal Fluid Oxidative Stress Biomarkers.

    PubMed

    Durazzo, Timothy C; Korecka, Magdalena; Trojanowski, John Q; Weiner, Michael W; O' Hara, Ruth; Ashford, John W; Shaw, Leslie M

    2016-07-25

    Neurodegenerative diseases and chronic cigarette smoking are associated with increased cerebral oxidative stress (OxS). Elevated F2-isoprostane levels in biological fluids are a recognized marker of OxS. This study assessed the association of active cigarette smoking with F2-isoprostane concentrations in cognitively-normal elders (CN), and those with mild cognitive impairment (MCI) and probable Alzheimer's disease (AD). Smoking and non-smoking CN (n = 83), MCI (n = 164), and probable AD (n = 101) were compared on cerebrospinal fluid (CSF) iPF2α-III and 8,12-iso-iPF2α-VI F2-isoprostane concentrations. Associations between F2-isoprostane levels and hippocampal volumes were also evaluated. In CN and AD, smokers had higher iPF2α-III concentration; overall, smoking AD showed the highest iPF2α-III concentration across groups. Smoking and non-smoking MCI did not differ on iPF2α-III concentration. No group differences were apparent on 8,12-iso-iPF2α-VI concentration, but across AD, higher 8,12-iso-iPF2α-VI level was related to smaller left and total hippocampal volumes. Results indicate that active cigarette smoking in CN and probable AD is associated with increased central nervous system OxS. Further investigation of factors mediating/moderating the absence of smoking effects on CSF F2-isoprostane levels in MCI is warranted. In AD, increasing magnitude of OxS appeared to be related to smaller hippocampal volume. This study contributes additional novel information to the mounting body of evidence that cigarette smoking is associated with adverse effects on the human central nervous system across the lifespan. PMID:27472882

  8. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    SciTech Connect

    Glosup, J.G.; Axelrod M.C.

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.

  9. A model selection algorithm for a posteriori probability estimation with neural networks.

    PubMed

    Arribas, Juan Ignacio; Cid-Sueiro, Jesús

    2005-07-01

    This paper proposes a novel algorithm to jointly determine the structure and the parameters of a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, so called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP) whose outputs can be understood as probabilities although results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes. PMID:16121722

  10. Application of damping mechanism model and stacking fault probability in Fe-Mn alloy

    SciTech Connect

    Huang, S.K.; Wen, Y.H.; Li, N.; Teng, J.; Ding, S.; Xu, Y.G.

    2008-06-15

    In this paper, the damping mechanism model of Fe-Mn alloy was analyzed using dislocation theory. Moreover, as an important parameter in Fe-Mn based alloys, the effect of stacking fault probability on the damping capacity of Fe-19.35Mn alloy after deep-cooling or tensile deformation was also studied. The damping capacity was measured using a reversal torsion pendulum. The stacking fault probability of γ-austenite and ε-martensite was determined by means of X-ray diffraction (XRD) profile analysis. The microstructure was observed using a scanning electron microscope (SEM). The results indicated that when the strain amplitude increased above a critical value, the damping capacity of the Fe-19.35Mn alloy increased rapidly, which could be explained using the breakaway model of Shockley partial dislocations. Deep-cooling and suitable tensile deformation could improve the damping capacity owing to the increase in the stacking fault probability of the Fe-19.35Mn alloy.

  11. Modelling detection probabilities to evaluate management and control tools for an invasive species

    USGS Publications Warehouse

    Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.

    2010-01-01

    For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (= 0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice that of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By

  12. Modelling the regional variability of the probability of high trihalomethane occurrence in municipal drinking water.

    PubMed

    Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J

    2015-12-01

    The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, seasons, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by seasons, treatment type, precipitation and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs). PMID:26563233

  13. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    USGS Publications Warehouse

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.

  14. Developing a Model and Applications for Probabilities of Student Success: A Case Study of Predictive Analytics

    ERIC Educational Resources Information Center

    Calvert, Carol Elaine

    2014-01-01

    This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…

  15. Blind Students' Learning of Probability through the Use of a Tactile Model

    ERIC Educational Resources Information Center

    Vita, Aida Carvalho; Kataoka, Verônica Yumi

    2014-01-01

    The objective of this paper is to discuss how blind students learn basic concepts of probability using the tactile model proposed by Vita (2012). Among the activities were part of the teaching sequence "Jefferson's Random Walk", in which students built a tree diagram (using plastic trays, foam cards, and toys), and pictograms in 3D…

  16. Logit-normal mixed model for Indian Monsoon rainfall extremes

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-03-01

    Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.

  17. TURBULENCE IN A THREE-DIMENSIONAL DEFLAGRATION MODEL FOR TYPE Ia SUPERNOVAE. II. INTERMITTENCY AND THE DEFLAGRATION-TO-DETONATION TRANSITION PROBABILITY

    SciTech Connect

    Schmidt, W.; Niemeyer, J. C.; Ciaraldi-Schoolmann, F.; Roepke, F. K.; Hillebrandt, W.

    2010-02-20

    The delayed detonation model describes the observational properties of the majority of Type Ia supernovae very well. Using numerical data from a three-dimensional deflagration model for Type Ia supernovae, the intermittency of the turbulent velocity field and its implications for the probability of a deflagration-to-detonation (DDT) transition are investigated. From structure functions of the turbulent velocity fluctuations, we determine intermittency parameters based on the log-normal and the log-Poisson models. The bulk of turbulence in the ash regions appears to be less intermittent than predicted by the standard log-normal model and the She-Leveque model. On the other hand, the analysis of the turbulent velocity fluctuations in the vicinity of the flame front by Roepke suggests a much higher probability of large velocity fluctuations on the grid scale in comparison to the log-normal intermittency model. Following Pan et al., we computed probability density functions for a DDT for the different distributions. The determination of the total number of regions at the flame surface, in which DDTs can be triggered, enables us to estimate the total number of events. Assuming that a DDT can occur in the stirred flame regime, as proposed by Woosley et al., the log-normal model would imply a delayed detonation between 0.7 and 0.8 s after the beginning of the deflagration phase for the multi-spot ignition scenario used in the simulation. However, the probability drops to virtually zero if a DDT is further constrained by the requirement that the turbulent velocity fluctuations reach about 500 km s^-1. Under this condition, delayed detonations are only possible if the distribution of the velocity fluctuations is not log-normal. From our calculations it follows that the distribution obtained by Roepke allows for multiple DDTs around 0.8 s after ignition at a transition density close to 1 × 10^7 g cm^-3.

  18. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    SciTech Connect

    Dong, Jing; Mahmassani, Hani S.

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdown at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.

  19. Internal Energy Exchange and Dissociation Probability in DSMC Molecular Collision Models

    NASA Astrophysics Data System (ADS)

    Chabut, E.

    2008-12-01

    The present work concerns the gas-gas collision models used in DSMC, especially the relaxation rates and the reactivity of diatomic molecules (most of the models can be extended to polyatomic molecules). The Larsen-Borgnakke [1] model is often used in DSMC to describe how energy is redistributed during collisions. The literature provides substantial information on the links between the macroscopic collision number, the fraction of inelastic collisions, and the probability for a molecule to exchange energy in a specific mode during a collision. We therefore present the main relations able to reproduce macroscopic relaxation rates. During collisions, the energy brought by the collision partners can be sufficient to trigger a chemical reaction. The problem is first to determine an energetic condition for a possible reaction (which energy to consider and which threshold to compare it with) and second how to calculate the reaction probabilities. Experimental results that highlight certain phenomena (vibration-dissociation coupling, for example) are often used to build a qualitative basis for the models; quantitatively, probabilities are determined such that they reproduce the macroscopic experimental rates reflected by the modified Arrhenius law. Some of the chemical models used in DSMC are presented, such as the "TCE" [2]-[3], "EAE" [3], "ME" [4] and "VFD" [5] models.

  20. Multistate modeling of habitat dynamics: Factors affecting Florida scrub transition probabilities

    USGS Publications Warehouse

    Breininger, D.R.; Nichols, J.D.; Duncan, B.W.; Stolen, Eric D.; Carter, G.M.; Hunt, D.K.; Drese, J.H.

    2010-01-01

    Many ecosystems are influenced by disturbances that create specific successional states and habitat structures that species need to persist. Estimating transition probabilities between habitat states and modeling the factors that influence such transitions have many applications for investigating and managing disturbance-prone ecosystems. We identify the correspondence between multistate capture-recapture models and Markov models of habitat dynamics. We exploit this correspondence by fitting and comparing competing models of different ecological covariates affecting habitat transition probabilities in Florida scrub and flatwoods, a habitat important to many unique plants and animals. We subdivided a large scrub and flatwoods ecosystem along central Florida's Atlantic coast into 10-ha grid cells, which approximated average territory size of the threatened Florida Scrub-Jay (Aphelocoma coerulescens), a management indicator species. We used 1.0-m resolution aerial imagery for 1994, 1999, and 2004 to classify grid cells into four habitat quality states that were directly related to Florida Scrub-Jay source-sink dynamics and management decision making. Results showed that static site features related to fire propagation (vegetation type, edges) and temporally varying disturbances (fires, mechanical cutting) best explained transition probabilities. Results indicated that much of the scrub and flatwoods ecosystem was resistant to moving from a degraded state to a desired state without mechanical cutting, an expensive restoration tool. We used habitat models parameterized with the estimated transition probabilities to investigate the consequences of alternative management scenarios on future habitat dynamics. We recommend this multistate modeling approach as being broadly applicable for studying ecosystem, land cover, or habitat dynamics. The approach provides maximum-likelihood estimates of transition parameters, including precision measures, and can be used to assess

  1. Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis.

    SciTech Connect

    Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong; Ginzburg, Lev; Berleant, Daniel J.; Ferson, Scott; Hajagos, Janos; Nelsen, Roger B.

    2004-10-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.

  2. The return period analysis of natural disasters with statistical modeling of bivariate joint probability distribution.

    PubMed

    Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng

    2013-01-01

    New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of, and the current shortage of, multivariate analyses of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and a bivariate dust storm definition, the joint probability distribution of severe dust storms was established using observed data on maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. PMID:22616629
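
    As an illustration of joint return periods from a bivariate model (a Gumbel copula with assumed marginals and parameter values, not the fitted model of this study), consider the sketch below.

```python
# Minimal sketch: "OR" and "AND" joint return periods of a severe dust
# storm from a Gumbel copula of wind speed and duration; the marginal
# fits, copula parameter, and interarrival time are hypothetical.
import numpy as np
from scipy.stats import gumbel_r

def gumbel_copula(u, v, theta):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

u = gumbel_r.cdf(24.0, loc=18.0, scale=4.0)     # F(wind speed = 24 m/s)
v = gumbel_r.cdf(6.0,  loc=4.0,  scale=1.5)     # F(duration = 6 h)
C = gumbel_copula(u, v, theta=2.0)

mu = 0.24                                       # mean interarrival time (years)
T_or  = mu / (1.0 - C)                          # either variable exceeds
T_and = mu / (1.0 - u - v + C)                  # both variables exceed
print(T_or, T_and)
```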

  3. Impact of stray charge on interconnect wire via probability model of double-dot system

    NASA Astrophysics Data System (ADS)

    Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang

    2016-02-01

    The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. Compared with previous stray charge analyses utilizing ICHA or full-basis calculations, the probability model substantially reduces the computational effort. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire in which a stray charge will cause the target cell to switch unsuccessfully. The failure is exhibited by two new states dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).

  4. A Probability Model of Decompression Sickness at 4.3 Psia after Exercise Prebreathe

    NASA Technical Reports Server (NTRS)

    Conkin, Johnny; Gernhardt, Michael L.; Powell, Michael R.; Pollock, Neal

    2004-01-01

    Exercise PB can reduce the risk of decompression sickness on ascent to 4.3 psia when performed at the proper intensity and duration. Data are from seven tests. PB times ranged from 90 to 150 min. High intensity, short duration dual-cycle ergometry was done during the PB. This was done alone, or combined with intermittent low intensity exercise or periods of rest for the remaining PB. Nonambulating men and women performed light exercise from a semi-recumbent position at 4.3 psia for four hrs. The Research Model with age tested whether the probability of DCS increases with advancing age. The NASA Model with gender tested the hypothesis that the probability of DCS is higher for females. Accounting for exercise and rest during PB with a variable half-time compartment for computed tissue N2 pressure advances our probability modeling of hypobaric DCS. Both models show that a small increase in exercise intensity during PB reduces the risk of DCS, and a larger increase in exercise intensity dramatically reduces risk. These models support the hypothesis that aerobic fitness is an important consideration for the risk of hypobaric DCS when exercise is performed during the PB.

  5. Modelling convection-enhanced delivery in normal and oedematous brain.

    PubMed

    Haar, P J; Chen, Z-J; Fatouros, P P; Gillies, G T; Corwin, F D; Broaddus, W C

    2014-03-01

    Convection-enhanced delivery (CED) could have clinical applications in the delivery of neuroprotective agents in brain injury states, such as ischaemic stroke. For CED to be safe and effective, a physician must have accurate knowledge of how concentration distributions will be affected by catheter location, flow rate and other similar parameters. In most clinical applications of CED, brain microstructures will be altered by pathological injury processes. Ischaemic stroke and other acute brain injury states are complicated by formation of cytotoxic oedema, in which cellular swelling decreases the fractional volume of the extracellular space (ECS). Such changes would be expected to significantly alter the distribution of neuroprotective agents delivered by CED. Quantitative characterization of these changes will help confirm this prediction and assist in efforts to model the distribution of therapeutic agents. Three-dimensional computational models based on a Nodal Point Integration (NPI) scheme were developed to model infusions in normal brain and brain with cytotoxic oedema. These models were compared to experimental data in which CED was studied in normal brain and in a middle cerebral artery (MCA) occlusion model of cytotoxic oedema. The computational models predicted concentration distributions with reasonable accuracy. PMID:24446800

  6. Syntactic error modeling and scoring normalization in speech recognition

    NASA Technical Reports Server (NTRS)

    Olorenshaw, Lex

    1991-01-01

    The objective was to develop the speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Research was performed in the following areas: (1) syntactic error modeling; (2) score normalization; and (3) phoneme error modeling. The study into the types of errors that a reader makes will provide the basis for creating tests which will approximate the use of the system in the real world. NASA-Johnson will develop this technology into a 'Literacy Tutor' in order to bring innovative concepts to the task of teaching adults to read.

  7. Hitchhikers on trade routes: A phenology model estimates the probabilities of gypsy moth introduction and establishment.

    PubMed

    Gray, David R

    2010-12-01

    As global trade increases so too does the probability of introduction of alien species to new locations. Estimating the probability of an alien species introduction and establishment following introduction is a necessary step in risk estimation (probability of an event times the consequences, in the currency of choice, of the event should it occur); risk estimation is a valuable tool for reducing the risk of biological invasion with limited resources. The Asian gypsy moth, Lymantria dispar (L.), is a pest species whose consequence of introduction and establishment in North America and New Zealand warrants over US$2 million per year in surveillance expenditure. This work describes the development of a two-dimensional phenology model (GLS-2d) that simulates insect development from source to destination and estimates: (1) the probability of introduction from the proportion of the source population that would achieve the next developmental stage at the destination and (2) the probability of establishment from the proportion of the introduced population that survives until a stable life cycle is reached at the destination. The effect of shipping schedule on the probabilities of introduction and establishment was examined by varying the departure date from 1 January to 25 December by weekly increments. The effect of port efficiency was examined by varying the length of time that invasion vectors (shipping containers and ship) were available for infection. The application of GLS-2d is demonstrated using three common marine trade routes (to Auckland, New Zealand, from Kobe, Japan, and to Vancouver, Canada, from Kobe and from Vladivostok, Russia). PMID:21265459

  8. Determining the probability of arsenic in groundwater using a parsimonious model.

    PubMed

    Lee, Jin-Jing; Jang, Cheng-Shin; Liu, Chen-Wuing; Liang, Ching-Ping; Wang, Sheng-Wei

    2009-09-01

    Spatial distributions of groundwater quality are commonly heterogeneous, varying with depths and locations, which is important in assessing health and ecological risks. Owing to time and cost constraints, it is not practical or economical to measure arsenic everywhere. A predictive model is necessary to estimate the distribution of a specific pollutant in groundwater. This study developed a logistic regression (LR) model to predict residential well water quality in the Lanyang plain. Six hydrochemical parameters, pH, NO3- -N, NO2- -N, NH4+ -N, Fe, and Mn, and a regional variable (binary type) were used to evaluate the probability of arsenic concentrations exceeding 10 µg/L in groundwater. The developed parsimonious LR model indicates that four parameters in the Lanyang plain aquifer (pH, NH4+, Fe(aq), and a component to account for regional heterogeneity) can accurately predict the probability of arsenic concentrations ≥10 µg/L in groundwater. These parameters provide an explanation for the release of arsenic by reductive dissolution of As-rich FeOOH in NH4+-containing groundwater. A comparison of LR and indicator kriging (IK) shows similar results in modeling the distributions of arsenic. LR can be applied to assess the probability of groundwater arsenic at sampled sites without arsenic concentration data a priori. However, arsenic sampling is still required in arsenic-assessment stages in other areas, and the need for long-term monitoring and maintenance is not precluded. PMID:19764232
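
    As a rough illustration of the kind of model described above, the following sketch fits a logistic regression for the probability that arsenic exceeds 10 µg/L from a few hydrochemical covariates. The predictor names and the synthetic data are placeholders; they are not the published Lanyang plain coefficients.

      # Hypothetical sketch of a parsimonious logistic-regression exceedance model.
      # Feature names and the synthetic data are illustrative, not the published coefficients.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      # Synthetic predictors: pH, NH4+ (mg/L), dissolved Fe (mg/L), regional indicator (0/1)
      X = np.column_stack([
          rng.normal(7.2, 0.5, n),          # pH
          rng.lognormal(-1.0, 1.0, n),      # NH4+
          rng.lognormal(-0.5, 1.0, n),      # Fe(aq)
          rng.integers(0, 2, n),            # regional heterogeneity indicator
      ])
      # Synthetic outcome: 1 if arsenic >= 10 ug/L (generated from an assumed linear logit)
      logit = -12.0 + 1.2 * X[:, 0] + 0.8 * np.log1p(X[:, 1]) + 0.6 * np.log1p(X[:, 2]) + 1.5 * X[:, 3]
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      model = LogisticRegression(max_iter=1000).fit(X, y)
      p_exceed = model.predict_proba(X[:5])[:, 1]   # P(As >= 10 ug/L) for the first five wells
      print(np.round(p_exceed, 3))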

  9. From default probabilities to credit spreads: credit risk models explain market prices (Keynote Address)

    NASA Astrophysics Data System (ADS)

    Denzler, Stefan M.; Dacorogna, Michel M.; Muller, Ulrich A.; McNeil, Alexander J.

    2005-05-01

    Credit risk models like Moody's KMV are now well established in the market and give bond managers reliable default probabilities for individual firms. Until now it has been hard to relate those probabilities to the actual credit spreads observed on the market for corporate bonds. Inspired by the existence of scaling laws in financial markets that deviate from Gaussian behavior (Dacorogna et al. 2001; DiMatteo et al. 2005), we develop a model that quantitatively links those default probabilities to credit spreads (market prices). The main input quantities to this study are merely industry yield data of different times to maturity and expected default frequencies (EDFs) of Moody's KMV. The empirical results of this paper clearly indicate that the model can be used to calculate approximate credit spreads (market prices) from EDFs, independent of the time to maturity and the industry sector under consideration. Moreover, the model is effective in an out-of-sample setting: it produces consistent results on the European bond market, where data are scarce, and can be adequately used to approximate credit spreads on the corporate level.

  10. Weighted least square estimates of the parameters of a model of survivorship probabilities.

    PubMed

    Mitra, S

    1987-06-01

    "A weighted regression has been fitted to estimate the parameters of a model involving functions of survivorship probability and age. Earlier, the parameters were estimated by the method of ordinary least squares and the results were very encouraging. However, a multiple regression equation passing through the origin has been found appropriate for the present model from statistical consideration. Fortunately, this method, while methodologically more sophisticated, has a slight edge over the former as evidenced by the respective measures of reproducibility in the model and actual life tables selected for this study." PMID:12281212

  11. Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change

    NASA Astrophysics Data System (ADS)

    Field, R.; Constantine, P.; Boslough, M.

    2011-12-01

    We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. An SSM differs from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, to assess these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for years 1990-1999 based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and made various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model. We

  12. How might Model-based Probabilities Extracted from Imperfect Models Guide Rational Decisions: The Case for non-probabilistic odds

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.

    2010-05-01

    This contribution concerns "deep" or "second-order" uncertainty, such as the uncertainty in our probability forecasts themselves. It asks the question: "Is it rational to take (or offer) bets using model-based probabilities as if they were objective probabilities?" If not, what alternative approaches for determining odds, perhaps non-probabilistic odds, might prove useful in practice, given that we know our models are imperfect? We consider the case where the aim is to provide sustainable odds: not to produce a profit but merely to rationally expect to break even in the long run. In other words, to run a quantified risk of ruin that is relatively small. Thus the cooperative insurance schemes of coastal villages provide a more appropriate parallel than a casino. A "better" probability forecast would lead to lower premiums charged and less volatile fluctuations in the cash reserves of the village. Note that the Bayesian paradigm does not constrain one to interpret model distributions as subjective probabilities, unless one believes the model to be empirically adequate for the task at hand. In geophysics, this is rarely the case. When a probability forecast is interpreted as the objective probability of an event, the odds on that event can be easily computed as one divided by the probability of the event, and one need not favour taking either side of the wager. (Here we are using "odds-for" not "odds-to", the difference being whether or not the stake is returned; odds of one to one are equivalent to odds of two for one.) The critical question is how to compute sustainable odds based on information from imperfect models. We suggest that this breaks the symmetry between the odds-on an event and the odds-against it. While a probability distribution can always be translated into odds, interpreting the odds on a set of events might result in "implied-probabilities" that sum to more than one. And/or the set of odds may be incomplete, not covering all events. We ask

  13. A Bayesian analysis of two probability models describing thunderstorm activity at Cape Kennedy, Florida

    NASA Technical Reports Server (NTRS)

    Williford, W. O.; Hsieh, P.; Carter, M. C.

    1974-01-01

    A Bayesian analysis of the two discrete probability models, the negative binomial and the modified negative binomial distributions, which have been used to describe thunderstorm activity at Cape Kennedy, Florida, is presented. The Bayesian approach with beta prior distributions is compared to the classical approach which uses a moment method of estimation or a maximum-likelihood method. The accuracy and simplicity of the Bayesian method are demonstrated.

  14. Experimental and numerical models of basement-detached normal faults

    SciTech Connect

    Islam, Q.T.; Lapointe, P.R. ); Withjack, M.O. )

    1991-03-01

    The ability to infer more accurately the type, timing, and location of folds and faults that develop during the evolution of large-scale geologic structures can help explorationists to interpret subsurface structures, generate new prospects, and better assess their risk factors. One type of structural setting that is of importance in many exploration plays is that of the basement-detached normal fault. Key questions regarding such structures are (1) what structures form, (2) where do the structures form, (3) when do the structures form, and (4) why do the structures form? Clay and finite element models were used to examine the influence of fault shape on the development of folds and faults in the hanging wall of basement-detached normal faults. The use of two independent methods helps to overcome each method's inherent limitations, providing additional corroboration for conclusions drawn from the modeling. Three fault geometries were modeled: a fault plane dipping uniformly at 45°; a fault plane that steepens from 30° to 45°; and a fault plane that shallows with depth from 45° to 30°. Results from both modeling approaches show that (1) antithetic faults form at fault bends where fault dip increases, (2) faults become progressively younger towards the footwall, (3) the zone(s) of high stress and faulting are stationary relative to the footwall, (4) anticlines with no closure form where faults shallow with depth, and (5) closed anticlines form only above the point where faults steepen.

  15. How to model a negligible probability under the WTO sanitary and phytosanitary agreement?

    PubMed

    Powell, Mark R

    2013-06-01

    Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. PMID:22985254
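
    The two expected values quoted above follow directly from the standard formulas for the uniform and triangular distributions; the short check below reproduces them analytically and by Monte Carlo.

      # Quick check of the distribution means discussed by the Panel.
      # Uniform(0, 1e-6) has mean 5e-7; Triangular(min=0, mode=0, max=1e-6) has mean ~3.33e-7.
      import numpy as np

      a, c = 0.0, 1e-6
      print((a + c) / 2)                 # uniform mean: 5e-07
      print((a + a + c) / 3)             # triangular mean (min + mode + max) / 3: ~3.33e-07

      # Monte Carlo confirmation
      rng = np.random.default_rng(1)
      print(rng.uniform(a, c, 1_000_000).mean())
      print(rng.triangular(a, a, c, 1_000_000).mean())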

  16. Meta-analysis of two-arm studies: Modeling the intervention effect from survival probabilities.

    PubMed

    Combescure, C; Courvoisier, D S; Haller, G; Perneger, T V

    2016-04-01

    Pooling the hazard ratios is not always feasible in meta-analyses of two-arm survival studies, because the measure of the intervention effect is not systematically reported. An alternative approach proposed by Moodie et al. is to use the survival probabilities of the included studies, all collected at a single point in time: the intervention effect is then summarised as the pooled ratio of the logarithm of survival probabilities (which is an estimator of the hazard ratios when hazards are proportional). In this article, we propose a generalization of this method. By using survival probabilities at several points in time, this generalization allows a flexible modeling of the intervention over time. The method is applicable to partially proportional hazards models, with the advantage of not requiring the specification of the baseline survival. As in Moodie et al.'s method, the study-level factors modifying the survival functions can be ignored as long as they do not modify the intervention effect. The procedures of estimation are presented for fixed and random effects models. Two illustrative examples are presented. PMID:23267027
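
    A minimal sketch of the underlying estimator, assuming proportional hazards and using made-up study-level survival probabilities: each study's hazard ratio is the ratio of log survival probabilities at a common time point, and the study estimates are then pooled on the log scale.

      # Sketch of the log-survival-probability ratio as a hazard-ratio estimator.
      # Under proportional hazards, S_trt(t) = S_ctrl(t)**HR, so HR = log S_trt(t) / log S_ctrl(t).
      # The survival probabilities below are made-up study values for illustration.
      import numpy as np

      s_ctrl = np.array([0.80, 0.75, 0.70])   # control-arm survival at a fixed time, one per study
      s_trt  = np.array([0.88, 0.84, 0.78])   # intervention-arm survival at the same time
      n      = np.array([120, 200, 90])       # study sizes used as crude weights

      hr_per_study = np.log(s_trt) / np.log(s_ctrl)
      pooled_log_hr = np.average(np.log(hr_per_study), weights=n)   # crude fixed-effect-style pooling
      print(hr_per_study, np.exp(pooled_log_hr))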

  17. Spatially-constrained probability distribution model of incoherent motion (SPIM) for abdominal diffusion-weighted MRI.

    PubMed

    Kurugol, Sila; Freiman, Moti; Afacan, Onur; Perez-Rossello, Jeannette M; Callahan, Michael J; Warfield, Simon K

    2016-08-01

    Quantitative diffusion-weighted MR imaging (DW-MRI) of the body enables characterization of the tissue microenvironment by measuring variations in the mobility of water molecules. The diffusion signal decay model parameters are increasingly used to evaluate various diseases of abdominal organs such as the liver and spleen. However, previous signal decay models (i.e., mono-exponential, bi-exponential intra-voxel incoherent motion (IVIM) and stretched exponential models) only provide insight into the average of the distribution of the signal decay rather than explicitly describe the entire range of diffusion scales. In this work, we propose a probability distribution model of incoherent motion that uses a mixture of Gamma distributions to fully characterize the multi-scale nature of diffusion within a voxel. Further, we improve the robustness of the distribution parameter estimates by integrating a spatial homogeneity prior into the probability distribution model of incoherent motion (SPIM) and by using the fusion bootstrap moves (FBM) solver to estimate the model parameters. We evaluated the improvement in quantitative DW-MRI analysis achieved with the SPIM model in terms of accuracy, precision and reproducibility of parameter estimation in both simulated data and in 68 abdominal in-vivo DW-MRIs. Our results show that the SPIM model not only substantially reduced parameter estimation errors by up to 26%; it also significantly improved the robustness of the parameter estimates (paired Student's t-test, p < 0.0001) by reducing the coefficient of variation (CV) of estimated parameters compared to those produced by previous models. In addition, the SPIM model improves the reproducibility of the parameter estimates for both intra-session (up to 47%) and inter-session (up to 30%) estimates compared to those generated by previous models. Thus, the SPIM model has the potential to improve the accuracy, precision and robustness of quantitative abdominal DW-MRI analysis for clinical applications. PMID
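
    The mixture-of-Gamma idea can be illustrated with the closed-form Laplace transform of the Gamma distribution: if the diffusion rate in a voxel follows a Gamma mixture, the expected signal at b-value b is a weighted sum of (1 + b*theta)**(-k) terms. The parameters below are arbitrary illustrations, not fitted SPIM values.

      # Sketch of the signal decay implied by a Gamma-mixture distribution of diffusion rates.
      # If D ~ sum_i w_i * Gamma(k_i, theta_i), then E[exp(-b*D)] = sum_i w_i * (1 + b*theta_i)**(-k_i).
      import numpy as np

      b = np.linspace(0, 800, 9)                      # b-values in s/mm^2
      w     = np.array([0.7, 0.3])                    # mixture weights
      k     = np.array([2.0, 1.5])                    # Gamma shape parameters
      theta = np.array([0.5e-3, 5e-3])                # Gamma scale parameters (mm^2/s)

      signal = sum(wi * (1.0 + b * ti) ** (-ki) for wi, ki, ti in zip(w, k, theta))
      print(np.round(signal, 3))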

  18. Normalized Texture Motifs and Their Application to Statistical Object Modeling

    SciTech Connect

    Newsam, S D

    2004-03-09

    A fundamental challenge in applying texture features to statistical object modeling is recognizing differently oriented spatial patterns. Rows of moored boats in remotely sensed images of harbors should be consistently labeled regardless of the orientation of the harbors, or of the boats within the harbors. This is not straightforward to do, however, when using anisotropic texture features to characterize the spatial patterns. Here we propose an elegant solution, termed normalized texture motifs, that uses a parametric statistical model to characterize the patterns regardless of their orientation. The models are learned in an unsupervised fashion from arbitrarily oriented training samples. The proposed approach is general enough to be used with a large category of orientation-selective texture features.

  19. Physical models for the normal YORP and diurnal Yarkovsky effects

    NASA Astrophysics Data System (ADS)

    Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.

    2016-06-01

    We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.

  20. Neurophysiological model of the normal and abnormal human pupil

    NASA Technical Reports Server (NTRS)

    Krenz, W.; Robin, M.; Barez, S.; Stark, L.

    1985-01-01

    Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select the pupil condition (normal or an abnormality), the specific site along the neurological pathway (retina, hypothalamus, etc.), the drug class input (barbiturate, narcotic, etc.), the stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.

  1. Hierarchical mixture models for longitudinal immunologic data with heterogeneity, non-normality, and missingness.

    PubMed

    Huang, Yangxin; Chen, Jiaqing; Yin, Ping

    2014-07-17

    It is common practice to analyze the longitudinal data that frequently arise in medical studies using various mixed-effects models in the literature. However, the following issues may stand out in longitudinal data analysis: (i) in clinical practice, the profile of each subject's response from a longitudinal study may follow a "broken stick"-like trajectory, indicating multiple phases of increase, decline and/or stability in response. Such multiple phases (with changepoints) may be an important indicator to help quantify treatment effects and improve the management of patient care. Estimating changepoints within the various mixed-effects models is a challenge due to the complicated structure of the model formulations; (ii) an assumption of a homogeneous population may unrealistically obscure important features of between-subject and within-subject variations; (iii) a normality assumption for model errors may not always give robust and reliable results, in particular if the data exhibit non-normality; and (iv) the response may be missing and the missingness may be non-ignorable. In the literature, there has been considerable interest in accommodating heterogeneity, non-normality or missingness in such models. However, there has been relatively little work concerning all of these features simultaneously. There is a need to fill this gap, as longitudinal data often have these characteristics. In this article, our objective is to study the simultaneous impact of these data features by developing a Bayesian mixture modeling approach based on Finite Mixture of Changepoint (piecewise) Mixed-Effects (FMCME) models with skew distributions, allowing estimates of both model parameters and class membership probabilities at the population and individual levels. Simulation studies are conducted to assess the performance of the proposed method, and an AIDS clinical data example is analyzed to demonstrate the proposed methodologies and to compare modeling results of potential mixture models

  2. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    PubMed

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152
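
    For reference, a minimal sketch of the logistic-regression baseline the ensemble learners are compared against, computing stabilized inverse probability weights on synthetic data; the variable names and data-generating model are hypothetical.

      # Minimal sketch of stabilized inverse probability weights estimated with logistic regression,
      # the baseline against which the ensemble learners are compared. Data are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 2000
      L = rng.normal(size=(n, 3))                                       # measured covariates
      p_true = 1 / (1 + np.exp(-(0.3 * L[:, 0] - 0.5 * L[:, 1])))
      A = rng.binomial(1, p_true)                                       # treatment indicator

      denom = LogisticRegression().fit(L, A).predict_proba(L)[:, 1]     # P(A=1 | L)
      numer = A.mean()                                                  # P(A=1), the stabilizing numerator
      sw = np.where(A == 1, numer / denom, (1 - numer) / (1 - denom))   # stabilized weights
      print(sw.mean(), sw.min(), sw.max())                              # mean should be close to 1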

  3. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets

    PubMed Central

    Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.

    2014-01-01

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V -fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152

  4. Quantifying predictive uncertainty of streamflow forecasts based on a Bayesian joint probability model

    NASA Astrophysics Data System (ADS)

    Zhao, Tongtiegang; Wang, Q. J.; Bennett, James C.; Robertson, David E.; Shao, Quanxi; Zhao, Jianshi

    2015-09-01

    Uncertainty is inherent in streamflow forecasts and is an important determinant of the utility of forecasts for water resources management. However, predictions by deterministic models provide only single values without uncertainty attached. This study presents a method for using a Bayesian joint probability (BJP) model to post-process deterministic streamflow forecasts by quantifying predictive uncertainty. The BJP model comprises a log-sinh transformation that normalises hydrological data, and a bi-variate Gaussian distribution that characterises the dependence relationship. The parameters of the transformation and the distribution are estimated through Bayesian inference with a Markov chain Monte Carlo (MCMC) algorithm. The BJP model produces, from a raw deterministic forecast, an ensemble of values to represent forecast uncertainty. The model is applied to raw deterministic forecasts of inflows to the Three Gorges Reservoir in China as a case study. The heteroscedasticity and non-Gaussianity of forecast uncertainty are effectively addressed. The ensemble spread accounts for the forecast uncertainty and leads to considerable improvement in terms of the continuous ranked probability score. The forecasts become less accurate as lead time increases, and the ensemble spread provides reliable information on the forecast uncertainty. We conclude that the BJP model is a useful tool to quantify predictive uncertainty in post-processing deterministic streamflow forecasts.
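
    A small sketch of the log-sinh transformation used by the BJP model to normalise skewed flows, z = (1/b) * log(sinh(a + b*y)), with illustrative (not fitted) parameters a and b and a round-trip check of the inverse.

      # Sketch of the log-sinh transformation used to normalise skewed streamflow data.
      # The parameters a and b are hypothetical, not values estimated by the BJP model.
      import numpy as np

      def log_sinh(y, a, b):
          return np.log(np.sinh(a + b * y)) / b

      def inv_log_sinh(z, a, b):
          return (np.arcsinh(np.exp(b * z)) - a) / b

      y = np.array([5.0, 50.0, 500.0, 5000.0])    # skewed flows (arbitrary units)
      a, b = 0.01, 0.001                          # hypothetical transformation parameters
      z = log_sinh(y, a, b)
      print(np.round(z, 3), np.round(inv_log_sinh(z, a, b), 3))   # round-trip check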

  5. Normality Index of Ventricular Contraction Based on a Statistical Model from FADS

    PubMed Central

    Jiménez-Ángeles, Luis; Valdés-Cristerna, Raquel; Vallejo, Enrique; Bialostozky, David; Medina-Bañuelos, Verónica

    2013-01-01

    Radionuclide-based imaging is an alternative to evaluate ventricular function and synchrony and may be used as a tool for the identification of patients that could benefit from cardiac resynchronization therapy (CRT). In a previous work, we used Factor Analysis of Dynamic Structures (FADS) to analyze the contribution and spatial distribution of the 3 most significant factors (3-MSF) present in a dynamic series of equilibrium radionuclide angiography images. In this work, a probability density function model of the 3-MSF extracted from FADS for a control group is presented; an index based on the likelihood between the control group's contraction model and a sample of normal subjects is also proposed. This normality index was compared with those computed for two cardiopathic populations satisfying the clinical criteria to be considered as candidates for CRT. The proposed normality index provides a measure, consistent with the phase analysis currently used in the clinical environment, that is sensitive enough to show contraction differences between normal and abnormal groups, which suggests that it can be related to the degree of severity of ventricular contraction dyssynchrony, and therefore shows promise as a follow-up procedure for patients under CRT. PMID:23634177

  6. Mathematical modeling of normal pharyngeal bolus transport: a preliminary study.

    PubMed

    Chang, M W; Rosendall, B; Finlayson, B A

    1998-07-01

    Dysphagia (difficulty in swallowing) is a common clinical symptom associated with many diseases, such as stroke, multiple sclerosis, neuromuscular diseases, and cancer. Its complications include choking, aspiration, malnutrition, cachexia, and dehydration. The goal in dysphagia management is to provide adequate nutrition and hydration while minimizing the risk of choking and aspiration. It is important to advance the individual toward oral feeding in a timely manner to enhance the recovery of swallowing function and preserve the quality of life. Current clinical assessments of dysphagia are limited in providing adequate guidelines for oral feeding. Mathematical modeling of the fluid dynamics of pharyngeal bolus transport provides a unique opportunity for studying the physiology and pathophysiology of swallowing. Finite element analysis (FEA) is a special case of computational fluid dynamics (CFD). In CFD, the flow of a fluid in a space is modeled by covering the space with a grid and predicting how the fluid moves from grid point to grid point. FEA is capable of solving problems with complex geometries and free surfaces. A preliminary pharyngeal model has been constructed using FEA. This model incorporates literature-reported, normal, anatomical data with time-dependent pharyngeal/upper esophageal sphincter (UES) wall motion obtained from videofluorography (VFG). This time-dependent wall motion can be implemented as a moving boundary condition in the model. Clinical kinematic data can be digitized from VFG studies to construct and test the mathematical model. The preliminary model demonstrates the feasibility of modeling pharyngeal bolus transport, which, to our knowledge, has not been attempted before. This model also addresses the need and the potential for CFD in understanding the physiology and pathophysiology of the pharyngeal phase of swallowing. Improvements of the model are underway. Combining the model with individualized clinical data should potentially

  7. A cellular automata model of traffic flow with variable probability of randomization

    NASA Astrophysics Data System (ADS)

    Zheng, Wei-Fan; Zhang, Ji-Ye

    2015-05-01

    Research on the stochastic behavior of traffic flow is important to understand the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of vehicles before him, and his decision-making process is related to the interactional potential. Compared with the traditional cellular automata model, the modeling is more suitable for the driver’s random decision-making process based on the vehicle and traffic situations in front of him in actual traffic. From the improved model, the fundamental diagram (flow-density relationship) is obtained, and the detailed high-density traffic phenomenon is reproduced through numerical simulation. Project supported by the National Natural Science Foundation of China (Grant Nos. 11172247, 61273021, 61373009, and 61100118).
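
    A hedged sketch of the idea in a Nagel-Schreckenberg-style cellular automaton: the randomization probability is made to vary per vehicle with the gap to the car ahead, as a crude stand-in for the interaction potential; the functional form used in the paper differs.

      # Sketch of a Nagel-Schreckenberg-type cellular automaton in which the randomization
      # probability varies per vehicle. Here it is a simple decreasing function of the gap
      # ahead, as a stand-in for the interaction potential; the paper's exact form differs.
      import numpy as np

      rng = np.random.default_rng(3)
      L_cells, n_cars, v_max, steps = 200, 40, 5, 100
      pos = np.sort(rng.choice(L_cells, n_cars, replace=False))
      vel = rng.integers(0, v_max + 1, n_cars)

      for _ in range(steps):
          gaps = (np.roll(pos, -1) - pos - 1) % L_cells          # empty cells to the car ahead
          vel = np.minimum(vel + 1, v_max)                       # acceleration
          vel = np.minimum(vel, gaps)                            # deceleration to avoid collision
          p_rand = 0.1 + 0.4 * np.exp(-gaps / 3.0)               # variable randomization probability
          vel = np.where((rng.random(n_cars) < p_rand) & (vel > 0), vel - 1, vel)
          pos = (pos + vel) % L_cells                            # parallel position update

      print("flow (vehicles per cell per step):", vel.mean() * n_cars / L_cells)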

  8. A spatial model of bird abundance as adjusted for detection probability

    USGS Publications Warehouse

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.

  9. Modeling avian detection probabilities as a function of habitat using double-observer point count data

    USGS Publications Warehouse

    Heglund, P.J.; Nichols, J.D.; Hines, J.E.; Sauer, J.; Fallon, J.; Fallon, F.

    2001-01-01

    Point counts are a controversial sampling method for bird populations because the counts are not censuses, and the proportion of birds missed during counting generally is not estimated. We applied a double-observer approach to estimate detection rates of birds from point counts in Maryland, USA, and test whether detection rates differed between point counts conducted in field habitats as opposed to wooded habitats. We conducted 2 analyses. The first analysis was based on 4 clusters of counts (routes) surveyed by a single pair of observers. A series of models was developed with differing assumptions about sources of variation in detection probabilities and fit using program SURVIV. The most appropriate model was selected using Akaike's Information Criterion. The second analysis was based on 13 routes (7 woods and 6 field routes) surveyed by various observers in which average detection rates were estimated by route and compared using a t-test. In both analyses, little evidence existed for variation in detection probabilities in relation to habitat. Double-observer methods provide a reasonable means of estimating detection probabilities and testing critical assumptions needed for analysis of point counts.

  10. Corrections to vibrational transition probabilities calculated from a three-dimensional model.

    NASA Technical Reports Server (NTRS)

    Stallcop, J. R.

    1972-01-01

    Corrections to the collision-induced vibration transition probability calculated by Hansen and Pearson from a three-dimensional semiclassical model are examined. These corrections come from the retention of higher order terms in the expansion of the interaction potential and the use of the actual value of the deflection angle in the calculation of the transition probability. It is found that the contribution to the transition cross section from previously neglected potential terms can be significant for short range potentials and for the large relative collision velocities encountered at high temperatures. The correction to the transition cross section obtained from the use of actual deflection angles will not be appreciable unless the change in the rotational quantum number is large.

  11. Protein single-model quality assessment by feature-based probability density functions.

    PubMed

    Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method-Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob. PMID:27041353

  12. Protein single-model quality assessment by feature-based probability density functions

    PubMed Central

    Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method–Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob. PMID:27041353

  13. Transient Response of Seismicity and Earthquake Probabilities to Stress Transfer in a Brownian Earthquake Model

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.; Matthews, M. V.; Simpson, R. W.

    2001-12-01

    A statistical mechanical description of elastic rebound is used to study earthquake interaction and stress transfer effects in a point process model of earthquakes. The model is a Brownian Relaxation Oscillator (BRO) in which a random walk (standard Brownian motion) is added to a steady tectonic loading to produce a stochastic load state process. Rupture occurs in this model when the load state reaches a critical value. The load state is a random variable and may be described at any point in time by its probability density. Load state evolves toward the failure threshold due to tectonic loading (drift), and diffuses due to Brownian motion (noise) according to a diffusion equation. The Brownian perturbation process formally represents the sum total of all factors, aside from tectonic loading, that govern rupture. Physically, these factors may include effects of earthquakes external to the source, aseismic loading, interaction effects within the source itself, healing, pore pressure evolution, etc. After a sufficiently long time, load state always evolves to a steady state probability density that is independent of the initial condition and completely described by the drift rate and noise scale. Earthquake interaction and stress transfer effects are modeled by an instantaneous change in the load state. A negative step reduces the probability of failure, while a positive step may either immediately trigger rupture or increase the failure probability (hazard). When the load state is far from failure, the effects are well-approximated by "clock advances" that shift the unperturbed hazard down or up, as appropriate for the sign of the step. However, when the load state is advanced in the earthquake cycle, the response is a sharp, temporally localized decrease or increase in hazard. Recovery of the hazard is characteristically "Omori like" (~1/t), which can be understood in terms of equilibrium thermodynamical considerations since state evolution is diffusion with
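
    A minimal simulation sketch of a Brownian relaxation oscillator under assumed parameter values: the load state drifts toward a failure threshold while diffusing, and an instantaneous stress step shifts the recurrence-time statistics.

      # Sketch of a Brownian relaxation oscillator: load state drifts toward a failure threshold
      # and diffuses with Brownian noise; rupture resets the state. Parameter values are arbitrary.
      import numpy as np

      rng = np.random.default_rng(4)
      drift, sigma, threshold, dt = 1.0, 0.5, 1.0, 1e-3

      def recurrence_time(step=0.0, step_time=0.2):
          """Time to failure, with an optional instantaneous stress step applied at step_time."""
          x, t = 0.0, 0.0
          while x < threshold:
              x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
              t += dt
              if abs(t - step_time) < dt / 2:      # apply the stress-transfer step once
                  x += step
          return t

      times_no_step = [recurrence_time(0.0) for _ in range(500)]
      times_pos_step = [recurrence_time(+0.3) for _ in range(500)]
      print(np.mean(times_no_step), np.mean(times_pos_step))   # a positive step shortens the mean cycle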

  14. A FRAX Experience in Korea: Fracture Risk Probabilities with a Country-specific Versus a Surrogate Model

    PubMed Central

    Min, Yong-Ki; Lee, Dong-Yun; Park, Youn-Soo; Moon, Young-Wan; Lim, Seung-Jae; Lee, Young-Kyun; Choi, DooSeok

    2015-01-01

    Background: Recently, a Korean fracture-risk assessment tool (FRAX) model has become available, but large prospective cohort studies, which are needed to validate the model, are still lacking, and there has been little effort to evaluate its usefulness. This study evaluated the clinical usefulness in Korea of the FRAX model developed by the World Health Organization. Methods: In 405 postmenopausal women and 139 men with a proximal femoral fracture, 10-year predicted fracture probabilities calculated by the Korean FRAX model (a country-specific model) were compared with the probabilities calculated with a FRAX model for Japan, which has a similar ethnic background (surrogate model). Results: The 10-year probabilities of major osteoporotic and hip fractures calculated by the Korean model were significantly lower than those calculated by the Japanese model in women and men. The fracture probabilities calculated by each model increased significantly with age in both sexes. In patients aged 70 or older, however, there was a significant difference between the two models. In addition, the Korean model led to lower probabilities for major osteoporotic fracture and hip fracture in women when BMD was excluded from the model than when it was included. Conclusions: The 10-year fracture probabilities calculated with FRAX models might differ between country-specific and surrogate models, and caution is needed when applying a surrogate model to a new population. A large prospective study is warranted to validate the country-specific Korean model in the general population. PMID:26389086

  15. A multiscale finite element model validation method of composite cable-stayed bridge based on Probability Box theory

    NASA Astrophysics Data System (ADS)

    Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan

    2016-05-01

    Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, which include associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating method, a novel approach to obtain coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which can provide lower and upper bounds for the purpose of quantifying and transmitting the uncertainty of structural parameters. The structural health monitoring data of Guanhe Bridge, a composite cable-stayed bridge with a large span, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy: the overlap ratio index of each modal frequency is over 89%, the average absolute values of the relative errors are small, and the CDF of the normal distribution coincides well with the measured frequencies of Guanhe Bridge. The validated multiscale FE model may be further used in structural damage prognosis and safety prognosis.

  16. A formalism to generate probability distributions for performance-assessment modeling

    SciTech Connect

    Kaplan, P.G.

    1990-12-31

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs.
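
    A discrete illustration of the maximum-entropy idea, with an invented grid and a single mean constraint: among all distributions satisfying the constraint, pick the one that maximizes Shannon's entropy.

      # Discrete illustration of the maximum-entropy formalism: among all distributions on a grid
      # that satisfy a known constraint (here, a fixed mean), pick the one maximizing Shannon entropy.
      # The grid, bounds, and mean constraint are invented for illustration.
      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(0.0, 1.0, 21)          # support of the parameter (e.g. a scaled travel time)
      target_mean = 0.3                      # the single known constraint

      def neg_entropy(p):
          p = np.clip(p, 1e-12, None)
          return np.sum(p * np.log(p))

      constraints = [
          {"type": "eq", "fun": lambda p: p.sum() - 1.0},
          {"type": "eq", "fun": lambda p: p @ x - target_mean},
      ]
      p0 = np.full_like(x, 1.0 / x.size)
      res = minimize(neg_entropy, p0, bounds=[(0, 1)] * x.size,
                     constraints=constraints, method="SLSQP")
      print(np.round(res.x, 4))              # approximates the exponential-family (max-entropy) shape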

  17. Modeling the probability of arsenic in groundwater in New England as a tool for exposure assessment

    USGS Publications Warehouse

    Ayotte, J.D.; Nolan, B.T.; Nuckols, J.R.; Cantor, K.P.; Robinson, G.R., Jr.; Baris, D.; Hayes, L.; Karagas, M.; Bress, W.; Silverman, D.T.; Lubin, J.H.

    2006-01-01

    We developed a process-based model to predict the probability of arsenic exceeding 5 µg/L in drinking water wells in New England bedrock aquifers. The model is being used for exposure assessment in an epidemiologic study of bladder cancer. One important study hypothesis that may explain increased bladder cancer risk is elevated concentrations of inorganic arsenic in drinking water. In eastern New England, 20-30% of private wells exceed the arsenic drinking water standard of 10 micrograms per liter. Our predictive model significantly improves the understanding of factors associated with arsenic contamination in New England. Specific rock types, high arsenic concentrations in stream sediments, geochemical factors related to areas of Pleistocene marine inundation and proximity to intrusive granitic plutons, and hydrologic and landscape variables relating to groundwater residence time increase the probability of arsenic occurrence in groundwater. Previous studies suggest that arsenic in bedrock groundwater may be partly from past arsenical pesticide use. Variables representing historic agricultural inputs do not improve the model, indicating that this source does not significantly contribute to current arsenic concentrations. Due to the complexity of the fractured bedrock aquifers in the region, well depth and related variables also are not significant predictors. © 2006 American Chemical Society.

  18. Probability-based damage detection using model updating with efficient uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Xu, Yalan; Qian, Yu; Chen, Jianjun; Song, Gangbing

    2015-08-01

    Model updating method has received increasing attention in damage detection of structures based on measured modal parameters. In this article, a probability-based damage detection procedure is presented, in which the random factor method for non-homogeneous random field is developed and used as the forward propagation to analytically evaluate covariance matrices in each iteration step of stochastic model updating. An improved optimization algorithm is introduced to guarantee the convergence and reduce the computational effort, in which the design variables are restricted in search region by region truncation of each iteration step. The developed algorithm is illustrated by a simulated 25-bar planar truss structure and the results have been compared and verified with those obtained from Monte Carlo simulation. In order to assess the influences of uncertainty sources on the results of model updating and damage detection of structures, a comparative study is also given under different cases of uncertainties, that is, structural uncertainty only, measurement uncertainty only and combination of the two. The simulation results show the proposed method can perform well in stochastic model updating and probability-based damage detection of structures with less computational effort.

  19. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    NASA Astrophysics Data System (ADS)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF by maximum likelihood (a computational problem), but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible the other way around. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
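
    A sketch of the same fitting-and-testing workflow in Python rather than R, using scipy's norminvgauss distribution on simulated data in place of the FX closing prices.

      # Sketch of fitting a normal inverse Gaussian (NIG) distribution and checking the fit with a
      # Kolmogorov-Smirnov test. The data here are simulated, not foreign exchange quotes.
      from scipy import stats

      prices = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=100.0, scale=5.0,
                                      size=1000, random_state=5)   # stand-in for closing prices

      a, b, loc, scale = stats.norminvgauss.fit(prices)             # maximum-likelihood fit
      ks = stats.kstest(prices, "norminvgauss", args=(a, b, loc, scale))
      print((a, b, loc, scale), ks.pvalue)                          # a large p-value suggests an adequate fit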

  20. Empirical probability model of cold plasma environment in the Jovian magnetosphere

    NASA Astrophysics Data System (ADS)

    Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete

    2015-04-01

    We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although there exist many sophisticated radiation models treating energetic plasma (e.g. JOSE, GIRE, or Salammbo), only a limited number of simple models have been utilized for the cold plasma environment. By extending the existing cold plasma models toward the probability domain, we can predict the extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced in the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) or Bagenal and Delamere 2011 (BD11). These models are scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model toward the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the specific percentile (e.g. 0.5 for the mean model). All the parameters in the model are described in mathematical forms; therefore the needed computational resources are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission; namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions toward the latitudinal direction are also included in the model to support the high latitude orbit of the JUICE spacecraft.

  1. Modelling the probability of ionospheric irregularity occurrence over African low latitude region

    NASA Astrophysics Data System (ADS)

    Mungufeni, Patrick; Jurua, Edward; Bosco Habarulema, John; Anguma Katrini, Simon

    2015-06-01

    This study presents models of the geomagnetically quiet time probability of occurrence of ionospheric irregularities over the African low latitude region. GNSS-derived ionospheric total electron content data from Mbarara, Uganda (0.60°S, 30.74°E, geographic, 10.22°S, magnetic) and Libreville, Gabon (0.35°N, 9.68°E, geographic, 8.05°S, magnetic) during the period 2001-2012 were used. First, we established the rate of change of total electron content index (ROTI) value associated with background ionospheric irregularity over the region. This was done by analysing GNSS carrier-phases at the L-band frequencies L1 and L2 with the aim of identifying cycle slip events associated with ionospheric irregularities. We identified at both stations a total of 699 cycle slip events. The corresponding median ROTI value at the epochs of the cycle slip events was 0.54 TECU/min. The probability of occurrence of ionospheric irregularities associated with ROTI ≥ 0.5 TECU/min was then modelled by fitting cubic B-splines to the data. The aspects the model captured included the diurnal, seasonal, and solar flux dependence patterns of the probability of occurrence of ionospheric irregularities. The model developed over Mbarara was validated with data over Mt. Baker, Uganda (0.35°N, 29.90°E, geographic, 9.25°S, magnetic), Kigali, Rwanda (1.94°S, 30.09°E, geographic, 11.62°S, magnetic), and Kampala, Uganda (0.34°N, 32.60°E, geographic, 9.29°S, magnetic). For the periods validated at Mt. Baker (approximately 137.64 km northwest), Kigali (approximately 162.42 km southwest), and Kampala (approximately 237.61 km northeast), the percentages of errors (differences between the observed and modelled probabilities of occurrence of ionospheric irregularities) smaller than 0.05 are 97.3, 89.4, and 81.3, respectively.
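
    A minimal sketch of the spline-fitting step, assuming hourly occurrence fractions as input; the values are invented and the real model also includes seasonal and solar-flux dependence.

      # Sketch of modelling an occurrence probability as a smooth cubic-spline function of local time.
      # The hourly occurrence fractions below are invented for illustration.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      hour = np.arange(24)
      occurrence = np.array([0.02, 0.02, 0.01, 0.01, 0.01, 0.02, 0.03, 0.04,
                             0.05, 0.05, 0.04, 0.03, 0.03, 0.04, 0.05, 0.06,
                             0.10, 0.25, 0.45, 0.55, 0.40, 0.20, 0.08, 0.03])

      spline = UnivariateSpline(hour, occurrence, k=3, s=0.01)   # cubic smoothing spline
      print(np.round(np.clip(spline(np.arange(0, 24, 6)), 0, 1), 3))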

  2. The basic reproduction number and the probability of extinction for a dynamic epidemic model.

    PubMed

    Neal, Peter

    2012-03-01

    We consider the spread of an epidemic through a population divided into n sub-populations, in which individuals move between populations according to a Markov transition matrix Σ and infectives can only make infectious contacts with members of their current population. Expressions for the basic reproduction number, R₀, and the probability of extinction of the epidemic are derived. It is shown that in contrast to contact distribution models, the distribution of the infectious period affects both the basic reproduction number and the probability of extinction of the epidemic in the limit as the total population size N→∞. The interactions between the infectious period distribution and the transition matrix Σ mean that it is not possible to draw general conclusions about the effects on R₀ and the probability of extinction. However, it is shown that for n=2, the basic reproduction number, R₀, is maximised by a constant length infectious period and is decreasing in ς, the speed of movement between the two populations. PMID:22269870
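
    As a generic single-type illustration (not the paper's multi-population model), the extinction probability of a branching process with Poisson(R0) offspring is the smallest root of q = exp(R0*(q - 1)), which can be found by fixed-point iteration:

      # Generic single-type illustration: for a branching process with Poisson(R0) offspring,
      # the extinction probability q is the smallest root of q = exp(R0 * (q - 1)).
      import numpy as np

      def extinction_probability(r0, iters=200):
          q = 0.0
          for _ in range(iters):
              q = np.exp(r0 * (q - 1.0))
          return q

      for r0 in (0.8, 1.5, 2.5):
          print(r0, round(extinction_probability(r0), 4))   # q = 1 when R0 <= 1, q < 1 otherwise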

  3. Probability image of tissue characteristics for liver fibrosis using multi-Rayleigh model with removal of nonspeckle signals

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    2015-07-01

    We have been developing a quantitative diagnostic method for liver fibrosis using an ultrasound image. In our previous study, we proposed a multi-Rayleigh model to express a probability density function of the echo amplitude from liver fibrosis and proposed a probability imaging method of tissue characteristics on the basis of the multi-Rayleigh model. In an evaluation using the multi-Rayleigh model, we found that a modeling error of the multi-Rayleigh model was increased by the effect of nonspeckle signals. In this paper, we proposed a method of removing nonspeckle signals using the modeling error of the multi-Rayleigh model and evaluated the probability image of tissue characteristics after removing the nonspeckle signals. By removing nonspeckle signals, the modeling error of the multi-Rayleigh model was decreased. A correct probability image of tissue characteristics was obtained by removing nonspeckle signals. We concluded that the removal of nonspeckle signals is important for evaluating liver fibrosis quantitatively.
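
    A small sketch of the multi-Rayleigh amplitude model: the echo-amplitude probability density is a weighted sum of Rayleigh components with different scale parameters. The weights and scales below are illustrative, not values estimated from liver data.

      # Sketch of a multi-Rayleigh amplitude model: the echo-amplitude PDF is a weighted sum of
      # Rayleigh components with different scale parameters. Values are illustrative.
      import numpy as np
      from scipy.stats import rayleigh

      amplitude = np.linspace(0, 8, 801)
      weights = np.array([0.6, 0.3, 0.1])      # mixture weights (sum to 1)
      scales  = np.array([0.5, 1.0, 2.0])      # Rayleigh scale parameters for the tissue classes

      pdf = sum(w * rayleigh.pdf(amplitude, scale=s) for w, s in zip(weights, scales))
      print(pdf.sum() * (amplitude[1] - amplitude[0]))   # integrates to ~1 over a wide enough range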

  4. Spike Train Probability Models for Stimulus-Driven Leaky Integrate-and-Fire Neurons

    PubMed Central

    Koyama, Shinsuke; Kass, Robert E.

    2009-01-01

    Mathematical models of neurons are widely used to improve understanding of neuronal spiking behavior. These models can produce artificial spike trains that resemble actual spike train data in important ways, but they are not very easy to apply to the analysis of spike train data. Instead, statistical methods based on point process models of spike trains provide a wide range of data-analytical techniques. Two simplified point process models have been introduced in the literature: the time-rescaled renewal process (TRRP) and the multiplicative inhomogeneous Markov interval (m-IMI) model. In this letter we investigate the extent to which the TRRP and m-IMI models are able to fit spike trains produced by stimulus-driven leaky integrate-and-fire (LIF) neurons. With a constant stimulus, the LIF spike train is a renewal process, and the m-IMI and TRRP models will describe accurately the LIF spike train variability. With a time-varying stimulus, the probability of spiking under all three of these models depends on both the experimental clock time relative to the stimulus and the time since the previous spike, but it does so differently for the LIF, m-IMI, and TRRP models. We assessed the distance between the LIF model and each of the two empirical models in the presence of a time-varying stimulus. We found that while lack of fit of a Poisson model to LIF spike train data can be evident even in small samples, the m-IMI and TRRP models tend to fit well, and much larger samples are required before there is statistical evidence of lack of fit of the m-IMI or TRRP models. We also found that when the mean of the stimulus varies across time, the m-IMI model provides a better fit to the LIF data than the TRRP, and when the variance of the stimulus varies across time, the TRRP provides the better fit. PMID:18336078
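
    A minimal sketch of a noisy leaky integrate-and-fire neuron with a constant stimulus, under assumed parameter values; the inter-spike intervals it produces are approximately i.i.d., i.e. the spike train behaves as a renewal process.

      # Sketch of a stimulus-driven leaky integrate-and-fire neuron with additive noise. With a
      # constant stimulus the inter-spike intervals are i.i.d., so the spike train is a renewal
      # process. Parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(6)
      dt, tau, v_thresh, v_reset = 1e-3, 0.02, 1.0, 0.0
      stimulus, noise_sd, t_max = 60.0, 3.0, 20.0

      v, spikes = 0.0, []
      for step in range(int(t_max / dt)):
          dv = (-v / tau + stimulus) * dt + noise_sd * np.sqrt(dt) * rng.normal()
          v += dv
          if v >= v_thresh:
              spikes.append(step * dt)
              v = v_reset

      isis = np.diff(spikes)
      print(len(spikes), isis.mean(), isis.std())   # roughly i.i.d. intervals -> renewal statistics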

  5. Precipitation intensity probability distribution modelling for hydrological and construction design purposes

    NASA Astrophysics Data System (ADS)

    Koshinchanov, Georgy; Dimitrov, Dobri

    2008-11-01

    The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood warning procedures, etc. The required values are usually statistical estimates of the precipitation intensity realized over a certain duration (e.g. 5 or 10 min) for a given return period (e.g. 20 or 100 years). The traditional approach to evaluating these precipitation intensities is to process the pluviometer records and fit a probability distribution to samples of intensities valid for certain locations or regions. Those estimates then become part of the state regulations used for various economic activities. Two problems arise with this approach: 1. Owing to various factors the climate conditions change, so the precipitation intensity estimates need regular updating; 2. Because the extremes of the probability distribution are of particular importance in practice, the distribution-fitting methodology needs to give specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intense rainfall records and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analysing intense rainfalls and producing the relevant statistical estimates are: the method of the maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration; the same method, but with separate modelling of the probability distribution for the middle and high probability quantiles; a method similar to the first, but with an intensity threshold of 0.36 mm/min; another method, proposed by the Russian hydrologist G. A. Aleksiev for the regionalization of estimates over a territory, improved and adapted for Bulgaria by S. Gerasimov; and a further method considering only the
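
    A minimal sketch of one standard way to produce such estimates: fit a Gumbel (extreme-value) distribution to annual maximum intensities by the method of moments and read off return levels. The synthetic data and the choice of distribution are illustrative and are not meant to reproduce the Bulgarian methodologies reviewed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic annual maxima of 10-min rainfall intensity (mm/min), illustration only.
        annual_max = rng.gumbel(loc=1.0, scale=0.35, size=50)

        # Method-of-moments Gumbel fit: scale = s*sqrt(6)/pi, loc = mean - 0.5772*scale.
        s = annual_max.std(ddof=1)
        scale = s * np.sqrt(6.0) / np.pi
        loc = annual_max.mean() - 0.5772 * scale

        # Return level for a T-year return period: quantile at non-exceedance prob. 1 - 1/T.
        for T in (20, 100):
            x_T = loc - scale * np.log(-np.log(1.0 - 1.0 / T))
            print(f"{T:>3}-year 10-min intensity ≈ {x_T:.2f} mm/min")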

  6. Insight into Vent Opening Probability in Volcanic Calderas in the Light of a Sill Intrusion Model

    NASA Astrophysics Data System (ADS)

    Giudicepietro, Flora; Macedonio, G.; D'Auria, L.; Martini, M.

    2016-05-01

    The aim of this paper is to discuss a novel approach to provide insights into the probability of vent opening in calderas, using a dynamic model of sill intrusion. The evolution of the stress field is the main factor that controls the vent opening processes in volcanic calderas. On the basis of previous studies, we think that the intrusion of sills is one of the most common mechanisms governing caldera unrest. Therefore, we have investigated the spatial and temporal evolution of the stress field due to the emplacement of a sill at shallow depth to provide insight into vent opening probability. We carried out several numerical experiments by using a physical model, to assess the role of the magma properties (viscosity), host rock characteristics (Young's modulus and thickness), and dynamics of the intrusion process (mass flow rate) in controlling the stress field. Our experiments highlight that high magma viscosity produces larger stress values, while low magma viscosity leads to lower stresses and favors the radial spreading of the sill. A high rock Young's modulus also gives high stress intensity, whereas low values of Young's modulus produce a dramatic reduction of the stress associated with the intrusive process. The maximum intensity of tensile stress is concentrated at the front of the sill and propagates radially with it over time. In our simulations, we find that maximum values of tensile stress occur in ring-shaped areas with radius ranging between 350 m and 2500 m from the injection point, depending on the model parameters. The probability of vent opening is higher in these areas.

  7. Faceted spurs at normal fault scarps: Insights from numerical modeling

    NASA Astrophysics Data System (ADS)

    Petit, C.; Gunnell, Y.; Gonga-Saholiariliva, N.; Meyer, B.; Séguinot, J.

    2009-05-01

    We present a combined surface processes and tectonic model which allows us to determine the climatic and tectonic parameters that control the development of faceted spurs at normal fault scarps. Sensitivity tests to climatic parameter values are performed. For a given precipitation rate, when hillslope diffusion is high and channel bedrock is highly resistant to erosion, the scarp is smooth and undissected. When, instead, the bedrock is easily eroded and diffusion is limited, numerous channels develop and the scarp becomes deeply incised. Between these two end-member states, diffusion and incision compete to produce a range of scarp morphologies, including faceted spurs. The sensitivity tests allow us to determine a dimensionless ratio of erosion, f, for which faceted spurs can develop. This study demonstrates a strong dependence of facet slope angle on throw rate for throw rates between 0.4 and 0.7 mm/a. Facet height is also shown to be a linear function of fault throw rate. Model performance is tested on the Wasatch Fault, Utah, using topographic, geologic, and seismologic data. A Monte Carlo inversion on the topography of a portion of the Weber segment shows that the 5 Ma long development of this scarp has been dominated by a low effective precipitation rate (~1.1 m/a) and a moderate diffusion coefficient (0.13 m²/a). Results demonstrate the ability of our model to estimate normal fault throw rates from the height of triangular facets and to retrieve the average long-term diffusion and incision parameters that prevailed during scarp evolution using an accurate 2-D misfit criterion.
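
    The hillslope-diffusion ingredient of such surface-process models can be illustrated in one dimension: the scarp degrades according to dz/dt = kappa * d2z/dx2 while fault throw keeps offsetting the footwall. The sketch below is a simplified 1-D analogue with an illustrative throw rate inside the range quoted above; it is not the coupled 2-D model (diffusion plus channel incision) used in the study.

        import numpy as np

        kappa = 0.13          # hillslope diffusion coefficient (m^2/a), value quoted above
        throw_rate = 5e-4     # fault throw rate (m/a), illustrative
        dx, nx = 5.0, 200     # grid spacing (m) and number of nodes
        dt = 0.4 * dx**2 / (2.0 * kappa)    # stable step for the explicit scheme
        t_end = 1.0e6         # run duration (a), illustrative

        z = np.zeros(nx)
        fault = nx // 2
        for _ in range(int(t_end / dt)):
            z[:fault] += throw_rate * dt                 # uplift of the footwall block
            z[1:-1] += kappa * dt / dx**2 * (z[2:] - 2.0 * z[1:-1] + z[:-2])  # diffusion

        print(f"scarp relief after {t_end:.0e} a: {z.max() - z.min():.1f} m")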

  8. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    SciTech Connect

    Duffy, Stephen

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.
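
    Failure probabilities for brittle materials such as nuclear graphite are commonly expressed with Weibull-type stochastic strength models; a minimal two-parameter sketch is shown below. The parameter values are illustrative, and the project's actual models additionally handle anisotropy and irradiation-induced property changes.

        import numpy as np

        def weibull_failure_probability(stress, sigma_0, m):
            """Two-parameter Weibull probability of failure at a given tensile stress.

            sigma_0 : characteristic strength (stress at ~63.2 % failure probability)
            m       : Weibull modulus (controls the scatter of the strength distribution)
            """
            stress = np.clip(np.asarray(stress, dtype=float), 0.0, None)
            return 1.0 - np.exp(-(stress / sigma_0) ** m)

        # Illustrative values for a graphite-like material (not measured data).
        stresses = np.array([5.0, 10.0, 15.0, 20.0])   # MPa
        print(weibull_failure_probability(stresses, sigma_0=20.0, m=10.0))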

  9. Void probability as a function of the void's shape and scale-invariant models

    NASA Technical Reports Server (NTRS)

    Elizalde, E.; Gaztanaga, E.

    1991-01-01

    The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is higher for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
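
    For reference, when counts in cells follow a negative binomial distribution with mean nbar and clustering parameter g, the void probability (the probability that a cell of the chosen shape and volume is empty) has the simple closed form evaluated below. The parameter values are illustrative, and the paper's extension of the negative binomial model may differ in detail.

        import numpy as np

        def void_probability_nb(nbar, g):
            """P(N = 0) for negative-binomial counts in cells with mean nbar and
            clustering parameter g; g -> 0 recovers the Poisson result exp(-nbar)."""
            nbar = np.asarray(nbar, dtype=float)
            return np.exp(-nbar) if g == 0.0 else (1.0 + g * nbar) ** (-1.0 / g)

        # The cell-shape dependence enters through the volume-averaged correlations
        # (represented here by g); compare with the Poisson value exp(-nbar).
        for nbar in (0.5, 1.0, 2.0, 5.0):
            print(nbar, void_probability_nb(nbar, g=0.7), np.exp(-nbar))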

  10. Diffusion-Limited Aggregation with Anisotropic Sticking Probability: A Tentative Model for River Networks

    NASA Astrophysics Data System (ADS)

    Kondoh, Hiroshi; Matsushita, Mitsugu

    1986-10-01

    A diffusion-limited aggregation (DLA) model with anisotropic sticking probability Ps is computer-simulated on a two-dimensional square lattice. The cluster grows from a seed particle at the origin in the positive-y half-plane with an absorbing boundary along the x-axis. The cluster is found to grow anisotropically as R_∥ ∼ N^(ν_∥) and R_⊥ ∼ N^(ν_⊥), where R_⊥ and R_∥ are the radii of gyration of the cluster along the x- and y-axes, respectively, and N is the number of particles constituting the cluster. The two exponents are shown to become asymptotically ν_∥ = 2/3 and ν_⊥ = 1/3 whenever the sticking anisotropy exists. It is also found that the present model is fairly consistent with Hack's law of river networks, suggesting that it is a good candidate for a prototype model of the evolution of river networks.

  11. A cellular automata traffic flow model considering the heterogeneity of acceleration and delay probability

    NASA Astrophysics Data System (ADS)

    Li, Qi-Lang; Wong, S. C.; Min, Jie; Tian, Shuo; Wang, Bing-Hong

    2016-08-01

    This study examines the cellular automata traffic flow model, which considers the heterogeneity of vehicle acceleration and the delay probability of vehicles. Computer simulations are used to identify three typical phases in the model: free-flow, synchronized flow, and wide moving traffic jam. In the synchronized flow region of the fundamental diagram, the low and high velocity vehicles compete with each other and play an important role in the evolution of the system. The analysis shows that there are two types of bistable phases. However, in the original Nagel and Schreckenberg cellular automata traffic model, there are only two kinds of traffic conditions, namely, free-flow and traffic jams. The synchronized flow phase and bistable phase have not been found.
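
    A minimal sketch of a Nagel-Schreckenberg-type cellular automaton in which each vehicle carries its own acceleration and randomization (delay) probability. The update rules and parameter ranges are illustrative and are not the exact heterogeneous model of the study.

        import numpy as np

        rng = np.random.default_rng(2)

        L, N, v_max, steps = 200, 60, 5, 500         # ring length, vehicles, max speed
        pos = np.sort(rng.choice(L, size=N, replace=False))
        vel = np.zeros(N, dtype=int)
        acc = rng.choice([1, 2], size=N)             # heterogeneous acceleration
        p_delay = rng.uniform(0.1, 0.5, size=N)      # heterogeneous delay probability

        for _ in range(steps):
            gaps = (np.roll(pos, -1) - pos - 1) % L  # empty cells to the vehicle ahead
            vel = np.minimum(vel + acc, v_max)       # acceleration step
            vel = np.minimum(vel, gaps)              # braking to avoid collisions
            slow = rng.random(N) < p_delay           # per-vehicle random delay
            vel = np.where(slow, np.maximum(vel - 1, 0), vel)
            pos = (pos + vel) % L                    # parallel position update

        print(f"density = {N / L:.2f}, mean speed = {vel.mean():.2f}, "
              f"flow ≈ {vel.mean() * N / L:.3f} vehicles per step")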

  12. Centrifuge modeling of buried continuous pipelines subjected to normal faulting

    NASA Astrophysics Data System (ADS)

    Moradi, Majid; Rojhani, Mahdi; Galandarzadeh, Abbas; Takada, Shiro

    2013-03-01

    Seismic ground faulting is the greatest hazard for continuous buried pipelines. Over the years, researchers have attempted to understand pipeline behavior mostly via numerical modeling such as the finite element method. The lack of well-documented field case histories of pipeline failure from seismic ground faulting and the cost and complicated facilities needed for full-scale experimental simulation mean that a centrifuge-based method to determine the behavior of pipelines subjected to faulting is best to verify numerical approaches. This paper presents results from three centrifuge tests designed to investigate continuous buried steel pipeline behavior subjected to normal faulting. The experimental setup and procedure are described and the recorded axial and bending strains induced in a pipeline are presented and compared to those obtained via analytical methods. The influence of factors such as faulting offset, burial depth and pipe diameter on the axial and bending strains of pipes and on ground soil failure and pipeline deformation patterns are also investigated. Finally, the tensile rupture of a pipeline due to normal faulting is investigated.

  13. An Approach for Improving Prediction in River System Models Using Bayesian Probabilities of Parameter Performance

    NASA Astrophysics Data System (ADS)

    Kim, S. S. H.; Hughes, J. D.; Chen, J.; Dutta, D.; Vaze, J.

    2014-12-01

    Achieving predictive success is a major challenge in hydrological modelling. Predictive metrics indicate whether models and parameters are appropriate for impact assessment, design, planning and management, forecasting and underpinning policy. It is often found that very different parameter sets and model structures are equally acceptable system representations (commonly described as equifinality). Furthermore, parameters that produce the best goodness of fit during a calibration period may often yield poor results outside of that period. A calibration method is presented that uses a recursive Bayesian filter to estimate the probability of consistent performance of parameter sets in different sub-periods. The result is a probability distribution for each specified performance interval. This generic method utilises more information within time-series data than is typically used for calibration, and could be adopted for different types of time-series modelling applications. Where conventional calibration methods implicitly identify the best performing parameterisations on average, the new method looks at the consistency of performance during sub-periods. The proposed calibration method, therefore, can be used to avoid heavy weighting toward rare periods of good agreement. The method is trialled in a conceptual river system model called the Australian Water Resources Assessments River (AWRA-R) model in the Murray-Darling Basin, Australia. The new method is tested via cross-validation and results are compared to a traditional split-sample calibration/validation to evaluate the new technique's ability to predict daily streamflow. The results showed that the new calibration method could produce parameterisations that performed better in validation periods than optimum calibration parameter sets. The method shows an ability to improve predictive performance and to provide more realistic flux terms compared with traditional split-sample calibration methods.

  14. Investigating the probability of detection of typical cavity shapes through modelling and comparison of geophysical techniques

    NASA Astrophysics Data System (ADS)

    James, P.

    2011-12-01

    With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment and their detection is an increasingly important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard in the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but they still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well-known methods rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, and that the surveyor has the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance
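
    The forward response of one of the techniques being compared can be written down analytically for the simplest target: the vertical gravity anomaly over a buried spherical cavity. The sketch below evaluates it along a profile; the Matlab software described above handles more general cavity shapes and the other methods as well, and the values used here are illustrative.

        import numpy as np

        G = 6.674e-11    # gravitational constant (m^3 kg^-1 s^-2)

        def sphere_gravity_anomaly(x, depth, radius, drho):
            """Vertical gravity anomaly (m/s^2) along a surface profile x (m) over a
            buried sphere of given radius (m), depth to centre (m) and density
            contrast drho (kg/m^3, negative for an air-filled cavity)."""
            dM = 4.0 / 3.0 * np.pi * radius**3 * drho
            return G * dM * depth / (x**2 + depth**2) ** 1.5

        x = np.linspace(-30.0, 30.0, 121)
        dg = sphere_gravity_anomaly(x, depth=5.0, radius=2.0, drho=-1800.0)
        print(f"peak anomaly ≈ {dg.min() * 1e8:.1f} microGal")   # 1 microGal = 1e-8 m/s^2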

  15. Modeling Normal Shock Velocity Curvature Relation for Heterogeneous Explosives

    NASA Astrophysics Data System (ADS)

    Yoo, Sunhee; Crochet, Michael; Pemberton, Steve

    2015-06-01

    The normal shock velocity-curvature relation, Dn(κ), on a detonation shock surface has been an important functional quantity to measure in order to understand the shock strength exerted against the material interface between a main explosive charge and the case of an explosive munition. The Dn(κ) relation is considered an intrinsic property of an explosive, and can be experimentally deduced by rate stick tests at various charge diameters. However, experimental measurements of the Dn(κ) relation for heterogeneous explosives such as PBXN-111 are challenging due to the non-smoothness and asymmetry usually observed in the experimental streak records of explosion fronts. Out of the many possibilities, the asymmetric character may be attributed to the heterogeneity of the explosives, a hypothesis which raises two questions: (1) is there any simple hydrodynamic model that can explain such an asymmetric shock evolution, and (2) what statistics can be derived for the asymmetry using simulations with defined structural heterogeneity in the unreacted explosive? Saenz, Taylor and Stewart studied constitutive models for derivation of the Dn(κ) relation on porous `homogeneous' explosives and carried out simulations in a spherical coordinate frame. In this paper, we extend their model to account for `heterogeneity' and present shock evolutions in heterogeneous explosives using 2-D hydrodynamic simulations with some statistical examination. (96TW-2015-0004)

  16. A radiation damage repair model for normal tissues

    NASA Astrophysics Data System (ADS)

    Partridge, Mike

    2008-07-01

    A cellular Monte Carlo model describing radiation damage and repair in normal epithelial tissues is presented. The deliberately simplified model includes cell cycling, cell motility and radiation damage response (cell cycle arrest and cell death) only. Results demonstrate that the model produces a stable equilibrium system for mean cell cycle times in the range 24-96 h. Simulated irradiation of these stable equilibrium systems produced a range of responses that are shown to be consistent with experimental and clinical observation, including (i) re-epithelialization of radiation-induced lesions by a mixture of cell migration into the wound and repopulation at the periphery; (ii) observed radiosensitivity that is quantitatively consistent with both the rate of induction of irreparable DNA lesions and, independently, with the observed acute oral and pharyngeal mucosal reactions to radiotherapy; (iii) an observed time between irradiation and maximum toxicity that is consistent with experimental data for skin; (iv) quantitatively accurate predictions of low-dose hyper-radiosensitivity; (v) Gompertzian repopulation for very small lesions (~2000 cells) and (vi) a linear rate of re-epithelialization of 5-10 µm h⁻¹ for large lesions (>15 000 cells).

  17. Predicting Mortality in Low-Income Country ICUs: The Rwanda Mortality Probability Model (R-MPM)

    PubMed Central

    Kiviri, Willy; Fowler, Robert A.; Mueller, Ariel; Novack, Victor; Banner-Goodspeed, Valerie M.; Weinkauf, Julia L.; Talmor, Daniel S.; Twagirumugabe, Theogene

    2016-01-01

    Introduction Intensive Care Unit (ICU) risk prediction models are used to compare outcomes for quality improvement initiatives, benchmarking, and research. While such models provide robust tools in high-income countries, an ICU risk prediction model has not been validated in a low-income country where ICU population characteristics are different from those in high-income countries, and where laboratory-based patient data are often unavailable. We sought to validate the Mortality Probability Admission Model, version III (MPM0-III) in two public ICUs in Rwanda and to develop a new Rwanda Mortality Probability Model (R-MPM) for use in low-income countries. Methods We prospectively collected data on all adult patients admitted to Rwanda’s two public ICUs between August 19, 2013 and October 6, 2014. We described demographic and presenting characteristics and outcomes. We assessed the discrimination and calibration of the MPM0-III model. Using stepwise selection, we developed a new logistic model for risk prediction, the R-MPM, and used bootstrapping techniques to test for optimism in the model. Results Among 427 consecutive adults, the median age was 34 (IQR 25–47) years and mortality was 48.7%. Mechanical ventilation was initiated for 85.3%, and 41.9% received vasopressors. The MPM0-III predicted mortality with area under the receiver operating characteristic curve of 0.72 and Hosmer-Lemeshow chi-square statistic p = 0.024. We developed a new model using five variables: age, suspected or confirmed infection within 24 hours of ICU admission, hypotension or shock as a reason for ICU admission, Glasgow Coma Scale score at ICU admission, and heart rate at ICU admission. Using these five variables, the R-MPM predicted outcomes with area under the ROC curve of 0.81 with 95% confidence interval of (0.77, 0.86), and Hosmer-Lemeshow chi-square statistic p = 0.154. Conclusions The MPM0-III has modest ability to predict mortality in a population of Rwandan ICU patients. The R

  18. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  19. Likelihood analysis of species occurrence probability from presence-only data for modelling species distributions

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.

    2012-01-01

    1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities

  20. Model Assembly for Estimating Cell Surviving Fraction for Both Targeted and Nontargeted Effects Based on Microdosimetric Probability Densities

    PubMed Central

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641

  1. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    PubMed

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641

  2. Repopulation of interacting tumor cells during fractionated radiotherapy: Stochastic modeling of the tumor control probability

    SciTech Connect

    Fakir, Hatim; Hlatky, Lynn; Li, Huamin; Sachs, Rainer

    2013-12-15

    Purpose: Optimal treatment planning for fractionated external beam radiation therapy requires inputs from radiobiology based on recent thinking about the “five Rs” (repopulation, radiosensitivity, reoxygenation, redistribution, and repair). The need is especially acute for the newer, often individualized, protocols made feasible by progress in image guided radiation therapy and dose conformity. Current stochastic tumor control probability (TCP) models incorporating tumor repopulation effects consider “stem-like cancer cells” (SLCC) to be independent, but the authors here propose that SLCC-SLCC interactions may be significant. The authors present a new stochastic TCP model for repopulating SLCC interacting within microenvironmental niches. Our approach is meant mainly for comparing similar protocols. It aims at practical generalizations of previous mathematical models. Methods: The authors consider protocols with complete sublethal damage repair between fractions. The authors use customized open-source software and recent mathematical approaches from stochastic process theory for calculating the time-dependent SLCC number and thereby estimating SLCC eradication probabilities. As specific numerical examples, the authors consider predicted TCP results for a 2 Gy per fraction, 60 Gy protocol compared to 64 Gy protocols involving early or late boosts in a limited volume to some fractions. Results: In sample calculations with linear quadratic parameters α = 0.3 per Gy, α/β = 10 Gy, boosting is predicted to raise TCP from a dismal 14.5% observed in some older protocols for advanced NSCLC to above 70%. This prediction is robust as regards: (a) the assumed values of parameters other than α and (b) the choice of models for intraniche SLCC-SLCC interactions. However, α = 0.03 per Gy leads to a prediction of almost no improvement when boosting. Conclusions: The predicted efficacy of moderate boosts depends sensitively on α. Presumably, the larger values of α are
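
    For comparison, a minimal sketch of the baseline (non-interacting, Poisson-statistics) TCP calculation that such stochastic models generalize, using a linear-quadratic surviving fraction per fraction with the alpha and alpha/beta values quoted above. The initial clonogen number is assumed for illustration, and repopulation and SLCC-SLCC interactions, which are the paper's focus, are deliberately left out.

        import numpy as np

        def poisson_tcp(n_clonogens, dose_per_fx, n_fx, alpha, alpha_beta):
            """Poisson TCP with linear-quadratic cell kill and full inter-fraction
            repair; no repopulation and no cell-cell interactions (baseline only)."""
            beta = alpha / alpha_beta
            sf_per_fraction = np.exp(-alpha * dose_per_fx - beta * dose_per_fx**2)
            expected_survivors = n_clonogens * sf_per_fraction ** n_fx
            return np.exp(-expected_survivors)

        # 2 Gy x 30 = 60 Gy, alpha/beta = 10 Gy; the clonogen number is an assumption.
        for alpha in (0.3, 0.03):
            tcp = poisson_tcp(1e7, dose_per_fx=2.0, n_fx=30, alpha=alpha, alpha_beta=10.0)
            print(f"alpha = {alpha} /Gy: TCP ≈ {tcp:.3g}")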

  3. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models

    PubMed Central

    Stein, Richard R.; Marks, Debora S.; Sander, Chris

    2015-01-01

    Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene–gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866

  4. Model for the charge-transfer probability in helium nanodroplets following electron-impact ionization

    SciTech Connect

    Ellis, Andrew M.; Yang Shengfu

    2007-09-15

    A theoretical model has been developed to describe the probability of charge transfer from helium cations to dopant molecules inside helium nanodroplets following electron-impact ionization. The location of the initial charge site inside helium nanodroplets subject to electron impact has been investigated and is found to play an important role in understanding the ionization of dopants inside helium droplets. The model is consistent with a charge migration process in small helium droplets that is strongly directed by intermolecular forces originating from the dopant, whereas for large droplets (tens of thousands of helium atoms and larger) the charge migration increasingly takes on the character of a random walk. This suggests a clear droplet size limit for the use of electron-impact mass spectrometry for detecting molecules in helium droplets.

  5. Models for the probability densities of the turbulent plasma flux in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Bergsaker, A. S.; Fredriksen, Å; Pécseli, H. L.; Trulsen, J. K.

    2015-10-01

    Observations of turbulent transport in magnetized plasmas indicate that plasma losses can be due to coherent structures or bursts of plasma rather than a classical random walk or diffusion process. A model for synthetic data based on coherent plasma flux events is proposed, where all basic properties can be obtained analytically in terms of a few control parameters. One basic parameter in the present case is the density of burst events in a long time-record, together with parameters in a model of the individual pulse shapes and the statistical distribution of these parameters. The model and its extensions give the probability density of the plasma flux. An interesting property of the model is a prediction of a near-parabolic relation between skewness and kurtosis of the statistical flux distribution for a wide range of parameters. The model is generalized by allowing for an additive random noise component. When this noise dominates the signal we can find a transition to standard results for Gaussian random noise. Applications of the model are illustrated by data from the toroidal Blaamann plasma.

  6. On the thresholds, probability densities, and critical exponents of Bak-Sneppen-like models

    NASA Astrophysics Data System (ADS)

    Garcia, Guilherme J. M.; Dickman, Ronald

    2004-10-01

    We report a simple method to accurately determine the threshold and the exponent ν of the Bak-Sneppen (BS) model and also investigate the BS universality class. For the random-neighbor version of the BS model, we find the threshold x* = 0.33332(3), in agreement with the exact result x* = 1/3 given by mean-field theory. For the one-dimensional original model, we find x* = 0.6672(2), in good agreement with the results reported in the literature; for the anisotropic BS model we obtain x* = 0.7240(1). We study the finite-size effect x*(L) − x*(L→∞) ∝ L^(−ν), observed in a system with L sites, and find ν = 1.00(1) for the random-neighbor version, ν = 1.40(1) for the original model, and ν = 1.58(1) for the anisotropic case. Finally, we discuss the effect of defining the extremal site as the one which minimizes a general function f(x), instead of simply f(x) = x as in the original updating rule. We emphasize that models with extremal dynamics have singular stationary probability distributions p(x). Our simulations indicate the existence of two symmetry-based universality classes.
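
    A minimal sketch of the one-dimensional Bak-Sneppen dynamics with a crude threshold estimate read off the stationary fitness histogram; the lattice size and run length are kept small, so the estimate is much less precise than the values quoted above.

        import numpy as np

        rng = np.random.default_rng(3)

        L, steps, burn_in = 256, 200_000, 100_000
        x = rng.random(L)                        # fitness values on a ring
        hist = np.zeros(100)

        for step in range(steps):
            i = np.argmin(x)                     # extremal (minimum-fitness) site
            for j in (i - 1, i, (i + 1) % L):    # replace it and its nearest neighbours
                x[j] = rng.random()
            if step >= burn_in:
                hist += np.histogram(x, bins=100, range=(0.0, 1.0))[0]

        # The stationary p(x) is close to zero below the threshold x*; estimate x* as
        # the first bin whose density exceeds half of the plateau value (crude).
        density = 100.0 * hist / hist.sum()
        x_star = np.argmax(density > 0.5 * density[80:].mean()) / 100.0
        print(f"estimated threshold x* ≈ {x_star:.2f}  (reported value: 0.6672(2))")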

  7. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean submodel in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing submodels were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.

  8. Climate forecasting on the basis of imprecise probabilities - an example with a simple model of the thermohaline circulation

    NASA Astrophysics Data System (ADS)

    Kriegler, E.; Held, H.; Zickfeld, K.

    2003-04-01

    Climate forecasting with simple models has been hampered, among other things, by the difficulty of performing an accurate assessment of probabilities for crucial model parameters. Expert elicitations and Bayesian updating of non-informative priors have been used to determine such probabilities. Both methods hinge on the specification of a precise probability, be it an aggregate of different expert assessments or a particular choice of prior distribution. It is unclear how such a choice should be made. Imprecise probability models can be used to circumvent the problem. The question arises how imprecise probabilities for the model parameters can be processed to predict the model output. We propose a method to process imprecise probabilities in simple models, which is based on a special type of imprecise probability theory, the Dempster-Shafer theory of evidence (DST). We show, for the example of climate sensitivity, how a multitude of expert elicitations is compressed into a lower-upper-probability model that can be quantified in terms of DST. This information, together with estimates of radiative forcing, is projected onto future temperature change, which in turn is used to force a simple model of the thermohaline circulation. An algorithm to compute the uncertainty in the overturning strength from the uncertainty in the temperature forcing is introduced. It can be shown that the DST specification of the forcing leads to a DST-type uncertainty in the model output. This information is used to present the resulting lower-upper probability model for the overturning strength in a more intuitive way.

  9. Modelling the wind damage probability in forests in Southwestern Germany for the 1999 winter storm 'Lothar'.

    PubMed

    Schindler, Dirk; Grebhan, Karin; Albrecht, Axel; Schönborn, Jochen

    2009-11-01

    The wind damage probability (P(DAM)) in the forests in the federal state of Baden-Wuerttemberg (Southwestern Germany) was calculated using weights of evidence (WofE) methodology and a logistic regression model (LRM) after the winter storm 'Lothar' in December 1999. A geographic information system (GIS) was used for the area-wide spatial prediction and mapping of P(DAM). The combination of the six evidential themes forest type, soil type, geology, soil moisture, soil acidification, and the 'Lothar' maximum gust field predicted wind damage best and was used to map P(DAM) in a 50 x 50 m resolution grid. GIS software was utilised to produce probability maps, which allowed the identification of areas of low, moderate, and high P(DAM) across the study area. The highest P(DAM) values were calculated for coniferous forest growing on acidic, fresh to moist soils on bunter sandstone formations, provided that 'Lothar' maximum gust speed exceeded 35 m s⁻¹ in the areas in question. One of the most significant benefits associated with the results of this study is that, for the first time, there is a GIS-based area-wide quantification of P(DAM) in the forests in Southwestern Germany. In combination with the experience and expert knowledge of local foresters, the probability maps produced can be used as an important tool for decision support with respect to future silvicultural activities aimed at reducing wind damage. One limitation of the P(DAM) predictions is that they are based on only one major storm event. At the moment it is not possible to relate storm event intensity to the amount of wind damage in forests due to the lack of comprehensive long-term tree and stand damage data across the study area. PMID:19562383

  10. Analytical expression for the exit probability of the q -voter model in one dimension

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Galam, Serge

    2015-07-01

    We present in this paper an approximation that is able to give an analytical expression for the exit probability of the q-voter model in one dimension. This expression gives a better fit for the more recent data about simulations in large networks [A. M. Timpanaro and C. P. C. do Prado, Phys. Rev. E 89, 052808 (2014), 10.1103/PhysRevE.89.052808] and as such departs from the expression ρ^q/(ρ^q + (1−ρ)^q) found in papers that investigated small networks only [R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007; P. Przybyła et al., Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117; F. Slanina et al., Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006]. The approximation consists in assuming a large separation between the time scales at which active groups of agents convince inactive ones and the time taken in the competition between active groups. Some interesting findings are that for q = 2 we still have ρ²/(ρ² + (1−ρ)²) as the exit probability, and for q > 2 we can obtain a lower-order approximation of the form ρ^s/(ρ^s + (1−ρ)^s) with s varying from q for low values of q to q − 1/2 for large values of q. As such, this work can also be seen as a deduction of why the exit probability ρ^q/(ρ^q + (1−ρ)^q) gives a good fit, without relying on mean-field arguments or on the assumption that only the first step is nondeterministic, as q and q − 1/2 give very similar results when q → ∞.
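
    For reference, the exit probability expression discussed above is easy to evaluate directly; the short sketch below compares the exponent-q form with the lower-order exponent q − 1/2 form, whose absolute difference shrinks as q grows.

        import numpy as np

        def exit_probability(rho, s):
            """E(rho) = rho^s / (rho^s + (1 - rho)^s)."""
            rho = np.asarray(rho, dtype=float)
            return rho**s / (rho**s + (1.0 - rho) ** s)

        rho = 0.4    # illustrative initial fraction of agents holding the opinion
        for q in (2, 3, 5, 10):
            print(f"q = {q:2d}:  exponent q -> {exit_probability(rho, q):.4f},  "
                  f"exponent q - 1/2 -> {exit_probability(rho, q - 0.5):.4f}")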

  11. Inverse modeling of hydraulic tests in fractured crystalline rock based on a transition probability geostatistical approach

    NASA Astrophysics Data System (ADS)

    Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel

    2011-12-01

    This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km³ that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.

  12. An investigation of a quantum probability model for the constructive effect of affective evaluation.

    PubMed

    White, Lee C; Barqué-Duran, Albert; Pothos, Emmanuel M

    2016-01-13

    The idea that choices can have a constructive effect has received a great deal of empirical support. The act of choosing appears to influence subsequent preferences for the options available. Recent research has proposed a cognitive model based on quantum probability (QP), which suggests that whether or not a participant provides an affective evaluation for a positively or negatively valenced stimulus can also be constructive and so, for example, influence the affective evaluation of a second oppositely valenced stimulus. However, there are some outstanding methodological questions in relation to this previous research. This paper reports the results of three experiments designed to resolve these questions. Experiment 1, using a binary response format, provides partial support for the interaction predicted by the QP model; and Experiment 2, which controls for the length of time participants have to respond, fully supports the QP model. Finally, Experiment 3 sought to determine whether the key effect can generalize beyond affective judgements about visual stimuli. Using judgements about the trustworthiness of well-known people, the predictions of the QP model were confirmed. Together, these three experiments provide further support for the QP model of the constructive effect of simple evaluations. PMID:26621993

  13. Stochastic assessment of Phien generalized reservoir storage-yield-probability models using global runoff data records

    NASA Astrophysics Data System (ADS)

    Adeloye, Adebayo J.; Soundharajan, Bankaru-Swamy; Musto, Jagarkhin N.; Chiamsathit, Chuthamat

    2015-10-01

    This study has carried out an assessment of Phien generalised storage-yield-probability (S-Y-P) models using recorded runoff data of six global rivers that were carefully selected such that they satisfy the criteria specified for the models. Using stochastic hydrology, 2000 replicates of the historic records were generated and used to drive the sequent peak algorithm (SPA) for estimating capacity of hypothetical reservoirs at the respective sites. The resulting ensembles of reservoir capacity estimates were then analysed to determine the mean, standard deviation and quantiles, which were then compared with corresponding estimates produced by the Phien models. The results showed that Phien models produced a mix of significant under- and over-predictions of the mean and standard deviation of capacity, with the under-prediction situations occurring as the level of development reduces. On the other hand, consistent over-prediction was obtained for full regulation for all the rivers analysed. The biases in the reservoir capacity quantiles were equally high, implying that the limitations of the Phien models affect the entire distribution function of reservoir capacity. Due to very high values of these errors, it is recommended that the Phien relationships should be avoided for reservoir planning.
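
    The sequent peak algorithm (SPA) used above to estimate reservoir capacity is short enough to sketch directly; the version below assumes no spill or evaporation losses and a constant draft, and the synthetic inflow series is purely illustrative.

        import numpy as np

        def sequent_peak_capacity(inflow, demand):
            """Required storage so that the demand can always be met from the inflow
            sequence (sequent peak algorithm; no spill or evaporation losses)."""
            deficit, capacity = 0.0, 0.0
            for q in np.asarray(inflow, dtype=float):
                deficit = max(0.0, deficit + demand - q)   # shortfall since last full state
                capacity = max(capacity, deficit)
            return capacity

        rng = np.random.default_rng(4)
        inflow = rng.gamma(shape=2.0, scale=50.0, size=240)   # 20 years of monthly flows
        capacity = sequent_peak_capacity(inflow, demand=0.7 * inflow.mean())
        print(f"required storage at 70 % draft ≈ {capacity:.1f} volume units")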

  14. Probability based remaining capacity estimation using data-driven and neural network model

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai

    2016-05-01

    Since lithium-ion batteries are complex electrochemical devices that are assembled in large numbers into packs, their monitoring and safety are key issues for the application of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is employed in combination with an electrochemical model to obtain more accurate voltage predictions. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operating current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.

  15. Modeling and mapping the probability of occurrence of invasive wild pigs across the contiguous United States.

    PubMed

    McClure, Meredith L; Burdett, Christopher L; Farnsworth, Matthew L; Lutman, Mark W; Theobald, David M; Riggs, Philip D; Grear, Daniel A; Miller, Ryan S

    2015-01-01

    Wild pigs (Sus scrofa), also known as wild swine, feral pigs, or feral hogs, are one of the most widespread and successful invasive species around the world. Wild pigs have been linked to extensive and costly agricultural damage and present a serious threat to plant and animal communities due to their rooting behavior and omnivorous diet. We modeled the current distribution of wild pigs in the United States to better understand the physiological and ecological factors that may determine their invasive potential and to guide future study and eradication efforts. Using national-scale wild pig occurrence data reported between 1982 and 2012 by wildlife management professionals, we estimated the probability of wild pig occurrence across the United States using a logistic discrimination function and environmental covariates hypothesized to influence the distribution of the species. Our results suggest the distribution of wild pigs in the U.S. was most strongly limited by cold temperatures and availability of water, and that they were most likely to occur where potential home ranges had higher habitat heterogeneity, providing access to multiple key resources including water, forage, and cover. High probability of occurrence was also associated with frequent high temperatures, up to a high threshold. However, this pattern is driven by pigs' historic distribution in warm climates of the southern U.S. Further study of pigs' ability to persist in cold northern climates is needed to better understand whether low temperatures actually limit their distribution. Our model highlights areas at risk of invasion as those with habitat conditions similar to those found in pigs' current range that are also near current populations. This study provides a macro-scale approach to generalist species distribution modeling that is applicable to other generalist and invasive species. PMID:26267266

  16. PHOTOMETRIC REDSHIFTS AND QUASAR PROBABILITIES FROM A SINGLE, DATA-DRIVEN GENERATIVE MODEL

    SciTech Connect

    Bovy, Jo; Hogg, David W.; Weaver, Benjamin A.; Myers, Adam D.; Hennawi, Joseph F.; McMahon, Richard G.; Schiminovich, David; Sheldon, Erin S.; Brinkmann, Jon; Schneider, Donald P.

    2012-04-10

    We describe a technique for simultaneously classifying and estimating the redshift of quasars. It can separate quasars from stars in arbitrary redshift ranges, estimate full posterior distribution functions for the redshift, and naturally incorporate flux uncertainties, missing data, and multi-wavelength photometry. We build models of quasars in flux-redshift space by applying the extreme deconvolution technique to estimate the underlying density. By integrating this density over redshift, one can obtain quasar flux densities in different redshift ranges. This approach allows for efficient, consistent, and fast classification and photometric redshift estimation. This is achieved by combining the speed obtained by choosing simple analytical forms as the basis of our density model with the flexibility of non-parametric models through the use of many simple components with many parameters. We show that this technique is competitive with the best photometric quasar classification techniques-which are limited to fixed, broad redshift ranges and high signal-to-noise ratio data-and with the best photometric redshift techniques when applied to broadband optical data. We demonstrate that the inclusion of UV and NIR data significantly improves photometric quasar-star separation and essentially resolves all of the redshift degeneracies for quasars inherent to the ugriz filter system, even when included data have a low signal-to-noise ratio. For quasars spectroscopically confirmed by the SDSS 84% and 97% of the objects with Galaxy Evolution Explorer UV and UKIDSS NIR data have photometric redshifts within 0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about a factor of three improvement over ugriz-only photometric redshifts. Our code to calculate quasar probabilities and redshift probability distributions is publicly available.

  17. Photometric redshifts and quasar probabilities from a single, data-driven generative model

    SciTech Connect

    Bovy, Jo; Myers, Adam D.; Hennawi, Joseph F.; Hogg, David W.; McMahon, Richard G.; Schiminovich, David; Sheldon, Erin S.; Brinkmann, Jon; Schneider, Donald P.; Weaver, Benjamin A.

    2012-03-20

    We describe a technique for simultaneously classifying and estimating the redshift of quasars. It can separate quasars from stars in arbitrary redshift ranges, estimate full posterior distribution functions for the redshift, and naturally incorporate flux uncertainties, missing data, and multi-wavelength photometry. We build models of quasars in flux-redshift space by applying the extreme deconvolution technique to estimate the underlying density. By integrating this density over redshift, one can obtain quasar flux densities in different redshift ranges. This approach allows for efficient, consistent, and fast classification and photometric redshift estimation. This is achieved by combining the speed obtained by choosing simple analytical forms as the basis of our density model with the flexibility of non-parametric models through the use of many simple components with many parameters. We show that this technique is competitive with the best photometric quasar classification techniques—which are limited to fixed, broad redshift ranges and high signal-to-noise ratio data—and with the best photometric redshift techniques when applied to broadband optical data. We demonstrate that the inclusion of UV and NIR data significantly improves photometric quasar-star separation and essentially resolves all of the redshift degeneracies for quasars inherent to the ugriz filter system, even when included data have a low signal-to-noise ratio. For quasars spectroscopically confirmed by the SDSS 84% and 97% of the objects with Galaxy Evolution Explorer UV and UKIDSS NIR data have photometric redshifts within 0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about a factor of three improvement over ugriz-only photometric redshifts. Our code to calculate quasar probabilities and redshift probability distributions is publicly available.

  18. Modeling and Mapping the Probability of Occurrence of Invasive Wild Pigs across the Contiguous United States

    PubMed Central

    McClure, Meredith L.; Burdett, Christopher L.; Farnsworth, Matthew L.; Lutman, Mark W.; Theobald, David M.; Riggs, Philip D.; Grear, Daniel A.; Miller, Ryan S.

    2015-01-01

    Wild pigs (Sus scrofa), also known as wild swine, feral pigs, or feral hogs, are one of the most widespread and successful invasive species around the world. Wild pigs have been linked to extensive and costly agricultural damage and present a serious threat to plant and animal communities due to their rooting behavior and omnivorous diet. We modeled the current distribution of wild pigs in the United States to better understand the physiological and ecological factors that may determine their invasive potential and to guide future study and eradication efforts. Using national-scale wild pig occurrence data reported between 1982 and 2012 by wildlife management professionals, we estimated the probability of wild pig occurrence across the United States using a logistic discrimination function and environmental covariates hypothesized to influence the distribution of the species. Our results suggest the distribution of wild pigs in the U.S. was most strongly limited by cold temperatures and availability of water, and that they were most likely to occur where potential home ranges had higher habitat heterogeneity, providing access to multiple key resources including water, forage, and cover. High probability of occurrence was also associated with frequent high temperatures, up to a high threshold. However, this pattern is driven by pigs’ historic distribution in warm climates of the southern U.S. Further study of pigs’ ability to persist in cold northern climates is needed to better understand whether low temperatures actually limit their distribution. Our model highlights areas at risk of invasion as those with habitat conditions similar to those found in pigs’ current range that are also near current populations. This study provides a macro-scale approach to generalist species distribution modeling that is applicable to other generalist and invasive species. PMID:26267266

  19. Fixation probability and the crossing time in the Wright-Fisher multiple alleles model

    NASA Astrophysics Data System (ADS)

    Gill, Wonpyong

    2009-08-01

    The fixation probability and crossing time in the Wright-Fisher multiple alleles model, which describes a finite haploid population, were calculated by switching on an asymmetric sharply-peaked landscape with a positive asymmetric parameter, r, such that the reversal allele of the optimal allele has higher fitness than the optimal allele. The fixation probability, which was evaluated as the ratio of the first arrival time at the reversal allele to the origination time, was double the selective advantage of the reversal allele compared with the optimal allele in the strong selection region, where the fitness parameter, k, is much larger than the critical fitness parameter, kc. It was found that the crossing time in a finite population for r > 0 and k ≫ kc scaled as a power law in the fitness parameter, with a scaling exponent similar to that of the crossing time in an infinite population for r = 0, and that the critical fitness parameter decreased with increasing sequence length at a fixed population size.
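
    The reported result that the fixation probability is roughly twice the selective advantage in the strong-selection regime can be checked against a simple simulation. The sketch below uses a two-allele haploid Wright-Fisher model (a simplification of the multiple-alleles setting of the paper) with an invented population size and selection coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)

def fixation_probability(N, s, n_runs=20000):
    """Monte Carlo estimate of the fixation probability of a single copy of an
    allele with selective advantage s in a haploid Wright-Fisher population of
    size N (two-allele simplification of the multiple-alleles model)."""
    fixed = 0
    for _ in range(n_runs):
        i = 1                                          # one mutant copy initially
        while 0 < i < N:
            p = i * (1 + s) / (i * (1 + s) + (N - i))  # selection
            i = rng.binomial(N, p)                     # genetic drift (resampling)
        fixed += (i == N)
    return fixed / n_runs

N, s = 1000, 0.02
print("simulated fixation probability:", fixation_probability(N, s))
print("classical strong-selection approximation 2s:", 2 * s)
```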

  20. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    PubMed

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis closely follow those used for the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single-collaborator and multicollaborative study examples are given. PMID:22468371
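
    Because POI is simply the observed proportion of replicates returning "Identified", the basic computation is a binomial proportion with an interval estimate. The sketch below, with hypothetical replicate counts, uses a Wilson score interval; the report's full procedures (response curves, performance requirements, collaborative-study analysis) go well beyond this.

```python
from math import sqrt

def poi_with_wilson_ci(identified, replicates, z=1.96):
    """Probability of identification (POI) as the observed proportion of
    replicates returning 'Identified', with a Wilson score confidence interval."""
    p = identified / replicates
    denom = 1 + z**2 / replicates
    centre = (p + z**2 / (2 * replicates)) / denom
    half = z * sqrt(p * (1 - p) / replicates + z**2 / (4 * replicates**2)) / denom
    return p, (centre - half, centre + half)

# Hypothetical validation data: 11 of 12 replicates identified at a given
# concentration of target material.
print(poi_with_wilson_ci(11, 12))
```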

  1. Characteristics of the probability function for three random-walk models of reaction-diffusion processes

    NASA Astrophysics Data System (ADS)

    Musho, Matthew K.; Kozak, John J.

    1984-10-01

    A method is presented for calculating exactly the relative width (σ²)^(1/2)/⟨n⟩, the skewness γ1, and the kurtosis γ2 characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step length) walks on the same d-dimensional lattice; and, the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P. A. Politowicz and J. J. Kozak, Phys. Rev. B 28, 5549 (1983)], is based on the use of group theoretic arguments within the framework of the theory of finite Markov processes. The approach allows the separate effects of geometry (system size N, dimensionality d, and valency ν), of the governing potential and of the medium temperature to be assessed and their respective influence on (σ²)^(1/2)/⟨n⟩, γ1, and γ2 to be studied quantitatively. We determine the classes of potential functions and the regimes of temperature for which allowing variable-length jumps or admitting a bias in the site-to-site trajectory of the walker produces results which are significantly different (both quantitatively and qualitatively) from those calculated assuming only unbiased, nearest-neighbor random walks. Finally, we demonstrate that the approach provides a method for determining a continuous probability (density) distribution function consistent with the numerical data on (σ²)^(1/2)/⟨n⟩, γ1, and γ2 for the processes described above. In particular we show that the first of the above reaction
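
    The quantities characterised in this record (relative width, skewness γ1, and kurtosis γ2 of the walk-length distribution) can be illustrated with a brute-force simulation of the simplest of the three models, an unbiased nearest-neighbour walk to a single deep trap on a periodic lattice. The lattice size and walk count below are arbitrary, and the cited work computes these moments exactly from finite Markov-process theory rather than by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(3)

def walk_length_moments(L=7, n_walks=5000):
    """Monte Carlo estimate of the mean walk length <n>, relative width
    (sigma^2)^(1/2)/<n>, skewness gamma_1, and excess kurtosis gamma_2 for an
    unbiased nearest-neighbour walker on an L x L periodic lattice with a
    single deep trap at the origin."""
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    steps = np.empty(n_walks)
    for w in range(n_walks):
        x, y = 0, 0
        while (x, y) == (0, 0):                  # random non-trap start site
            x, y = rng.integers(L), rng.integers(L)
        n = 0
        while (x, y) != (0, 0):                  # walk until trapped
            dx, dy = moves[rng.integers(4)]
            x, y = (x + dx) % L, (y + dy) % L
            n += 1
        steps[w] = n
    mean, sigma = steps.mean(), steps.std()
    rel_width = sigma / mean
    gamma1 = np.mean((steps - mean) ** 3) / sigma**3
    gamma2 = np.mean((steps - mean) ** 4) / sigma**4 - 3.0
    return mean, rel_width, gamma1, gamma2

print(walk_length_moments())
```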

  2. TRANSITION PROBABILITIES FOR STUDENT-TEACHER POPULATION GROWTH MODEL (DYNAMOD II).

    ERIC Educational Resources Information Center

    ZINTER, JUDITH R.

    THIS NOTE PRESENTS THE TRANSITION PROBABILITIES CURRENTLY IN USE IN DYNAMOD II. THE ESTIMATING PROCEDURES USED TO DERIVE THESE PROBABILITIES WERE DISCUSSED IN THESE RELATED DOCUMENTS--EA 001 016, EA 001 017, EA 001 018, AND EA 001 063. THE TRANSITION PROBABILITIES FOR FOUR SEX-RACE GROUPS ARE SHOWN ALONG WITH THE DONOR-RECEIVER CODES TO WHICH THEY…

  3. Global climate change model natural climate variation: Paleoclimate data base, probabilities and astronomic predictors

    SciTech Connect

    Kukla, G.; Gavin, J.

    1994-05-01

    This report was prepared at the Lamont-Doherty Geological Observatory of Columbia University at Palisades, New York, under subcontract to Pacific Northwest Laboratory (PNL); it is part of a larger project of global climate studies which supports site characterization work required for the selection of a potential high-level nuclear waste repository and forms part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work under the PASS Program is currently focusing on the proposed site at Yucca Mountain, Nevada, and is under the overall direction of the Yucca Mountain Project Office, US Department of Energy, Las Vegas, Nevada. The final results of the PNL project will provide input to global atmospheric models designed to test specific climate scenarios which will be used in the site-specific modeling work of others. The primary purpose of the data bases compiled and of the astronomic predictive models is to aid in the estimation of the probabilities of future climate states. The results will be used by two other teams working on the global climate study under contract to PNL: they are located at the University of Maine in Orono, Maine, and the Applied Research Corporation in College Station, Texas. This report presents the results of the third year's work on the global climate change models and the data bases describing past climates.

  4. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
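
    The BPT distribution used in this model is specified entirely by μ and α, so the hazard behaviour and conditional event probabilities described above can be reproduced numerically. The sketch below hand-codes the BPT density and a crude numerical CDF; the mean recurrence time and the elapsed/forecast windows are hypothetical and are not the Parkfield values.

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (BPT) density with mean mu and aperiodicity alpha."""
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu) ** 2 / (2 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha, n_grid=200000):
    """Numerical CDF of the BPT distribution (crude cumulative integration)."""
    grid = np.linspace(1e-6, 50 * mu, n_grid)
    cdf = np.cumsum(bpt_pdf(grid, mu, alpha)) * (grid[1] - grid[0])
    return np.interp(t, grid, cdf)

mu, alpha = 25.0, 0.5   # hypothetical mean recurrence time (years), generic aperiodicity

# The hazard (instantaneous failure rate of survivors) approaches ~2/mu
# well beyond mu, consistent with the value quoted in the abstract.
t = 2 * mu
hazard = bpt_pdf(t, mu, alpha) / (1 - bpt_cdf(t, mu, alpha))
print("hazard at 2*mu:", hazard, " 2/mu:", 2 / mu)

# Conditional probability of an event in the next 30 years given 20 quiet years.
t0, dt = 20.0, 30.0
p_cond = (bpt_cdf(t0 + dt, mu, alpha) - bpt_cdf(t0, mu, alpha)) / (1 - bpt_cdf(t0, mu, alpha))
print("conditional 30-year probability:", p_cond)
```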

  5. 3D model retrieval using probability density-based shape descriptors.

    PubMed

    Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis

    2009-06-01

    We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As demonstrated by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories. PMID:19372614
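
    A toy version of the density-based descriptor is sketched below: estimate the multivariate density of local surface features with KDE, sample it at a fixed set of target points, and compare the resulting vectors. The local features, target grid, bandwidth, and the use of scikit-learn's KernelDensity (rather than the fast-Gauss-transform-accelerated KDE of the paper) are all stand-ins.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(4)

def density_descriptor(local_features, targets, bandwidth=0.2):
    """Sample the estimated multivariate density of local surface features at a
    fixed set of target points; the resulting vector is the shape descriptor."""
    kde = KernelDensity(kernel='gaussian', bandwidth=bandwidth).fit(local_features)
    d = np.exp(kde.score_samples(targets))
    return d / d.sum()                      # normalise to a discrete pdf

# Fixed grid of target points shared by all models (here 8^3 points in 3D).
targets = np.column_stack([g.ravel() for g in
                           np.meshgrid(*[np.linspace(-2, 2, 8)] * 3)])

# Hypothetical 3-dimensional local features sampled on two model surfaces.
model_a = rng.normal(size=(1500, 3))
model_b = rng.normal(size=(1500, 3)) * 1.3

desc_a = density_descriptor(model_a, targets)
desc_b = density_descriptor(model_b, targets)
print("L1 dissimilarity:", np.abs(desc_a - desc_b).sum())
```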

  6. Modeling Longitudinal Data Containing Non-Normal Within Subject Errors

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan; Glenn, Nancy L.

    2013-01-01

    The mission of the National Aeronautics and Space Administration’s (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and result from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes and possibly treatments are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) by Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, and presents statistical evidence indicating which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including the science, technology, engineering, and mathematics fields.
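
    Of the four methods listed, quantile regression is the most direct to sketch: it models conditional quantiles and so does not rely on normally distributed within-subject errors. The example below fits a median regression to hypothetical longitudinal measurements with skewed errors using statsmodels; unlike the lqmm approach of Geraci and Bottai, it omits subject-level random effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Hypothetical longitudinal data: 15 subjects measured at 6 follow-up times,
# with skewed (non-normal) within-subject errors.
subjects, times = 15, np.arange(6)
records = [
    {"subject": s, "time": t,
     "y": 10 + 0.8 * t + rng.standard_gamma(2.0)}   # skewed error term
    for s in range(subjects) for t in times
]
df = pd.DataFrame(records)

# Median (0.5-quantile) regression of the outcome on follow-up time; unlike
# mean regression it does not assume normally distributed errors.
median_fit = smf.quantreg("y ~ time", df).fit(q=0.5)
print(median_fit.params)
```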

  7. Multivariate Models for Normal and Binary Responses in Intervention Studies

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen

    2016-01-01

    Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…

  8. Complication Probability Models for Radiation-Induced Heart Valvular Dysfunction: Do Heart-Lung Interactions Play a Role?

    PubMed Central

    Cella, Laura; Palma, Giuseppe; Deasy, Joseph O.; Oh, Jung Hun; Liuzzi, Raffaele; D’Avino, Vittoria; Conson, Manuel; Pugliese, Novella; Picardi, Marco; Salvatore, Marco; Pacelli, Roberto

    2014-01-01

    Purpose The purpose of this study is to compare different normal tissue complication probability (NTCP) models for predicting radiation-induced valvular dysfunction (RVD) following thoracic irradiation. Methods All patients from our institutional Hodgkin lymphoma survivors database with analyzable datasets were included (n = 90). All patients were treated with three-dimensional conformal radiotherapy with a median total dose of 32 Gy. The cardiac toxicity profile was available for each patient. Heart and lung dose-volume histograms (DVHs) were extracted and both organs were considered for Lyman-Kutcher-Burman (LKB) and Relative Seriality (RS) NTCP model fitting using maximum likelihood estimation. Bootstrap refitting was used to test the robustness of the model fit. Model performance was estimated using the area under the receiver operating characteristic curve (AUC). Results Using only heart-DVHs, parameter estimates were, for the LKB model: D50 = 32.8 Gy, n = 0.16 and m = 0.67; and for the RS model: D50 = 32.4 Gy, s = 0.99 and γ = 0.42. AUC values were 0.67 for LKB and 0.66 for RS. Similar performance was obtained for models using only lung-DVHs (LKB: D50 = 33.2 Gy, n = 0.01, m = 0.19, AUC = 0.68; RS: D50 = 24.4 Gy, s = 0.99, γ = 2.12, AUC = 0.66). Bootstrap results showed that the parameter fits for lung-LKB were extremely robust. A combined heart-lung LKB model was also tested and showed a minor improvement (AUC = 0.70). However, the best performance was obtained using the previously determined multivariate regression model including maximum heart dose with increasing risk for larger heart and smaller lung volumes (AUC = 0.82). Conclusions The risk of radiation-induced valvular disease cannot be modeled using NTCP models based only on heart dose-volume distribution. A predictive model with an improved performance can be obtained but requires the inclusion of heart and lung volume terms
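
    For reference, the LKB model named above reduces a dose-volume histogram to a generalised equivalent uniform dose, gEUD = (Σ v_i D_i^(1/n))^n, and maps it to a complication probability through a probit response, NTCP = Φ((gEUD − D50)/(m·D50)). The sketch below evaluates this with the heart-only parameter estimates quoted in the abstract and a hypothetical differential DVH; it illustrates the model form only and is not the authors' fitting code.

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses, volumes, d50, n, m):
    """Lyman-Kutcher-Burman NTCP: generalised EUD of a differential DVH
    followed by a probit dose-response."""
    v = np.asarray(volumes, float)
    v = v / v.sum()                                   # fractional volumes
    geud = (v * np.asarray(doses, float) ** (1.0 / n)).sum() ** n
    t = (geud - d50) / (m * d50)
    return 0.5 * (1 + erf(t / sqrt(2)))               # standard normal CDF

# Hypothetical differential heart DVH (dose-bin centres in Gy, relative volumes),
# evaluated with the heart-only LKB parameters reported above.
doses = [5, 15, 25, 35]
volumes = [0.4, 0.3, 0.2, 0.1]
print("NTCP:", lkb_ntcp(doses, volumes, d50=32.8, n=0.16, m=0.67))
```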

  9. Determining probability distributions of parameter performances for time-series model calibration: A river system trial

    NASA Astrophysics Data System (ADS)

    Kim, Shaun Sang Ho; Hughes, Justin Douglas; Chen, Jie; Dutta, Dushmanta; Vaze, Jai

    2015-11-01

    A calibration method is presented that uses a sub-period resampling method to estimate probability distributions of performance for different parameter sets. Where conventional calibration methods implicitly identify the parameterisations that perform best on average, the new method looks at the consistency of performance during sub-periods. The method is implemented with the conceptual river reach algorithms within the Australian Water Resources Assessments River (AWRA-R) model in the Murray-Darling Basin, Australia. The new method is tested for 192 reaches in a cross-validation scheme and results are compared to a traditional split-sample calibration-validation implementation. This is done to evaluate the new technique's ability to predict daily streamflow outside the calibration period. The new calibration method produced parameterisations that performed better in validation periods than optimum calibration parameter sets for 103 reaches and produced the same parameterisations for 35 reaches. The method showed a statistically significant improvement in predictive performance and potentially provides more rational flux terms than traditional split-sample calibration methods. Particular strengths of the proposed calibration method are that it avoids extra weighting towards rare periods of good agreement and prevents compensating biases through time. The method can be used as a diagnostic tool to evaluate the stochasticity of modelled systems and to determine suitable model structures for different time-series models. Although the method is demonstrated using a hydrological model, it is not limited to the field of hydrology and could be adopted for many different time-series modelling applications.
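
    The key idea, scoring a parameter set by its distribution of performance over resampled sub-periods rather than by a single full-period objective, can be sketched as below. The "reach model", objective function (Nash-Sutcliffe efficiency), window length, and candidate parameter sets are all hypothetical and far simpler than the AWRA-R setup.

```python
import numpy as np

rng = np.random.default_rng(6)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs observed streamflow."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def subperiod_scores(simulate, params, obs, n_subperiods=50, window=365):
    """Distribution of performance for one parameter set over randomly
    resampled sub-periods, instead of a single full-period score."""
    sim = simulate(params)
    starts = rng.integers(0, obs.size - window, n_subperiods)
    return np.array([nse(sim[s:s + window], obs[s:s + window]) for s in starts])

# Hypothetical 'reach model': observed flow with parameter-dependent bias and noise.
obs = np.abs(rng.normal(10, 3, 5 * 365))
simulate = lambda p: obs * p[0] + rng.normal(0, p[1], obs.size)

candidates = [(1.0, 1.0), (1.05, 0.5), (0.95, 2.0)]
for p in candidates:
    scores = subperiod_scores(simulate, p, obs)
    # Prefer parameter sets that perform consistently well across sub-periods,
    # e.g. judged by their lower-quartile performance rather than the mean.
    print(p, "median NSE:", np.median(scores), "25th pct:", np.percentile(scores, 25))
```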

  10. The k-sample problem in a multi-state model and testing transition probability matrices.

    PubMed

    Tattar, Prabhanjan N; Vaman, H J

    2014-07-01

    The choice of multi-state models is natural in analysis of survival data, e.g., when the subjects in a study pass through different states like 'healthy', 'in a state of remission', 'relapse' or 'dead' in a health-related quality of life study. Competing risks is another common instance of the use of multi-state models. Statistical inference for such event history data can be carried out by assuming a stochastic process model. Under such a setting, comparison of the event history data generated by two different treatments calls for testing equality of the corresponding transition probability matrices. The present paper proposes a solution to this class of problems by assuming a non-homogeneous Markov process to describe the transitions among the health states. A class of test statistics is derived for comparison of k ≥ 2 treatments by using a 'weight process'. This class, in particular, yields generalisations of the log-rank, Gehan, Peto-Peto and Harrington-Fleming tests. For an intrinsic comparison of the treatments, the 'leave-one-out' jackknife method is employed for identifying influential observations. The proposed methods are then used to develop Kolmogorov-Smirnov type supremum tests corresponding to the various extended tests. To demonstrate the usefulness of the test procedures developed, a simulation study was carried out and an application to the Trial V data provided by the International Breast Cancer Study Group is discussed. PMID:23722306
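
    The weight-process construction can be illustrated in its simplest special case, a two-sample weighted log-rank test for right-censored data, where a constant weight recovers the standard log-rank test and other weight choices give Gehan- or Peto-Peto-type statistics. The sketch below is self-contained and uses invented survival data; the paper's tests compare full transition probability matrices of a non-homogeneous Markov process, which this does not attempt.

```python
import numpy as np
from scipy.stats import chi2

def weighted_logrank(time1, event1, time2, event2, weight=lambda t: 1.0):
    """Two-sample weighted log-rank test for right-censored data.  weight(t) = 1
    gives the standard log-rank test; other choices of the 'weight process'
    yield Gehan- or Peto-Peto-type tests."""
    times = np.concatenate([time1, time2])
    events = np.concatenate([event1, event2])
    groups = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])
    num, var = 0.0, 0.0
    for t in np.unique(times[events == 1]):          # distinct event times
        at_risk = times >= t
        n, n1 = at_risk.sum(), (at_risk & (groups == 0)).sum()
        d = ((times == t) & (events == 1)).sum()
        d1 = ((times == t) & (events == 1) & (groups == 0)).sum()
        w = weight(t)
        num += w * (d1 - d * n1 / n)                 # observed minus expected
        if n > 1:
            var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = num**2 / var
    return stat, chi2.sf(stat, df=1)                 # chi-square p-value, 1 df

# Hypothetical right-censored survival times (event = 1, censored = 0)
# for two treatment arms.
rng = np.random.default_rng(7)
t1, t2 = rng.exponential(10, 60), rng.exponential(14, 60)
c1, c2 = rng.exponential(20, 60), rng.exponential(20, 60)
time1, event1 = np.minimum(t1, c1), (t1 <= c1).astype(int)
time2, event2 = np.minimum(t2, c2), (t2 <= c2).astype(int)
print(weighted_logrank(time1, event1, time2, event2))
```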