Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. The performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: The LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields a model that is as easily interpretable as the stepwise model, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
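The selection-plus-validation pipeline described above can be sketched as follows; this is a toy illustration on synthetic data (the feature count, penalty strength, and the single informative predictor are assumptions of the sketch, not values from the study):

```python
# Sketch of LASSO-style variable selection for an NTCP model with a
# cross-validated performance estimate; all data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical candidate predictors; only the first carries signal.
X = rng.normal(size=(n, 6))
logit = 0.8 * X[:, 0] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# The L1 penalty shrinks uninformative coefficients toward exactly zero,
# which is what makes the resulting model easy to interpret.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(X, y)
print("coefficients:", np.round(lasso.coef_[0], 3))

# Cross-validation gives a fairer estimate of predictive power (AUC)
# than the apparent fit on the training data.
auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc").mean()
print("cross-validated AUC: %.2f" % auc)
```

Repeating the cross-validation over several random splits, as the abstract describes, would simply average `auc` over repetitions.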
A stochastic model for the normal tissue complication probability (NTCP) and applications.
Stocks, Theresa; Hillen, Thomas; Gong, Jiafen; Burger, Martin
2016-09-02
The normal tissue complication probability (NTCP) is a measure of the estimated side effects of a given radiation treatment schedule. Here we use a stochastic logistic birth-death process to define an organ-specific and patient-specific NTCP. We emphasize an asymptotic simplification which relates the NTCP to the solution of a logistic differential equation. This framework is based on simple modelling assumptions and prepares the ground for the use of the NTCP model in clinical practice. As an example, we consider side effects of prostate cancer brachytherapy such as increased urinary frequency, urinary retention and acute rectal dysfunction. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
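A deterministic caricature of this framework, under assumptions of my own (per-fraction cell kill followed by logistic regrowth, with the complication declared when the functional cell count falls below a tolerance level), might look like:

```python
# Toy deterministic analogue of a logistic-ODE-based NTCP: functional
# cells regrow logistically between fractions, each fraction kills a
# fixed fraction, and a complication occurs if the count drops below L.
# All parameter values below are invented for illustration.
import math

def logistic_step(n0, r, K, dt):
    """Closed-form solution of dn/dt = r n (1 - n/K) over time dt."""
    return K * n0 / (n0 + (K - n0) * math.exp(-r * dt))

def ntcp_deterministic(n_start, surviving_fraction, fractions, r, K, L):
    """Return 1.0 if the cell count ever falls below tolerance L, else 0.0."""
    n = n_start
    for _ in range(fractions):
        n *= surviving_fraction          # instantaneous cell kill per fraction
        n = logistic_step(n, r, K, 1.0)  # one day of logistic regrowth
        if n < L:
            return 1.0
    return 0.0

# 30 fractions with 50% of normal cells surviving each fraction:
print(ntcp_deterministic(1e6, 0.5, 30, r=0.3, K=1e6, L=1e4))
```

The stochastic birth-death process of the paper would replace the 0/1 indicator with a genuine probability that the tolerance level is crossed.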
Normal probability plots with confidence.
Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang
2015-01-01
Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods.
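One simulation-based way to construct such simultaneous intervals (an illustration of the idea, not the specific construction derived in the paper) is to widen pointwise quantile bands of sorted standard normal samples until the joint coverage reaches 1 - α:

```python
# Monte Carlo sketch: find bands that contain ALL n sorted, standardized
# normal order statistics simultaneously with probability ~1 - alpha.
import numpy as np

def simultaneous_bounds(n, alpha=0.05, reps=20000, seed=1):
    rng = np.random.default_rng(seed)
    sims = np.sort(rng.standard_normal((reps, n)), axis=1)
    # Start from Bonferroni-width pointwise intervals (joint coverage is
    # then at least 1 - alpha) and narrow them while coverage holds.
    for a in np.linspace(alpha / (2 * n), alpha, 50):
        lo = np.quantile(sims, a / 2, axis=0)
        hi = np.quantile(sims, 1 - a / 2, axis=0)
        cover = np.mean(np.all((sims >= lo) & (sims <= hi), axis=1))
        if cover < 1 - alpha:
            break
        best = (lo, hi)
    return best

n = 30
lo, hi = simultaneous_bounds(n)
# Standardize an observed sample and check it against the bands.
x = np.sort(np.random.default_rng(7).standard_normal(n))
x = (x - x.mean()) / x.std(ddof=1)
print("all points inside:", bool(np.all((x >= lo) & (x <= hi))))
```

Plotting `lo` and `hi` against the normal order-statistic medians gives the augmented probability plot: rejection of normality corresponds to any point escaping its band.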
Rose, Brent S.; Aydogan, Bulent; Liang, Yun; Yeginer, Mete; Hasselle, Michael D.; Dandekar, Virag; Bafana, Rounak; Yashar, Catheryn M.; Mundt, Arno J.; Roeske, John C.; Mell, Loren K.
2011-03-01
Purpose: To test the hypothesis that increased pelvic bone marrow (BM) irradiation is associated with increased hematologic toxicity (HT) in cervical cancer patients undergoing chemoradiotherapy and to develop a normal tissue complication probability (NTCP) model for HT. Methods and Materials: We tested associations between hematologic nadirs during chemoradiotherapy and the volume of BM receiving ≥10 and ≥20 Gy (V10 and V20) using a previously developed linear regression model. The validation cohort consisted of 44 cervical cancer patients treated with concurrent cisplatin and pelvic radiotherapy. Subsequently, these data were pooled with data from 37 identically treated patients from a previous study, forming a cohort of 81 patients for NTCP analysis. Generalized linear modeling was used to test associations between hematologic nadirs and dosimetric parameters, adjusting for body mass index. Receiver operating characteristic curves were used to derive optimal dosimetric planning constraints. Results: In the validation cohort, significant negative correlations were observed between white blood cell count nadir and V10 (regression coefficient (β) = -0.060, p = 0.009) and V20 (β = -0.044, p = 0.010). In the combined cohort, the (adjusted) β estimates for log(white blood cell) vs. V10 and V20 were -0.022 (p = 0.025) and -0.021 (p = 0.002), respectively. Patients with V10 ≥ 95% were more likely to experience Grade ≥3 leukopenia (68.8% vs. 24.6%, p < 0.001), as were patients with V20 > 76% (57.7% vs. 21.8%, p = 0.001). Conclusions: These findings support the hypothesis that HT increases with increasing pelvic BM volume irradiated. Efforts to maintain V10 < 95% and V20 < 76% may reduce HT.
Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto
2013-10-01
Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chamber V30 increases. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study provides statistical evidence of an indirect effect of lung size on radiation-induced heart toxicity.
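The resampling-based variable selection described above can be sketched on toy data; the predictors and effect sizes below are invented, and LASSO-penalized refits stand in for whatever selection rule the study used at each bootstrap iteration:

```python
# Sketch of bootstrap stability selection for a multivariate logistic
# NTCP model: refit on resamples and keep variables selected most often.
# All data and effect sizes here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 120
X = rng.normal(size=(n, 4))  # e.g. Dmax, heart volume, lung volume, noise
y = (rng.random(n) < 1 / (1 + np.exp(-(1.2 * X[:, 0] + 0.8 * X[:, 1])))).astype(int)

counts = np.zeros(4)
for _ in range(200):                      # bootstrap resamples
    idx = rng.integers(0, n, n)
    fit = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    fit.fit(X[idx], y[idx])
    counts += (np.abs(fit.coef_[0]) > 1e-8)  # variable kept in this refit?

# Variables selected in most resamples enter the final NTCP model.
freq = counts / 200
print("selection frequency per variable:", np.round(freq, 2))
```

The model order (3 variables in the abstract) then corresponds to the number of predictors whose selection frequency clears a chosen threshold.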
Chow, James C L; Markel, Daniel; Jiang, Runqing
2010-09-01
The Gaussian error function was first used and verified in normal tissue complication probability (NTCP) calculation to reduce the dose-volume histogram (DVH) database by replacing the dose-volume bin set with the error function parameters for the differential DVH (dDVH). Seven-beam intensity modulated radiation therapy (IMRT) treatment planning was performed in three patients with small (40 cm3), medium (53 cm3), and large (87 cm3) prostate volume, selected from a group of 20 patients. Rectal dDVH varying with the interfraction prostate motion along the anterior-posterior direction was determined by the treatment planning system (TPS) and modeled by the Gaussian error function model for the three patients. Rectal NTCP was then calculated based on the routine dose-volume bin set of the rectum by the TPS and the error function model. The variations in the rectal NTCP with the prostate motion and volume were studied. For the ranges of prostate motion of 8-2, 4-8, and 4-3 mm along the anterior-posterior direction for the small, medium, and large prostate patient, the rectal NTCP was found to vary in the ranges of 4.6%-4.8%, 4.5%-4.7%, and 4.6%-4.7%, respectively. The deviation of the rectal NTCP calculated by the TPS and the Gaussian error function model was within ±0.1%. The Gaussian error function was successfully applied in the NTCP calculation by replacing the dose-volume bin set with the model parameters. This provides an option for NTCP calculation using a reduced-size dose-volume database. Moreover, the rectal NTCP was found to vary by about ±0.2% with the interfraction prostate motion along the anterior-posterior direction in the radiation treatment. The dependence of the variation in the rectal NTCP with the interfraction prostate motion on the prostate volume was found to be more significant in the patient with the larger prostate.
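The core idea, replacing the dose-volume bin set with two error-function parameters, can be illustrated with a toy cumulative DVH (the μ and σ values here are invented, not the patients'):

```python
# Sketch: if the differential DVH is Gaussian, the cumulative DVH is an
# error-function curve, so the whole bin set reduces to (mu, sigma).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def cdvh(d, mu, sigma):
    """Fraction of volume receiving at least dose d for a Gaussian dDVH."""
    return 0.5 * (1.0 - erf((d - mu) / (sigma * np.sqrt(2.0))))

# Toy "measured" DVH generated from mu = 40 Gy, sigma = 12 Gy plus noise.
dose = np.linspace(0, 80, 41)
rng = np.random.default_rng(3)
vol = cdvh(dose, 40.0, 12.0) + rng.normal(0, 0.005, dose.size)

(mu_fit, sigma_fit), _ = curve_fit(cdvh, dose, vol, p0=[30.0, 10.0])
print("fitted mu = %.1f Gy, sigma = %.1f Gy" % (mu_fit, sigma_fit))
```

Any DVH-based NTCP formula can then be evaluated from the fitted `(mu, sigma)` pair instead of the full set of dose-volume bins.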
Image-based modeling of normal tissue complication probability for radiation therapy.
Deasy, Joseph O; El Naqa, Issam
2008-01-01
We therefore conclude that NTCP models, at least in some cases, are definitely tools, not toys. However, like any good tool, they can be abused and could in fact lead to injury with misuse. In particular, we have pointed out that it is risky indeed to apply NTCP models to dose distributions that are very dissimilar to the dose distributions for which the NTCP model has been validated. While this warning is somewhat fuzzy, it is clear that more research needs to be done in this area. We believe that, ultimately, for NTCP models to be used routinely in treatment planning in a safe and effective way, the actual application will need to be closely related to the characteristics of the data sets and the uncertainties of the treatment parameters in the models under consideration. Another sign that NTCP models are becoming tools rather than toys is that there is often good agreement as to which direction of change reduces the risk of a particular complication endpoint. Thus, for example, mean dose to normal lung almost always comes out as being the most predictive, or nearly most predictive, factor in the analysis of radiation pneumonitis.
Cheraghi, Susan; Nikoofar, Alireza; Bakhshandeh, Mohsen; Khoei, Samideh; Farahani, Saeid; Abdollahi, Hamid; Mahdavi, Seied Rabi
2017-10-02
The aim of this study was to generate dose-response curves with six normal tissue complication probability (NTCP) models and to rank the models for prediction of radiation-induced sensorineural hearing loss (SNHL) caused by head and neck radiation therapy (RT). Pure tone audiometry (PTA) was performed on 70 ears of patients for 12 months after the completion of radiation therapy. SNHL was defined as a threshold shift ≥15 dB at 2 contiguous frequencies according to the common toxicity criteria for adverse events scoring system. The models evaluated were the Lyman, Logit, Mean Dose, Relative Seriality, Individual Critical Volume, and Population Critical Volume models. Maximum likelihood analysis was used to fit the models to the experimental data. The appropriateness of the fit was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. The dose for a 50% complication rate (D50) was 51-60 Gy. Three of the examined models fitted the clinical data well within a 95% confidence interval. The relative seriality model was ranked as the best model for prediction of radiation-induced SNHL. The cochlea behaves differently under different NTCP models; this may be due to its small size.
Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan
2013-02-01
Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue
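For reference, a minimal sketch of the LEUD variant named above: the DVH is reduced to a generalized equivalent uniform dose and fed into the Lyman probit curve. The TD50 of 44 Gy echoes the abstract's D50 estimate, but the toy DVH and the m and n values are placeholders, not the paper's fitted parameters:

```python
# Hedged sketch of the Lyman-EUD (LEUD) NTCP model; parameter values
# are placeholders except TD50 ~ 44 Gy, which mirrors the abstract.
import math

def eud(doses, volumes, n):
    """Generalized EUD: (sum_i v_i * D_i**(1/n))**n, v_i fractional."""
    total = sum(volumes)
    return sum(v / total * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lyman_ntcp(eud_gy, td50, m):
    """Probit NTCP: Phi((EUD - TD50) / (m * TD50))."""
    t = (eud_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Toy two-bin thyroid DVH: half the gland at 30 Gy, half at 50 Gy.
e = eud([30.0, 50.0], [0.5, 0.5], n=1.0)   # n = 1 -> EUD is the mean dose
print("EUD = %.1f Gy, NTCP = %.2f" % (e, lyman_ntcp(e, td50=44.0, m=0.3)))
```

With n = 1 the EUD collapses to the mean dose (40 Gy here), which connects the LEUD formulation to the mean dose model that the abstract ranks best.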
Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.
2012-03-01
Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, rectal bleeding fits had the highest AUC (0.77), versus 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions: Comparable
2013-01-01
Background: The risk of radiation-induced gastrointestinal (GI) complications is affected by several factors other than the dose to the rectum, such as patient characteristics, hormonal or antihypertensive therapy, and acute rectal toxicity. The purpose of this work is to study clinical and dosimetric parameters impacting late GI toxicity after prostate external beam radiotherapy (RT) and to establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced GI complications. Methods: A total of 57 men who had undergone definitive RT for prostate cancer were evaluated for GI events classified using the RTOG/EORTC scoring system. Their median age was 73 years (range 53–85). The patients were assessed for GI toxicity before, during, and periodically after RT completion. Several clinical variables along with rectum dose-volume parameters (Vx) were collected, and their correlation to GI toxicity was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling techniques was applied to select the model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Results: At a median follow-up of 30 months, 37% (21/57) of patients developed G1-2 acute GI events, while 33% (19/57) were diagnosed with G1-2 late GI events. An NTCP model for late mild/moderate GI toxicity based on three variables including V65 (OR = 1.03), antihypertensive and/or anticoagulant (AH/AC) drugs (OR = 0.24), and acute GI toxicity (OR = 4.3) was selected as the most predictive model (Rs = 0.47, p < 0.001; AUC = 0.79). This three-variable model outperforms the logistic model based on V65 only (Rs = 0.28, p < 0.001; AUC = 0.69). Conclusions: We propose a logistic NTCP model for late GI toxicity considering not only rectal irradiation dose but also clinical patient-specific factors. Accordingly, the risk of G1
Lee, Tsair-Fwu; Chao, Pei-Ju; Chang, Liyun; Ting, Hui-Min; Huang, Yu-Jie
2015-01-01
Symptomatic radiation pneumonitis (SRP), which decreases quality of life (QoL), is the most common pulmonary complication in patients receiving breast irradiation. If it occurs, acute SRP usually develops 4-12 weeks after completion of radiotherapy and presents as a dry cough, dyspnea and low-grade fever. If the incidence of SRP is reduced, not only the QoL but also the compliance of breast cancer patients may be improved. Therefore, we investigated the incidence of SRP in breast cancer patients after hybrid intensity modulated radiotherapy (IMRT) to find the risk factors, which may have important effects on the risk of radiation-induced complications. In total, 93 patients with breast cancer were evaluated. The final endpoint for acute SRP was defined as those who had density changes together with symptoms, as measured using computed tomography. The risk factors for a multivariate normal tissue complication probability model of SRP were determined using the least absolute shrinkage and selection operator (LASSO) technique. Five risk factors were selected using LASSO: the percentage of the ipsilateral lung volume that received more than 20 Gy (IV20), energy, age, body mass index (BMI) and T stage. Positive associations were demonstrated among the incidence of SRP, IV20, and patient age. Energy, BMI and T stage showed a negative association with the incidence of SRP. Our analyses indicate that the risk of SRP following hybrid IMRT in elderly or low-BMI breast cancer patients is increased unless the percentage of the ipsilateral lung volume receiving more than 20 Gy is kept below a limit. We suggest defining a dose-volume percentage constraint of IV20 < 37% (or AIV20 < 310 cc) for the irradiated ipsilateral lung in radiation therapy treatment planning to maintain the incidence of SRP below 20%, and paying attention to sequelae especially in elderly or low-BMI breast cancer patients (AIV20: the absolute ipsilateral lung volume, in cc, that received more than 20 Gy).
Probability state modeling theory.
Bagwell, C Bruce; Hunsberger, Benjamin C; Herbert, Donald J; Munson, Mark E; Hill, Beth L; Bray, Chris M; Preffer, Frederic I
2015-07-01
As the technology of cytometry matures, there is mounting pressure to address two major issues with data analyses. The first issue is to develop new analysis methods for high-dimensional data that can directly reveal and quantify important characteristics associated with complex cellular biology. The other issue is to replace subjective and inaccurate gating with automated methods that objectively define subpopulations and account for population overlap due to measurement uncertainty. Probability state modeling (PSM) is a technique that addresses both of these issues. The theory and important algorithms associated with PSM are presented along with simple examples and general strategies for autonomous analyses. PSM is leveraged to better understand B-cell ontogeny in bone marrow in a companion Cytometry Part B manuscript. Three short relevant videos are available in the online supporting information for both of these papers. PSM avoids the dimensionality barrier normally associated with high-dimensionality modeling by using broadened quantile functions instead of frequency functions to represent the modulation of cellular epitopes as cells differentiate. Since modeling programs ultimately minimize or maximize one or more objective functions, they are particularly amenable to automation and, therefore, represent a viable alternative to subjective and inaccurate gating approaches.
Xu ZhiYong; Liang Shixiong; Zhu Ji; Zhu Xiaodong; Zhao Jiandong; Lu Haijie; Yang Yunli; Chen Long; Wang Anyu; Fu Xiaolong; Jiang Guoliang
2006-05-01
Purpose: To describe the probability of radiation-induced liver disease (RILD) by application of the Lyman-Kutcher-Burman normal tissue complication probability (NTCP) model for primary liver carcinoma (PLC) treated with hypofractionated three-dimensional conformal radiotherapy (3D-CRT). Methods and Materials: A total of 109 PLC patients treated by 3D-CRT were followed for RILD. Of these patients, 93 had liver cirrhosis of Child-Pugh Grade A, and 16 of Child-Pugh Grade B. The Michigan NTCP model was used to predict the probability of RILD, and then a modified Lyman NTCP model was generated for Child-Pugh A and Child-Pugh B patients by maximum likelihood analysis. Results: Of all patients, 17 developed RILD, of whom 8 were of Child-Pugh Grade A and 9 of Child-Pugh Grade B. The prediction of RILD by the Michigan model was underestimated for PLC patients. The modified n, m, and TD50(1) were 1.1, 0.28, and 40.5 Gy and 0.7, 0.43, and 23 Gy for patients with Child-Pugh A and B, respectively, which yielded better estimations of RILD probability. The hepatic tolerable doses (TD5) would be a mean dose to normal liver (MDTNL) of 21 Gy and 6 Gy, respectively, for Child-Pugh A and B patients. Conclusions: The Michigan model was probably not fit to predict RILD in PLC patients. A modified Lyman NTCP model for RILD is recommended.
Liu, Mitchell; Moiseenko, Vitali; Agranovich, Alexander; Karvat, Anand; Kwan, Winkle; Saleh, Ziad H; Apte, Aditya A; Deasy, Joseph O
2010-10-01
Validating a predictive model for late rectal bleeding following external beam treatment for prostate cancer would enable safer treatments or dose escalation. We tested the normal tissue complication probability (NTCP) model recommended in the recent QUANTEC review (quantitative analysis of normal tissue effects in the clinic). One hundred sixty-one prostate cancer patients were treated with 3D conformal radiotherapy at the British Columbia Cancer Agency in a prospective protocol. The total prescription dose for all patients was 74 Gy, delivered in 2 Gy/fraction. 159 3D treatment planning datasets were available for analysis. Rectal dose-volume histograms were extracted and fitted to a Lyman-Kutcher-Burman NTCP model. Late rectal bleeding (> grade 2) was observed in 12/159 patients (7.5%). Multivariate logistic regression with dose-volume parameters (V50, V60, V70, etc.) was non-significant. Among clinical variables, only age was significant on a Kaplan-Meier log-rank test (p = 0.007, with an optimal cut point of 77 years). Best-fit Lyman-Kutcher-Burman model parameters (with 95% confidence intervals) were: n = 0.068 (0.01, +infinity); m = 0.14 (0.0, 0.86); and TD50 = 81 (27, 136) Gy. The peak values fall within the 95% QUANTEC confidence intervals. On this dataset, both models had only modest ability to predict complications: the best-fit model had a Spearman's rank correlation coefficient of rs = 0.099 (p = 0.11) and an area under the receiver operating characteristic curve (AUC) of 0.62; the QUANTEC model had rs = 0.096 (p = 0.11) and a corresponding AUC of 0.61. Although the QUANTEC model consistently predicted higher NTCP values, it could not be rejected according to the χ² test (p = 0.44). Observed complications, and best-fit parameter estimates, were consistent with the QUANTEC-preferred NTCP model. However, predictive power was low, at least partly because the rectal dose distribution characteristics do not vary greatly within this patient population.
Robertson, John M.; Soehn, Matthias; Yan, Di
2010-05-01
Purpose: Understanding the dose-volume relationship of small bowel irradiation and severe acute diarrhea may help reduce the incidence of this side effect during adjuvant treatment for rectal cancer. Methods and Materials: Consecutive patients treated curatively for rectal cancer were reviewed, and the maximum grade of acute diarrhea was determined. The small bowel was outlined on the treatment planning CT scan, and a dose-volume histogram was calculated for the initial pelvic treatment (45 Gy). Logistic regression models were fitted for varying cutoff-dose levels from 5 to 45 Gy in 5-Gy increments. The model with the highest log-likelihood was used to develop a cutoff-dose normal tissue complication probability (NTCP) model. Results: There were a total of 152 patients (48% preoperative, 47% postoperative, 5% other), predominantly treated prone (95%) with a three-field technique (94%) and a protracted venous infusion of 5-fluorouracil (78%). Acute Grade 3 diarrhea occurred in 21%. The largest log-likelihood was found for the cutoff-dose logistic regression model with 15 Gy as the cutoff dose, although the models for 20 Gy and 25 Gy had similar significance. According to this model, highly significant correlations (p < 0.001) between small bowel volumes receiving at least 15 Gy and toxicity exist in the considered patient population. Similar findings applied to both the preoperatively (p = 0.001) and postoperatively irradiated groups (p = 0.001). Conclusion: The incidence of Grade 3 diarrhea was significantly correlated with the volume of small bowel receiving at least 15 Gy using a cutoff-dose NTCP model.
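The cutoff-dose scan can be sketched on synthetic data as follows; the 5-45 Gy grid in 5-Gy steps mirrors the abstract, while the patient data and the dose level driving toxicity are invented:

```python
# Sketch of a cutoff-dose search: fit toxicity ~ V(cutoff) for each
# candidate cutoff dose and keep the fit with the highest log-likelihood.
# All patient data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_pat = 150
base = rng.uniform(100, 400, n_pat)      # toy per-patient bowel volumes (cc)
cutoffs = list(range(5, 50, 5))          # 5-45 Gy in 5-Gy increments
# Volumes receiving >= c Gy shrink with c; noise decorrelates the levels.
V = {c: base * np.exp(-c / 25.0) * rng.uniform(0.8, 1.2, n_pat) for c in cutoffs}
# In this toy generation, toxicity is driven by V15.
p = 1 / (1 + np.exp(-(V[15] - V[15].mean()) / 40.0))
y = (rng.random(n_pat) < p).astype(int)

def loglik(model, x, y):
    pr = model.predict_proba(x)[:, 1]
    return float(np.sum(y * np.log(pr) + (1 - y) * np.log(1 - pr)))

best = max(
    cutoffs,
    key=lambda c: loglik(
        LogisticRegression(C=1e6).fit(V[c].reshape(-1, 1), y),
        V[c].reshape(-1, 1), y,
    ),
)
print("best cutoff dose: %d Gy" % best)
```

The winning cutoff then defines the single dose-volume predictor of the final NTCP model, as with V15 in the abstract.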
Bazan, Jose G.; Luxton, Gary; Kozak, Margaret M.; Anderson, Eric M.; Hancock, Steven L.; Kapp, Daniel S.; Kidd, Elizabeth A.; Koong, Albert C.; Chang, Daniel T.
2013-12-01
Purpose: To determine how chemotherapy agents affect radiation dose parameters that correlate with acute hematologic toxicity (HT) in patients treated with pelvic intensity modulated radiation therapy (P-IMRT) and concurrent chemotherapy. Methods and Materials: We assessed HT in 141 patients who received P-IMRT for anal, gynecologic, rectal, or prostate cancers, 95 of whom received concurrent chemotherapy. Patients were separated into 4 groups: mitomycin (MMC) + 5-fluorouracil (5FU, 37 of 141), platinum ± 5FU (Cis, 32 of 141), 5FU (26 of 141), and P-IMRT alone (46 of 141). The pelvic bone was contoured as a surrogate for pelvic bone marrow (PBM) and divided into subsites: ilium, lower pelvis, and lumbosacral spine (LSS). The volumes of each region receiving 5-40 Gy were calculated. The endpoint for HT was grade ≥3 (HT3+) leukopenia, neutropenia or thrombocytopenia. Normal tissue complication probability was calculated using the Lyman-Kutcher-Burman model. Logistic regression was used to analyze association between HT3+ and dosimetric parameters. Results: Twenty-six patients experienced HT3+: 10 of 37 (27%) MMC, 14 of 32 (44%) Cis, 2 of 26 (8%) 5FU, and 0 of 46 P-IMRT. PBM dosimetric parameters were correlated with HT3+ in the MMC group but not in the Cis group. LSS dosimetric parameters were well correlated with HT3+ in both the MMC and Cis groups. Constrained optimization (0
Bazan, Jose G.; Luxton, Gary; Mok, Edward C.; Koong, Albert C.; Chang, Daniel T.
2012-11-01
Purpose: To identify dosimetric parameters that correlate with acute hematologic toxicity (HT) in patients with squamous cell carcinoma of the anal canal treated with definitive chemoradiotherapy (CRT). Methods and Materials: We analyzed 33 patients receiving CRT. Pelvic bone (PBM) was contoured for each patient and divided into subsites: ilium, lower pelvis (LP), and lumbosacral spine (LSS). The volume of each region receiving at least 5, 10, 15, 20, 30, and 40 Gy was calculated. Endpoints included grade ≥3 HT (HT3+) and hematologic event (HE), defined as any grade ≥2 HT with a modification in chemotherapy dose. Normal tissue complication probability (NTCP) was evaluated with the Lyman-Kutcher-Burman (LKB) model. Logistic regression was used to test associations between HT and dosimetric/clinical parameters. Results: Nine patients experienced HT3+ and 15 patients experienced HE. Constrained optimization of the LKB model for HT3+ yielded the parameters m = 0.175, n = 1, and TD50 = 32 Gy. With this model, mean PBM doses of 25 Gy, 27.5 Gy, and 31 Gy result in a 10%, 20%, and 40% risk of HT3+, respectively. Compared with patients with a mean PBM dose of <30 Gy, patients with a mean PBM dose ≥30 Gy had a 14-fold increase in the odds of developing HT3+ (p = 0.005). Several low-dose radiation parameters (i.e., PBM-V10) were associated with the development of HT3+ and HE. No association was found with the ilium, LP, or clinical factors. Conclusions: LKB modeling confirms the expectation that PBM acts like a parallel organ, implying that the mean dose to the organ is a useful predictor for toxicity. Low-dose radiation to the PBM was also associated with clinically significant HT. Keeping the mean PBM dose <22.5 Gy and <25 Gy is associated with a 5% and 10% risk of HT, respectively.
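Because the fitted volume parameter is n = 1, the generalized EUD reduces to the mean organ dose, so the quoted dose-risk figures follow directly from the probit form of the LKB model (a sketch using the abstract's m and TD50, not the authors' code):

```python
from math import erf, sqrt

def lkb_ntcp_mean_dose(mean_dose, m=0.175, td50=32.0):
    """LKB NTCP for a parallel organ (n = 1): gEUD equals the mean dose, so
    NTCP = Phi((mean_dose - TD50) / (m * TD50))."""
    t = (mean_dose - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))
```

Evaluating at mean PBM doses of 25, 27.5 and 31 Gy gives roughly 0.11, 0.21 and 0.43, consistent with the 10%/20%/40% risks quoted in the abstract.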
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
Dale, E; Hellebust, T P; Skjønsberg, A; Høgberg, T; Olsen, D R
2000-07-01
To calculate the normal tissue complication probability (NTCP) of late radiation effects on the rectum and bladder from repetitive CT scans during fractionated high-dose-rate brachytherapy (HDRB) and external beam radiotherapy (EBRT) of the uterine cervix and compare the NTCP with the clinical frequency of late effects. Fourteen patients with cancer of the uterine cervix (Stage IIb-IVa) underwent 3-6 (mean, 4.9) CT scans in treatment position during their course of HDRB using a ring applicator with an Iridium stepping source. The rectal and bladder walls were delineated on the treatment-planning system, such that a constant wall volume independent of organ filling was achieved. Dose-volume histograms (DVH) of the rectal and bladder walls were acquired. A method of summing multiple DVHs accounting for variable dose per fraction was applied to the DVHs of HDRB and EBRT together with the Lyman-Kutcher NTCP model fitted to clinical dose-volume tolerance data from recent studies. The D(mean) of the DVH from EBRT was close to the D(max) for both the rectum and bladder, confirming that the DVH from EBRT corresponded with homogeneous whole-organ irradiation. The NTCP of the rectum was 19.7% (13.5%, 25.9%) (mean and 95% confidence interval), whereas the clinical frequency of late rectal sequelae (Grade 3-4, RTOG/EORTC) was 13% based on material from 200 patients. For the bladder the NTCP was 61.9% (46.8%, 76.9%) as compared to the clinical frequency of Grade 3-4 late effects of 14%. If only 1 CT scan from HDRB was assumed available, the relative uncertainty (standard deviation or SD) of the NTCP value for an arbitrary patient was 20-30%, whereas 4 CT scans provided an uncertainty of 12-13%. The NTCP for the rectum was almost consistent with the clinical frequency of late effects, whereas the NTCP for bladder was too high. To obtain reliable (SD of 12-13%) NTCP values, 3-4 CT scans are needed during 5-7 fractions of HDRB treatments.
Docquière, Nicolas; Bondiau, Pierre-Yves; Balosso, Jacques
2016-01-01
Background The equivalent uniform dose (EUD) radiobiological model can be applied to lung cancer treatment plans to estimate the tumor control probability (TCP) and the normal tissue complication probability (NTCP) using different dose calculation models. Then, based on the different calculated doses, the quality adjusted life years (QALY) score can be assessed versus the uncomplicated tumor control probability (UTCP) concept in order to predict the overall outcome of the different treatment plans. Methods Nine lung cancer cases were included in this study. For each patient, two treatment plans were generated. The doses were calculated respectively from the pencil beam model, as pencil beam convolution (PBC) with 1D density correction using Modified Batho's (MB) method, and from the point kernel model, as the anisotropic analytical algorithm (AAA), using exactly the same prescribed dose, normalized to 100% at the isocentre point inside the target, and the same beam arrangements. The radiotherapy outcomes and QALY were compared. The bootstrap method was used to improve the 95% confidence interval (95% CI) estimation. The Wilcoxon paired test was used to calculate P values. Results Compared to AAA, considered more realistic, the PBC-MB overestimated the TCP while underestimating the NTCP, P<0.05. Thus the UTCP and the QALY score were also overestimated. Conclusions To correlate measured QALYs obtained from the follow-up of patients with QALYs calculated from DVH metrics, more accurate dose calculation models should first be integrated into clinical use. Second, clinically measured outcomes are necessary to tune the parameters of the NTCP model used to link the treatment outcome with the QALY. Only after these two steps would the comparison and ranking of different radiotherapy plans be possible, avoiding over/under-estimation of QALY and any other clinico-biological estimates. PMID:28149761
Modality, probability, and mental models.
Hinterecker, Thomas; Knauff, Markus; Johnson-Laird, P N
2016-10-01
We report 3 experiments investigating novel sorts of inference, such as: A or B or both. Therefore, possibly (A and B). Where the contents were sensible assertions, for example, Space tourism will achieve widespread popularity in the next 50 years or advances in material science will lead to the development of antigravity materials in the next 50 years, or both. Most participants accepted the inferences as valid, though they are invalid in modal logic and in probabilistic logic too. But, the theory of mental models predicts that individuals should accept them. In contrast, inferences of this sort—A or B but not both. Therefore, A or B or both—are both logically valid and probabilistically valid. Yet, as the model theory also predicts, most reasoners rejected them. The participants’ estimates of probabilities showed that their inferences tended not to be based on probabilistic validity, but that they did rate acceptable conclusions as more probable than unacceptable conclusions. We discuss the implications of the results for current theories of reasoning.
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-01-01
Background and Purpose Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Material and Methods Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. Results The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFC-standard model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d. = 0.09) and 3.9 (s.d. = 2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. Conclusions The RFC-standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. PMID:27240717
Probability ratings in claustrophobic patients and normal controls.
Ost, L G; Csatlos, P
2000-11-01
Forty-nine DSM-IV diagnosed claustrophobics and 49 sex- and age-matched community controls, without any current or past psychiatric disorder, were asked to estimate the probability that three types of events would occur if they were in the described situations. The events were claustrophobic, generally negative, and positive in nature. The results showed that claustrophobics significantly overestimated the probability of events they specifically feared, i.e. the claustrophobic events, while there was no difference between the groups regarding generally negative events and positive events. This finding remained when the higher scores for claustrophobics on the Claustrophobia scale and the Anxiety Sensitivity Index were covaried out. The conclusion that can be drawn is that claustrophobics' probability ratings are characterized by distortions that are specifically connected to anxiety-arousing events and not negative events in general. The hypothesis is proposed that this may be explained by an exaggerated use of simplified rules-of-thumb for probability estimations that build on availability in memory, simulation, and representativity.
Calibrating Subjective Probabilities Using Hierarchical Bayesian Models
NASA Astrophysics Data System (ADS)
Merkle, Edgar C.
A body of psychological research has examined the correspondence between a judge's subjective probability of an event's outcome and the event's actual outcome. The research generally shows that subjective probabilities are noisy and do not match the "true" probabilities. However, subjective probabilities are still useful for forecasting purposes if they bear some relationship to true probabilities. The purpose of the current research is to exploit relationships between subjective probabilities and outcomes to create improved, model-based probabilities for forecasting. Once the model has been trained in situations where the outcome is known, it can then be used in forecasting situations where the outcome is unknown. These concepts are demonstrated using experimental psychology data, and potential applications are discussed.
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
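The rectangular-region probabilities such a program computes can be reproduced today with library calls via inclusion-exclusion on the joint CDF (an illustrative sketch, not the original program):

```python
from scipy.stats import multivariate_normal

def rect_prob(mean, cov, lower, upper):
    """P(a1 < X <= b1, a2 < Y <= b2) for a bivariate normal, by
    inclusion-exclusion on the joint CDF F:
    F(b1, b2) - F(a1, b2) - F(b1, a2) + F(a1, a2)."""
    rv = multivariate_normal(mean, cov)
    (a1, a2), (b1, b2) = lower, upper
    return (rv.cdf([b1, b2]) - rv.cdf([a1, b2])
            - rv.cdf([b1, a2]) + rv.cdf([a1, a2]))
```

For independent standard normals, the square [-1, 1] x [-1, 1] has probability (Φ(1) - Φ(-1))² ≈ 0.466.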
Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard
2014-02-01
Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to explore than those with heart abnormalities. A reliable method for assessing pretest probability of a normal TTE may optimize management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age > 70 years. In the prospective phase, the prevalences of normality were 72% and 25% in the high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P = 0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
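When the mixing distribution for p is a Beta, the zero-inflated binomial mixture likelihood described above has a closed form via the beta-binomial (a sketch of that one special case; the paper considers several mixing classes, and the names here are illustrative):

```python
import numpy as np
from scipy.stats import betabinom

def zib_betabinom_loglik(psi, a, b, counts, J):
    """Log-likelihood of site-occupancy data with heterogeneous detection.
    Each site is occupied with probability psi; if occupied, the number of
    detections over J visits is Beta-Binomial(J, a, b) (p ~ Beta(a, b));
    an unoccupied site always yields zero detections.
    counts: number of detections per site."""
    y = np.asarray(counts)
    lik = psi * betabinom.pmf(y, J, a, b) + (1.0 - psi) * (y == 0)
    return float(np.sum(np.log(lik)))
```

Maximizing this over (psi, a, b) gives the integrated-likelihood inference the abstract refers to.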
Modeling gap probability in discontinuous vegetation canopies
NASA Technical Reports Server (NTRS)
Li, Xiaowen; Strahler, Alan H.
1987-01-01
In the present model for the gap probability of a discontinuous vegetation canopy, the assumption of a negative exponential attenuation within individual plant canopies will yield a problem involving the distribution distances within canopies through which a ray will pass. If, however, the canopies intersect and/or overlap, so that foliage density remains constant within the overlap area, the problem can be approached with two types of approximations. Attention is presently given to the case of a comparison of modeled gap probabilities with those observed for a stand of Maryland pine, which shows good agreement for zenith angles of illumination up to about 45 deg.
Computational Modelling and Simulation Fostering New Approaches in Learning Probability
ERIC Educational Resources Information Center
Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid
2006-01-01
Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…
A Quantum Probability Model of Causal Reasoning
Trueblood, Jennifer S.; Busemeyer, Jerome R.
2012-01-01
People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects thus proving to be a viable new candidate for modeling human judgment. PMID:22593747
A probability distribution model for rain rate
NASA Technical Reports Server (NTRS)
Kedem, Benjamin; Pavlopoulos, Harry; Guan, Xiaodong; Short, David A.
1994-01-01
A systematic approach is suggested for modeling the probability distribution of rain rate. Rain rate, conditional on rain and averaged over a region, is modeled as a temporally homogeneous diffusion process with appropriate boundary conditions. The approach requires a drift coefficient (the conditional average instantaneous rate of change of rain intensity) as well as a diffusion coefficient (the conditional average magnitude of the rate of growth and decay of rain rate about its drift). Under certain assumptions on the drift and diffusion coefficients compatible with rain rate, a new parametric family, containing the lognormal distribution, is obtained for the continuous part of the stationary limit probability distribution. The family is fitted to tropical rainfall from Darwin and Florida, and it is found that the lognormal distribution provides adequate fits as compared with other members of the family and also with the gamma distribution.
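The lognormal member of the limiting family can be fitted to conditional-on-rain rate data with standard tools (a sketch on synthetic data, not the Darwin or Florida datasets):

```python
import numpy as np
from scipy.stats import lognorm

# synthetic positive rain rates (mm/h); the true parameters are mu=0.5, sigma=0.8
rng = np.random.default_rng(0)
rates = rng.lognormal(mean=0.5, sigma=0.8, size=2000)

# fix the location at zero, as appropriate for a positive-valued rate;
# scipy's shape parameter is sigma and exp(mu) is the scale
sigma_hat, _, scale_hat = lognorm.fit(rates, floc=0)
mu_hat = np.log(scale_hat)
```

Goodness of fit against alternatives such as the gamma distribution can then be compared via maximized log-likelihoods, as in the abstract.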
Probability Model for Designing Environment Condition
NASA Astrophysics Data System (ADS)
Lubis, Iswar; Nasution Mahyuddin, K. M.
2017-01-01
Transport equipment has the potential to contribute to environmental pollution. This pollution affects the welfare of the environment; thus, the potential of the environment to block the pollution needs to be raised. A design based on probability models of the determining factors in the environment should be a concern. This paper aims to reveal such scenarios as a starting point and, based on surveys, to support the ability to choose a design.
Statistical Physics of Pairwise Probability Models
Roudi, Yasser; Aurell, Erik; Hertz, John A.
2009-01-01
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the mean values and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models. PMID:19949460
Tai An; Erickson, Beth; Li, X. Allen
2009-05-01
Purpose: The ability to predict normal tissue complication probability (NTCP) is essential for NTCP-based treatment planning. The purpose of this work is to estimate the Lyman NTCP model parameters for liver irradiation from published clinical data of different fractionation regimens. A new expression of normalized total dose (NTD) is proposed to convert NTCP data between different treatment schemes. Method and Materials: The NTCP data of radiation-induced liver disease (RILD) from external beam radiation therapy for primary liver cancer patients were selected for analysis. The data were collected from 4 institutions for tumor sizes in the range of 8-10 cm. The dose per fraction ranged from 1.5 Gy to 6 Gy. A modified linear-quadratic model with two components corresponding to radiosensitive and radioresistant cells in the normal liver tissue was proposed to understand the new NTD formalism. Results: There are five parameters in the model: TD50, m, n, α/β and f. With two parameters n and α/β fixed to be 1.0 and 2.0 Gy, respectively, the extracted parameters from the fitting are TD50(1) = 40.3 ± 8.4 Gy, m = 0.36 ± 0.09, f = 0.156 ± 0.074 Gy and TD50(1) = 23.9 ± 5.3 Gy, m = 0.41 ± 0.15, f = 0.0 ± 0.04 Gy for patients with liver cirrhosis scores of Child-Pugh A and Child-Pugh B, respectively. The fitting results showed that the liver cirrhosis score significantly affects the fractional dose dependence of the NTD. Conclusion: The Lyman parameters generated presently and the new form of NTD may be used to predict NTCP for treatment planning of innovative liver irradiation with different fractionations, such as hypofractionated stereotactic body radiation therapy.
Modeling Capture Probabilities Of Potentially Habitable Exomoons
NASA Astrophysics Data System (ADS)
Sharzer, Charles; Porter, S.; Grundy, W.
2012-01-01
The satellites of extrasolar planets (exomoons) have been theorized to be a viable location for extraterrestrial life. New methods are quickly developing to detect their presence by examining the transits of extrasolar gas giants. In addition, models have shown that the probability for a captured exomoon to stabilize into a near-circular orbit at a close distance to a planet is greater than 50 percent. In this study, we model the interaction, potentially resulting in a capture, between a gas giant and a binary moving toward it on a hyperbolic trajectory. We find that, for certain conditions, capture of an exomoon is not just possible, it is overwhelmingly likely. We hope to use the results of this experiment to determine initial parameters for a subsequent simulation modeling a physical system of a gas giant and binary orbiting a star.
Probability based models for estimation of wildfire risk
Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit
2004-01-01
We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...
Probabilities on cladograms: introduction to the alpha model
NASA Astrophysics Data System (ADS)
Ford, Daniel J.
2005-11-01
The alpha model, a parametrized family of probabilities on cladograms (rooted binary leaf labeled trees), is introduced. This model is Markovian self-similar, deletion-stable (sampling consistent), and passes through the Yule, Uniform and Comb models. An explicit formula is given to calculate the probability of any cladogram or tree shape under the alpha model. Sackin's and Colless' index are shown to be O(n^{1+α}) with asymptotic covariance equal to 1. Thus the expected depth of a random leaf with n leaves is O(n^α). The number of cherries on a random alpha tree is shown to be asymptotically normal with known mean and variance. Finally the shape of published phylogenies is examined, using trees from Treebase.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n × n" correlation matrix of the "χ_i" and the standardized multivariate…
PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties
Caron, D. S.; Browne, E.; Norman, E. B.
2009-08-21
The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
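The normalization arithmetic can be sketched as follows (a simplified stand-in assuming independent uncertainties and quadrature propagation; PABS itself is written in Java and its actual uncertainty treatment may be more elaborate):

```python
from math import sqrt

def normalize_emission_probs(rel_intensities, rel_uncs, norm, norm_unc):
    """Scale relative emission intensities I_i to absolute probabilities
    P_i = N * I_i, where N is the normalization factor, and propagate
    uncertainties in quadrature:
    sigma_P = P * sqrt((sigma_N / N)^2 + (sigma_I / I)^2)."""
    out = []
    for I, sI in zip(rel_intensities, rel_uncs):
        P = norm * I
        out.append((P, P * sqrt((norm_unc / norm) ** 2 + (sI / I) ** 2)))
    return out
```

For example, a relative intensity of 100 ± 3 with a normalization factor of 0.5 ± 0.02 yields an absolute probability of 50.0 ± 2.5.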
Model estimates hurricane wind speed probabilities
NASA Astrophysics Data System (ADS)
Mumane, Richard J.; Barton, Chris; Collins, Eric; Donnelly, Jeffrey; Eisner, James; Emanuel, Kerry; Ginis, Isaac; Howard, Susan; Landsea, Chris; Liu, Kam-biu; Malmquist, David; McKay, Megan; Michaels, Anthony; Nelson, Norm; O Brien, James; Scott, David; Webb, Thompson, III
In the United States, intense hurricanes (category 3, 4, and 5 on the Saffir/Simpson scale) with winds greater than 50 m/s have caused more damage than any other natural disaster [Pielke and Pielke, 1997]. Accurate estimates of wind speed exceedance probabilities (WSEP) due to intense hurricanes are therefore of great interest to (re)insurers, emergency planners, government officials, and populations in vulnerable coastal areas. The historical record of U.S. hurricane landfall is relatively complete only from about 1900, and most model estimates of WSEP are derived from this record. During the 1899-1998 period, only two category-5 and 16 category-4 hurricanes made landfall in the United States. The historical record therefore provides only a limited sample of the most intense hurricanes.
Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
Jensen, Ingelise; Carl, Jesper; Lund, Bente; Larsen, Erik H.; Nielsen, Jane
2011-07-01
Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications.
NASA Astrophysics Data System (ADS)
Ma, Lijun
2001-11-01
A recent multi-institutional clinical study suggested possible benefits of lowering the prescription isodose lines for stereotactic radiosurgery procedures. In this study, we investigate the dependence of the normal brain integral dose and the normal tissue complication probability (NTCP) on the prescription isodose values for γ-knife radiosurgery. An analytical dose model was developed for γ-knife treatment planning. The dose model was commissioned by fitting the measured dose profiles for each helmet size. The dose model was validated by comparing its results with the Leksell gamma plan (LGP, version 5.30) calculations. The normal brain integral dose and the NTCP were computed and analysed for an ensemble of treatment cases. The functional dependence of the normal brain integral dose and the NCTP versus the prescribing isodose values was studied for these cases. We found that the normal brain integral dose and the NTCP increase significantly when lowering the prescription isodose lines from 50% to 35% of the maximum tumour dose. Alternatively, the normal brain integral dose and the NTCP decrease significantly when raising the prescribing isodose lines from 50% to 65% of the maximum tumour dose. The results may be used as a guideline for designing future dose escalation studies for γ-knife applications.
Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L.
2014-01-01
Abstract Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls (Ambystoma mexicanum) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary‐housed males and group‐housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury probably explains these limb defects, and not abnormal development, because limbs with normal anatomy regenerated after performing rostral amputations. We infer that only 43% of AGSC larvae will present four anatomically normal looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury. PMID:25745564
Stochastic population dynamic models as probability networks
M.E. and D.C. Lee. Borsuk
2009-01-01
The dynamics of a population and its response to environmental change depend on the balance of birth, death and age-at-maturity, and there have been many attempts to mathematically model populations based on these characteristics. Historically, most of these models were deterministic, meaning that the results were strictly determined by the equations of the model and...
Probability and Statistics in Sensor Performance Modeling
2010-12-01
Acoustic or electromagnetic waves are scattered by both objects and turbulent wind. A version of the Rice-Nakagami model is considered, along with continuous statistical models including the Gaussian, lognormal, exponential, gamma, and transformed Rice-Nakagami distributions, as well as a discrete model.
Probability Modeling and Thinking: What Can We Learn from Practice?
ERIC Educational Resources Information Center
Pfannkuch, Maxine; Budgett, Stephanie; Fewster, Rachel; Fitch, Marie; Pattenwise, Simeon; Wild, Chris; Ziedins, Ilze
2016-01-01
Because new learning technologies are enabling students to build and explore probability models, we believe that there is a need to determine the big enduring ideas that underpin probabilistic thinking and modeling. By uncovering the elements of the thinking modes of expert users of probability models we aim to provide a base for the setting of…
Bis[aminoguanidinium(1+)] hexafluorozirconate(IV): redeterminations and normal probability analysis.
Ross, C R; Bauer, M R; Nielson, R M; Abrahams, S C
2004-01-01
The crystal structure of bis[aminoguanidinium(1+)] hexafluorozirconate(IV), (CH(7)N(4))(2)[ZrF(6)], originally reported by Bukvetskii, Gerasimenko & Davidovich [Koord. Khim. (1990), 16, 1479-1484], has been redetermined independently using two different samples. Normal probability analysis confirms the reliability of all refined parameter standard uncertainties in the new determinations, whereas systematic error detectable in the earlier work leads to a maximum difference of 0.069 (6) Å in atomic positions between the previously reported and present values of an F-atom y coordinate. Radiation-induced structural damage in aminoguanidinium polyfluorozirconates may result from minor displacements of H atoms in weak N-H...F bonds to new potential minima and subsequent anionic realignment.
Estimating Prior Model Probabilities Using an Entropy Principle
NASA Astrophysics Data System (ADS)
Ye, M.; Meyer, P. D.; Neuman, S. P.; Pohlmann, K.
2004-12-01
Considering conceptual model uncertainty is an important process in environmental uncertainty/risk analyses. Bayesian Model Averaging (BMA) (Hoeting et al., 1999) and its Maximum Likelihood version, MLBMA (Neuman, 2003), jointly assess the predictive uncertainty of competing alternative models to avoid the bias and underestimation of uncertainty caused by relying on a single model. These methods provide the posterior distribution (or, equivalently, leading moments) of quantities of interest for decision-making. One important step of these methods is to specify prior probabilities of alternative models for the calculation of posterior model probabilities. This problem, however, has not been satisfactorily resolved, and equally likely prior model probabilities are usually accepted as a neutral choice. Ye et al. (2004) have shown that whereas using equally likely prior model probabilities has led to acceptable geostatistical estimates of log air permeability data from fractured unsaturated tuff at the Apache Leap Research Site (ALRS) in Arizona, identifying more accurate prior probabilities can improve these estimates. In this paper we present a new methodology to evaluate prior model probabilities by maximizing Shannon's entropy with restrictions postulated a priori based on model plausibility relationships. It yields optimum prior model probabilities conditional on the prior information used to postulate the restrictions. The restrictions and corresponding prior probabilities can be modified as more information becomes available. The proposed method is relatively easy to use in practice, as it is generally less difficult for experts to postulate relationships between models than to specify numerical prior model probability values. Log score, mean square prediction error (MSPE) and mean absolute predictive error (MAPE) criteria consistently show that applying our new method to the ALRS data reduces geostatistical estimation errors provided relationships between models are
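The entropy-maximization step described above can be sketched numerically. The plausibility relations below (model 1 judged at least twice as plausible as model 2, model 2 at least as plausible as model 3) are hypothetical stand-ins for the expert-postulated restrictions, not the ones used at the ALRS:

```python
import numpy as np
from scipy.optimize import minimize

def neg_entropy(p):
    """Negative Shannon entropy; minimized so entropy is maximized."""
    p = np.clip(p, 1e-12, 1.0)  # guard against log(0)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq",   "fun": lambda p: np.sum(p) - 1.0},    # probabilities sum to 1
    {"type": "ineq", "fun": lambda p: p[0] - 2.0 * p[1]},  # p1 >= 2 * p2
    {"type": "ineq", "fun": lambda p: p[1] - p[2]},        # p2 >= p3
]

res = minimize(neg_entropy, x0=[0.6, 0.25, 0.15],
               bounds=[(0.0, 1.0)] * 3,
               constraints=constraints, method="SLSQP")
print(np.round(res.x, 3))  # maximum-entropy priors honoring the relations
```

Entropy maximization pulls the priors toward uniform; the postulated inequalities stop them at the most even distribution the relations allow, which is exactly the "neutral given what we know" behaviour the abstract describes.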
Modelling spruce bark beetle infestation probability
Paulius Zolubas; Jose Negron; A. Steven Munson
2009-01-01
Spruce bark beetle (Ips typographus L.) risk model, based on pure Norway spruce (Picea abies Karst.) stand characteristics in experimental and control plots was developed using classification and regression tree statistical technique under endemic pest population density. The most significant variable in spruce bark beetle...
Probability model for analyzing fire management alternatives: theory and structure
Frederick W. Bratten
1982-01-01
A theoretical probability model has been developed for analyzing program alternatives in fire management. It includes submodels or modules for predicting probabilities of fire behavior, fire occurrence, fire suppression, effects of fire on land resources, and financial effects of fire. Generalized "fire management situations" are used to represent actual fire...
Using the Subjective Probability Model to Evaluate Academic Debate Arguments.
ERIC Educational Resources Information Center
Allen, Mike; Kellermann, Kathy
1988-01-01
Explores the worth of high impact/low probability arguments as "real world" policy arguments. Evaluates four National Debate Tournament (NDT) final round disadvantages by students using the subjective probability model. Finds that although NDT disadvantages were perceived to be a technically sound form of argumentation, they were not…
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
This document replaces Cape Kennedy empirical wind component statistics which are presently being used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as an adequate statistical model to represent component winds at Cape Kennedy. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. Results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Cape Kennedy, Florida.
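A tabulation of this kind follows directly from the Gaussian model: each percentile of a wind component is just a normal quantile. A minimal sketch with a hypothetical mean and standard deviation for one component, altitude and month (not Cape Kennedy values):

```python
from scipy.stats import norm

# Hypothetical headwind-component statistics (m/s) for one altitude/month;
# illustrative only, not data from the document above.
mean_u, sd_u = 5.0, 12.0

# A few of the selected percentiles; 0.135% and 99.865% are the +/- 3-sigma points.
percentiles = [0.135, 2.28, 15.9, 50.0, 84.1, 97.72, 99.865]
table = {p: norm.ppf(p / 100.0, loc=mean_u, scale=sd_u) for p in percentiles}

for p, v in table.items():
    print(f"{p:7.3f}%  {v:8.2f} m/s")
```

The symmetric percentile pairs (0.135/99.865, 2.28/97.72) sit at whole-sigma offsets, which is why a Gaussian fit makes such tables compact to publish.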
Gendist: An R Package for Generated Probability Distribution Models.
Abu Bakar, Shaiful Anuar; Nadarajah, Saralees; Absl Kamarul Adzhar, Zahrul Azmir; Mohamed, Ibrahim
2016-01-01
In this paper, we introduce the R package gendist that computes the probability density function, the cumulative distribution function, the quantile function and generates random values for several generated probability distribution models including the mixture model, the composite model, the folded model, the skewed symmetric model and the arc tan model. These models are extensively used in the literature and the R functions provided here are flexible enough to accommodate various univariate distributions found in other R packages. We also show its applications in graphing, estimation, simulation and risk measurements.
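gendist itself is an R package; the sketch below is a Python analog of just one of the families it covers, the two-component mixture model, to illustrate the pdf/cdf/random-generation interface such packages expose. The component distributions and weight are arbitrary illustrations, not gendist defaults:

```python
import numpy as np
from scipy import stats

class Mixture:
    """Two-component mixture: weight w on comp1, (1 - w) on comp2."""

    def __init__(self, w, comp1, comp2):
        self.w, self.c1, self.c2 = w, comp1, comp2

    def pdf(self, x):  # weighted sum of component densities
        return self.w * self.c1.pdf(x) + (1 - self.w) * self.c2.pdf(x)

    def cdf(self, x):
        return self.w * self.c1.cdf(x) + (1 - self.w) * self.c2.cdf(x)

    def rvs(self, size, rng):  # pick a component per draw, then sample it
        pick = rng.random(size) < self.w
        return np.where(pick,
                        self.c1.rvs(size, random_state=rng),
                        self.c2.rvs(size, random_state=rng))

mix = Mixture(0.7, stats.norm(0, 1), stats.norm(4, 2))
x = mix.rvs(10_000, np.random.default_rng(0))
print(round(float(x.mean()), 2))  # sample mean near 0.7*0 + 0.3*4 = 1.2
```

The composite, folded, skewed-symmetric and arc tan models mentioned above fit the same pattern: a handful of closed-form transformations of base pdf/cdf/quantile functions.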
Probability of Future Observations Exceeding One-Sided, Normal, Upper Tolerance Limits
Edwards, Timothy S.
2014-10-29
Normal tolerance limits are frequently used in dynamic environments specifications of aerospace systems as a method to account for aleatory variability in the environments. Upper tolerance limits, when used in this way, are computed from records of the environment and used to enforce conservatism in the specification by describing upper extreme values the environment may take in the future. Components and systems are designed to withstand these extreme loads to ensure they do not fail under normal use conditions. The degree of conservatism in the upper tolerance limits is controlled by specifying the coverage and confidence level (usually written in “coverage/confidence” form). In high-consequence systems it is common to specify tolerance limits at 95% or 99% coverage and confidence at the 50% or 90% level. Despite the ubiquity of upper tolerance limits in the aerospace community, analysts and decision-makers frequently misinterpret their meaning. The misinterpretation extends into the standards that govern much of the acceptance and qualification of commercial and government aerospace systems. As a result, the risk of a future observation of the environment exceeding the upper tolerance limit is sometimes significantly underestimated by decision makers. This note explains the meaning of upper tolerance limits and a related measure, the upper prediction limit. The objective of this work is to clarify the probability of exceeding these limits in flight so that decision-makers can better understand the risk associated with exceeding design and test levels during flight and balance the cost of design and development with that of mission failure.
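The k-factor behind a one-sided normal upper tolerance limit (limit = x̄ + k·s) can be computed from a noncentral t quantile. A sketch assuming the standard noncentral-t construction, not anything specific to this note:

```python
import math
from scipy.stats import nct, norm

def k_factor(n, coverage, confidence):
    """One-sided normal upper tolerance factor from n observations."""
    delta = norm.ppf(coverage) * math.sqrt(n)  # noncentrality parameter
    return nct.ppf(confidence, df=n - 1, nc=delta) / math.sqrt(n)

# A 95%/90% (coverage/confidence) factor shrinks toward z_0.95 ~ 1.645
# as the sample grows and the spread estimate firms up.
for n in (10, 100, 10_000):
    print(n, round(k_factor(n, 0.95, 0.90), 3))
```

The gap between k and the coverage quantile z_p is the price of demanding confidence from a small sample, which is exactly where the misinterpretation risk described above lives.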
A model of subjective probabilities from small groups
NASA Technical Reports Server (NTRS)
Ferrell, W. R.; Rehm, K.
1982-01-01
Methods for aggregating the opinions of individual forecasters in order to improve the quality of probabilistic forecasts are presented. Experimental results obtained by Seaver in his study of probability judgments by groups of four people are considered. The decision variable partition model of subjective probability and a simple model of the effects of interaction on judgments were used to simulate the group judgment of discrete probabilities investigated by Seaver. The initial results of the simulation are very promising in that (1) they show the principal effects Seaver observed and (2) these effects can, for the most part, be traced to specific characteristics of the models.
Review of Literature for Model Assisted Probability of Detection
Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.; Anderson, Michael T.
2014-09-30
This is a draft technical letter report for an NRC client documenting a literature review of model-assisted probability of detection (MAPOD) for potential application to nuclear power plant components to improve field NDE performance estimations.
Aggregate and Individual Replication Probability within an Explicit Model of the Research Process
ERIC Educational Resources Information Center
Miller, Jeff; Schwarz, Wolf
2011-01-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
NASA Astrophysics Data System (ADS)
Gyenis, Balázs
2017-02-01
We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV.
Model of the amplitude probability distribution of atmospheric radio noise
NASA Astrophysics Data System (ADS)
Kabanov, V. V.
1987-08-01
The proposed four-parameter model is a combination of two sets of distributions whose intersection is the Hall model. A comparison of the model with experimental data on narrow-band and broadband reception at frequencies from 400 Hz to 10 MHz shows that the model guarantees the accuracy of the description of statistically valid features of the probability distributions of atmospheric radio noise (ARN). The sufficiency of existing techniques for the suppression of ARN is demonstrated. Expressions for calculating the noise level at receiver outputs are obtained which take into account the dependence of the integrated probability distribution of the envelope on the reception band.
Fixation probability for lytic viruses: the attachment-lysis model.
Patwa, Z; Wahl, L M
2008-09-01
The fixation probability of a beneficial mutation is extremely sensitive to assumptions regarding the organism's life history. In this article we compute the fixation probability using a life-history model for lytic viruses, a key model organism in experimental studies of adaptation. The model assumes that attachment times are exponentially distributed, but that the lysis time, the time between attachment and host cell lysis, is constant. We assume that the growth of the wild-type viral population is controlled by periodic sampling (population bottlenecks) and also include the possibility that clearance may occur at a constant rate, for example, through washout in a chemostat. We then compute the fixation probability for mutations that increase the attachment rate, decrease the lysis time, increase the burst size, or reduce the probability of clearance. The fixation probability of these four types of beneficial mutations can be vastly different and depends critically on the time between population bottlenecks. We also explore mutations that affect lysis time, assuming that the burst size is constrained by the lysis time, for experimental protocols that sample either free phage or free phage and artificially lysed infected cells. In all cases we predict that the fixation probability of beneficial alleles is remarkably sensitive to the time between population bottlenecks.
Estuarine shoreline and sandline change model skill and predicted probabilities
Smith, Kathryn E. L.; Passeri, Davina; Plant, Nathaniel G.
2016-01-01
The Barrier Island and Estuarine Wetland Physical Change Assessment was created to calibrate and test probability models of barrier island estuarine shoreline and sandline change for study areas in Virginia, Maryland, and New Jersey. The models examined the influence of hydrologic and physical variables related to long-term and event-driven (Hurricane Sandy) estuarine back-barrier shoreline and overwash (sandline) change. Input variables were constructed into a Bayesian Network (BN) using Netica. To evaluate the ability of the BN to reproduce the observations used to train the model, the skill, log likelihood ratio and probability predictions were utilized. These data are the probability and skill metrics for all four models: the long-term (LT) back-barrier shoreline change, event-driven (HS) back-barrier shoreline change, long-term (LT) sandline change, and event-driven (HS) sandline change.
Normalization of Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.
2011-01-01
Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities, by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining the normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time; all the unobservable substitutions that actually occurred in the evolutionary history are omitted. To take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One advantage of the probability representation model is that it can incorporate a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
Gap probability - Measurements and models of a pecan orchard
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI
1992-01-01
Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle, made under the canopy looking upward at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown-level parameters include the shape of the crown envelope and the spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
Semenenko, Vladimir A.; Tarima, Sergey S.; Devisetty, Kiran; Pelizzari, Charles A.; Liauw, Stanley L.
2013-03-15
Purpose: To perform validation of risk predictions for late rectal toxicity (LRT) in prostate cancer obtained using a new approach to synthesize published normal tissue complication data. Methods and Materials: A published study survey was performed to identify the dose-response relationships for LRT derived from nonoverlapping patient populations. To avoid mixing models based on different symptoms, the emphasis was placed on rectal bleeding. The selected models were used to compute the risk estimates of grade 2+ and grade 3+ LRT for an independent validation cohort composed of 269 prostate cancer patients with known toxicity outcomes. Risk estimates from single studies were combined to produce consolidated risk estimates. An agreement between the actuarial toxicity incidence 3 years after radiation therapy completion and single-study or consolidated risk estimates was evaluated using the concordance correlation coefficient. Goodness of fit for the consolidated risk estimates was assessed using the Hosmer-Lemeshow test. Results: A total of 16 studies of grade 2+ and 5 studies of grade 3+ LRT met the inclusion criteria. The consolidated risk estimates of grade 2+ and 3+ LRT were constructed using 3 studies each. For grade 2+ LRT, the concordance correlation coefficient for the consolidated risk estimates was 0.537 compared with 0.431 for the best-fit single study. For grade 3+ LRT, the concordance correlation coefficient for the consolidated risk estimates was 0.477 compared with 0.448 for the best-fit single study. No evidence was found for a lack of fit for the consolidated risk estimates using the Hosmer-Lemeshow test (P=.531 and P=.397 for grade 2+ and 3+ LRT, respectively). Conclusions: In a large cohort of prostate cancer patients, selected sets of consolidated risk estimates were found to be more accurate predictors of LRT than risk estimates derived from any single study.
Simulation modeling of the probability of magmatic disruption of the potential Yucca Mountain Site
Crowe, B.M.; Perry, F.V.; Valentine, G.A.; Wallmann, P.C.; Kossik, R.
1993-11-01
The first phase of risk simulation modeling was completed for the probability of magmatic disruption of a potential repository at Yucca Mountain. E1, the recurrence rate of volcanic events, is modeled using bounds from active basaltic volcanic fields and midpoint estimates of E1. The cumulative probability curves for E1 are generated by simulation modeling using a form of a triangular distribution. The 50% estimates are about 5 to 8 × 10⁻⁸ events yr⁻¹. The simulation modeling shows that the cumulative probability distribution for E1 is more sensitive to the probability bounds than to the midpoint estimates. E2 (the disruption probability) is modeled through risk simulation using a normal distribution and midpoint estimates from multiple alternative stochastic and structural models. The 50% estimate of E2 is 4.3 × 10⁻³. The probability of magmatic disruption of the potential Yucca Mountain site is 2.5 × 10⁻⁸ yr⁻¹. This median estimate decreases to 9.6 × 10⁻⁹ yr⁻¹ if E1 is modified for the structural models used to define E2. The Repository Integration Program was tested to compare releases of a simulated repository (without volcanic events) to releases from time histories which may include volcanic disruptive events. Results show that the performance modeling can be used for sensitivity studies of volcanic effects.
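The two-stage risk simulation can be sketched with Python's standard library; every bound, midpoint, and spread below is an illustrative placeholder, not a value from the study:

```python
import random
import statistics

# Sketch of the risk-simulation idea: draw the recurrence rate E1 from a
# triangular distribution (lower bound, upper bound, midpoint) and the
# disruption probability E2 from a truncated normal around a midpoint
# estimate, then form the annual disruption probability E1 * E2.
# All numbers are placeholders, not the Yucca Mountain values.

random.seed(1)

def draw_e1():
    return random.triangular(1e-9, 1e-7, 5e-8)   # events per year

def draw_e2():
    return max(0.0, random.gauss(4.3e-3, 1e-3))  # dimensionless probability

samples = [draw_e1() * draw_e2() for _ in range(20000)]
median_risk = statistics.median(samples)
print(f"median annual disruption probability ~ {median_risk:.2e}")
```

The cumulative probability curve for the product is simply the empirical distribution of `samples`; sensitivity to the bounds versus the midpoint can be probed by perturbing the arguments of `draw_e1` and re-running.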
Probability distribution of forecasts based on the ETAS model
NASA Astrophysics Data System (ADS)
Harte, D. S.
2017-07-01
Earthquake probability forecasts based on a point process model, which is defined with a conditional intensity function (e.g. ETAS), are generally made by using the history of the process to the current point in time, and by then simulating over the future time interval over which a forecast is required. By repeating the simulation multiple times, an estimate of the mean number of events together with the empirical probability distribution of event counts can be derived. This can involve a considerable amount of computation. Here we derive a recursive procedure to approximate the expected number of events when forecasts are based on the ETAS model. To assess the associated uncertainty of this expected number, we then derive the probability generating function of the distribution of the forecasted number of events. This theoretically derived distribution is very complex; hence we show how it can be approximated using the negative binomial distribution.
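The closing step, matching a negative binomial to the mean and variance of an overdispersed forecast count distribution, can be sketched as follows; the counts here come from a toy gamma-Poisson mixture rather than an actual ETAS simulation:

```python
import math
import random

# Moment-matching a negative binomial to overdispersed event counts.
# A gamma-Poisson mixture is itself negative binomial, so the recovered
# parameters should land near the construction (shape 3 -> r ~ 3).

random.seed(7)

def poisson(lam):
    # Knuth's method; adequate for the small rates used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

counts = [poisson(random.gammavariate(3.0, 2.0)) for _ in range(5000)]

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)

# negative binomial: mean = r(1-p)/p, variance = r(1-p)/p^2
p_hat = mean / var            # requires var > mean (overdispersion)
r_hat = mean * p_hat / (1 - p_hat)
print(f"mean={mean:.2f} var={var:.2f} -> r={r_hat:.2f}, p={p_hat:.2f}")
```

In the paper's setting the mean comes from the recursive approximation and the variance from the probability generating function; the moment-matching step is the same.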
Scene text detection based on probability map and hierarchical model
NASA Astrophysics Data System (ADS)
Zhou, Gang; Liu, Yuehu
2012-06-01
Scene text detection is an important step for the text-based information extraction system. This problem is challenging due to the variations of size, unknown colors, and background complexity. We present a novel algorithm to robustly detect text in scene images. To segment text candidate connected components (CC) from images, a text probability map consisting of the text position and scale information is estimated by a text region detector. To filter out the non-text CCs, a hierarchical model consisting of two classifiers in cascade is utilized. The first stage of the model estimates text probabilities with unary component features. The second stage classifier is trained with both probability features and similarity features. Since the proposed method is learning-based, there are very few manual parameters required. Experimental results on the public benchmark ICDAR dataset show that our algorithm outperforms other state-of-the-art methods.
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
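The core feature construction, a transition probability matrix over thresholded difference values, can be sketched in a few lines. The tiny 4x6 "image", the threshold T=2, and the single (horizontal) direction are invented simplifications of the paper's four-direction setup:

```python
# Horizontal-difference transition probability matrix as an SVM feature
# vector. Toy array and threshold; illustration only.

T = 2  # clipping threshold; differences fall into 2*T + 1 states

img = [
    [12, 13, 13, 15, 14, 14],
    [11, 11, 12, 12, 13, 13],
    [10, 12, 12, 11, 11, 12],
    [13, 13, 14, 16, 15, 15],
]

def clip(v):
    return max(-T, min(T, v))

# horizontal difference array (each value minus its right neighbor)
diff = [[clip(row[j] - row[j + 1]) for j in range(len(row) - 1)] for row in img]

# count transitions between consecutive difference values within each row
n_states = 2 * T + 1
counts = [[0] * n_states for _ in range(n_states)]
for row in diff:
    for a, b in zip(row, row[1:]):
        counts[a + T][b + T] += 1

# row-normalize into transition probabilities; flatten into a feature vector
tpm = []
for r in counts:
    s = sum(r)
    tpm.append([c / s if s else 0.0 for c in r])
features = [p for r in tpm for p in r]
print(features)
```

With four directional Markov processes and two color components, the full feature vector is several such matrices concatenated before being handed to the multi-class SVM.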
Predicting the Probability of Lightning Occurrence with Generalized Additive Models
NASA Astrophysics Data System (ADS)
Fabsic, Peter; Mayr, Georg; Simon, Thorsten; Zeileis, Achim
2017-04-01
This study investigates the predictability of lightning in complex terrain. The main objective is to estimate the probability of lightning occurrence in the Alpine region during summertime afternoons (12-18 UTC) at a spatial resolution of 64 × 64 km². Lightning observations are obtained from the ALDIS lightning detection network. The probability of lightning occurrence is estimated using generalized additive models (GAM). GAMs provide a flexible modelling framework to estimate the relationship between covariates and the observations. The covariates, besides spatial and temporal effects, include numerous meteorological fields from the ECMWF ensemble system. The optimal model is chosen based on a forward selection procedure with out-of-sample mean squared error as a performance criterion. Our investigation shows that convective precipitation and mid-layer stability are the most influential meteorological predictors. Both exhibit intuitive, non-linear trends: higher values of convective precipitation indicate higher probability of lightning, and large values of the mid-layer stability measure imply low lightning potential. The performance of the model was evaluated against a climatology model containing both spatial and temporal effects. Taking the climatology model as a reference forecast, our model attains a Brier Skill Score of approximately 46%. The model's performance can be further enhanced by incorporating the information about lightning activity from the previous time step, which yields a Brier Skill Score of 48%. These scores show that the method is able to extract valuable information from the ensemble to produce reliable spatial forecasts of the lightning potential in the Alps.
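The Brier Skill Score used to evaluate the model against climatology is straightforward to compute; a sketch with invented forecasts and outcomes:

```python
# Brier score for probabilistic forecasts of a binary event, and the
# Brier Skill Score (BSS) relative to a reference (climatology) forecast.
# All numbers are made up for illustration.

def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]                    # lightning yes/no
model    = [0.8, 0.2, 0.1, 0.7, 0.6, 0.3, 0.2, 0.1, 0.9, 0.2]
climo    = [0.4] * len(outcomes)                              # constant climatology

bss = 1.0 - brier(model, outcomes) / brier(climo, outcomes)
print(f"BSS = {bss:.2f}")   # positive values mean skill over climatology
```

A BSS of 0 means no improvement over the reference; 1 would be a perfect forecast, matching the convention behind the 46% and 48% figures quoted above.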
Defining Predictive Probability Functions for Species Sampling Models.
Lee, Jaeyong; Quintana, Fernando A; Müller, Peter; Trippa, Lorenzo
2013-01-01
We review the class of species sampling models (SSM). In particular, we investigate the relation between the exchangeable partition probability function (EPPF) and the predictive probability function (PPF). It is straightforward to define a PPF from an EPPF, but the converse is not necessarily true. In this paper we introduce the notion of putative PPFs and show novel conditions for a putative PPF to define an EPPF. We show that all possible PPFs in a certain class have to define (unnormalized) probabilities for cluster membership that are linear in cluster size. We give a new necessary and sufficient condition for arbitrary putative PPFs to define an EPPF. Finally, we show posterior inference for a large class of SSMs with a PPF that is not linear in cluster size and discuss a numerical method to derive its PPF.
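The simplest PPF that is linear in cluster size is the one induced by the Dirichlet process (the Chinese restaurant process); a minimal sketch:

```python
# Chinese restaurant process PPF: a new item joins an existing cluster
# with probability proportional to the cluster's size, or opens a new
# cluster with probability proportional to the concentration alpha.

def crp_ppf(cluster_sizes, alpha):
    """Return probabilities for joining each existing cluster, then a new one."""
    total = sum(cluster_sizes) + alpha
    probs = [s / total for s in cluster_sizes]
    probs.append(alpha / total)   # probability of opening a new cluster
    return probs

print(crp_ppf([3, 1], alpha=1.0))   # -> [0.6, 0.2, 0.2]
```

Here the unnormalized cluster-membership probabilities are exactly the cluster sizes, the linear-in-cluster-size form the paper characterizes; nonlinear putative PPFs are the ones needing the extra conditions to define a valid EPPF.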
Probability bounds analysis for nonlinear population ecology models.
Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A
2015-09-01
Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
Modeling highway travel time distribution with conditional probability models
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling; Han, Lee
2014-01-01
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provides a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information is added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
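The convolution step can be sketched for discrete link travel time distributions. This shows only the independence case (the paper refines it with link-to-link conditional probabilities), and all times and probabilities are invented:

```python
# Route travel time as the convolution of two discrete link travel time
# distributions, each a {travel_time: probability} dict. Independence is
# assumed here; the conditional-probability refinement replaces the plain
# product pa * pb with P(tb | ta) * pa.

def convolve(dist_a, dist_b):
    """Convolve two {travel_time: probability} dicts."""
    out = {}
    for ta, pa in dist_a.items():
        for tb, pb in dist_b.items():
            out[ta + tb] = out.get(ta + tb, 0.0) + pa * pb
    return out

link1 = {10: 0.5, 12: 0.3, 15: 0.2}   # minutes -> probability
link2 = {8: 0.6, 11: 0.4}

route = convolve(link1, link2)
print(sorted(route.items()))
```

Note how distinct link combinations that sum to the same route time (15+8 and 12+11 both give 23 minutes) accumulate into a single probability mass.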
Normal peer models and autistic children's learning.
Egel, A L; Richman, G S; Koegel, R L
1981-01-01
Present research and legislation regarding mainstreaming autistic children into normal classrooms have raised the importance of studying whether autistic children can benefit from observing normal peer models. The present investigation systematically assessed whether autistic children's learning of discrimination tasks could be improved if they observed normal children perform the tasks correctly. In the context of a multiple baseline design, four autistic children worked on five discrimination tasks that their teachers reported were posing difficulty. Throughout the baseline condition the children evidenced very low levels of correct responding on all five tasks. In the subsequent treatment condition, when normal peers modeled correct responses, the autistic children's correct responding increased dramatically. In each case, the peer modeling procedure produced rapid achievement of the acquisition which was maintained after the peer models were removed. These results are discussed in relation to issues concerning observational learning and in relation to the implications for mainstreaming autistic children into normal classrooms. PMID:7216930
The development of posterior probability models in risk-based integrity modeling.
Thodi, Premkumar N; Khan, Faisal I; Haddara, Mahmoud R
2010-03-01
There is a need for accurate modeling of mechanisms causing material degradation of equipment in process installations, to ensure safety and reliability of the equipment. Degradation mechanisms are stochastic processes. They can be best described using risk-based approaches. Risk-based integrity assessment quantifies the level of risk to which the individual components are subjected and provides means to mitigate them in a safe and cost-effective manner. The uncertainty and variability in structural degradations can be best modeled by probability distributions. Prior probability models provide an initial description of the degradation mechanisms. As more inspection data become available, these prior probability models can be revised to obtain posterior probability models, which represent the current system and can be used to predict future failures. In this article, a rejection sampling-based Metropolis-Hastings (M-H) algorithm is used to develop posterior distributions. The M-H algorithm is a Markov chain Monte Carlo algorithm used to generate a sequence of posterior samples without actually knowing the normalizing constant. Ignoring the transient samples in the generated Markov chain, the steady-state samples are rejected or accepted based on an acceptance criterion. To validate the estimated parameters of posterior models, an analytical Laplace approximation method is used to compute the integrals involved in the posterior function. Results of the M-H algorithm and Laplace approximations are compared with conjugate pair estimations of known prior and likelihood combinations. The M-H algorithm provides better results and hence it is used for posterior development of the selected priors for corrosion and cracking.
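A minimal random-walk Metropolis-Hastings sketch, sampling from a density known only up to its normalizing constant; the standard-normal target here stands in for a corrosion or cracking posterior:

```python
import random

# Random-walk Metropolis-Hastings: propose a Gaussian step, accept with
# probability min(1, target(prop)/target(x)), discard burn-in (transient)
# samples. The target (log-density -x^2/2, i.e. a standard normal) is a
# stand-in; only the unnormalized density is ever evaluated.

import math

random.seed(0)

def log_target(x):
    return -0.5 * x * x   # unnormalized log posterior

def metropolis_hastings(n, step=1.0, x0=0.0, burn=1000):
    x, out = x0, []
    for i in range(n + burn):
        prop = x + random.gauss(0.0, step)
        delta = log_target(prop) - log_target(x)
        if delta >= 0 or random.random() < math.exp(delta):
            x = prop
        if i >= burn:          # keep only post-transient samples
            out.append(x)
    return out

samples = metropolis_hastings(20000)
mean = sum(samples) / len(samples)
print(f"posterior mean estimate: {mean:.2f}")
```

For the standard-normal target the chain's mean should hover near 0; validating against a case with a known answer mirrors the article's check against conjugate-pair estimates.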
Fixation probability in a two-locus intersexual selection model.
Durand, Guillermo; Lessard, Sabin
2016-06-01
We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm.
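Fixation probabilities of this kind can be illustrated with a one-locus haploid Moran model, far simpler than the paper's two-locus intersexual selection system; the simulated estimate can be checked against the known closed form:

```python
import random

# Toy Moran model: a single mutant with relative fitness w in a population
# of size N. Each step, the reproducing individual is chosen by
# fitness-weighted sampling and the dying individual uniformly. We estimate
# the probability of ultimate fixation and compare it with the exact
# formula rho = (1 - 1/w) / (1 - 1/w^N).

random.seed(42)

def fixation_prob(N=20, w=1.1, trials=2000):
    fixed = 0
    for _ in range(trials):
        k = 1                               # mutant copies
        while 0 < k < N:
            birth_mut = random.random() < k * w / (k * w + (N - k))
            death_mut = random.random() < k / N
            k += birth_mut - death_mut
        fixed += (k == N)
    return fixed / trials

est = fixation_prob()
exact = (1 - 1 / 1.1) / (1 - 1 / 1.1 ** 20)
print(f"simulated {est:.3f} vs exact {exact:.3f}")
```

Even a 10% fitness advantage gives a fixation probability well above the neutral value 1/N = 0.05, the kind of boost over the initial frequency that the paper establishes for ornament and preference alleles under intersexual selection.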
A joint-probability approach to crash prediction models.
Pei, Xin; Wong, S C; Sze, N N
2011-05-01
Many road safety researchers have used crash prediction models, such as Poisson and negative binomial regression models, to investigate the associations between crash occurrence and explanatory factors. Typically, they have attempted to separately model the crash frequencies of different severity levels. However, this method may suffer from serious correlations between the model estimates among different levels of crash severity. Despite efforts to improve the statistical fit of crash prediction models by modifying the data structure and model estimation method, little work has addressed the appropriate interpretation of the effects of explanatory factors on crash occurrence among different levels of crash severity. In this paper, a joint probability model is developed to integrate the predictions of both crash occurrence and crash severity into a single framework. A full Bayesian method based on the Markov chain Monte Carlo (MCMC) approach is applied to estimate the effects of explanatory factors. As an illustration of the appropriateness of the proposed joint probability model, a case study is conducted on crash risk at signalized intersections in Hong Kong. The results of the case study indicate that the proposed model demonstrates a good statistical fit and provides an appropriate analysis of the influences of explanatory factors.
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristic curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics.
A propagation model of computer virus with nonlinear vaccination probability
NASA Astrophysics Data System (ADS)
Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi
2014-01-01
This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
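A compartmental model with a nonlinear vaccination probability can be sketched by simple Euler integration; the saturating form theta(I) = k·I/(1 + a·I) and all rates below are assumptions for illustration, not the paper's exact system:

```python
# Toy susceptible/infected model with a nonlinear vaccination rate theta(I)
# that saturates as infection prevalence grows. V = 1 - S - I is the
# vaccinated fraction. Parameter values are illustrative placeholders.

def simulate(beta=0.5, gamma=0.1, k=0.3, a=5.0, dt=0.01, steps=50000):
    S, I = 0.99, 0.01          # susceptible / infected fractions
    for _ in range(steps):
        theta = k * I / (1 + a * I)          # nonlinear vaccination rate
        dS = -beta * S * I - theta * S + gamma * I
        dI = beta * S * I - gamma * I
        S += dt * dS
        I += dt * dI
    return S, I

S, I = simulate()
print(f"long-run susceptible {S:.3f}, infected {I:.3f}")
```

Sweeping `beta` (and hence the basic reproduction number) across its threshold switches the long-run state between the virus-free and viral equilibria, which is the qualitative behavior the paper proves globally.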
Probability Theory: The Logic of Science
NASA Astrophysics Data System (ADS)
Jaynes, E. T.; Bretthorst, G. Larry
2003-04-01
Foreword; Preface; Part I. Principles and Elementary Applications: 1. Plausible reasoning; 2. The quantitative rules; 3. Elementary sampling theory; 4. Elementary hypothesis testing; 5. Queer uses for probability theory; 6. Elementary parameter estimation; 7. The central, Gaussian or normal distribution; 8. Sufficiency, ancillarity, and all that; 9. Repetitive experiments, probability and frequency; 10. Physics of 'random experiments'; Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle; 12. Ignorance priors and transformation groups; 13. Decision theory: historical background; 14. Simple applications of decision theory; 15. Paradoxes of probability theory; 16. Orthodox methods: historical background; 17. Principles and pathology of orthodox statistics; 18. The Ap distribution and rule of succession; 19. Physical measurements; 20. Model comparison; 21. Outliers and robustness; 22. Introduction to communication theory; References; Appendix A. Other approaches to probability theory; Appendix B. Mathematical formalities and style; Appendix C. Convolutions and cumulants.
An Integrated Modeling Framework for Probable Maximum Precipitation and Flood
NASA Astrophysics Data System (ADS)
Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2015-12-01
With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the climate change effects on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at initial and lateral boundaries. A high resolution hydrological model, Distributed Hydrology-Soil-Vegetation Model, implemented at 90m resolution and calibrated by the U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF in the projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects of PMP and PMF.
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
On the probability summation model for laser-damage thresholds
NASA Astrophysics Data System (ADS)
Clark, Clifton D.; Buffington, Gavin D.
2016-01-01
This paper explores the probability summation model in an attempt to provide insight to the model's utility and ultimately its validity. The model is a statistical description of multiple-pulse (MP) damage trends. It computes the probability of n pulses causing damage from knowledge of the single-pulse dose-response curve. Recently, the model has been used to make a connection between the observed n trends in MP damage thresholds for short pulses (<10 μs) and experimental uncertainties, suggesting that the observed trend is an artifact of experimental methods. We will consider the correct application of the model in this case. We also apply this model to the spot-size dependence of short pulse damage thresholds, which has not been done previously. Our results predict that the damage threshold trends with respect to the irradiated area should be similar to the MP damage threshold trends, and that observed spot-size dependence for short pulses seems to display this trend, which cannot be accounted for by the thermal models.
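The probability summation rule itself is compact: if one pulse causes damage with probability p, n independent pulses cause damage with probability 1 − (1 − p)^n. A sketch with a toy log-logistic dose-response curve, not a fitted threshold model:

```python
import math

# Probability summation for multiple-pulse damage: P_n = 1 - (1 - p)^n,
# where p is the single-pulse dose-response probability. The log-logistic
# curve and its ED50/slope are invented for illustration.

def p_single(dose, ed50=1.0, slope=8.0):
    """Toy single-pulse dose-response curve (log-logistic)."""
    return 1.0 / (1.0 + (ed50 / dose) ** slope)

def p_multi(dose, n):
    return 1.0 - (1.0 - p_single(dose)) ** n

# more pulses raise the damage probability at the same per-pulse dose,
# which is the MP threshold-lowering trend the model describes
for n in (1, 10, 100):
    print(f"n={n:3d}: P(damage at dose 0.8) = {p_multi(0.8, n):.3f}")
```

Solving `p_multi(dose, n) = 0.5` for `dose` at increasing n reproduces the declining multiple-pulse ED50 trend that the paper compares against experimental uncertainty and spot-size effects.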
A model to assess dust explosion occurrence probability.
Hassan, Junaid; Khan, Faisal; Amyotte, Paul; Ferdous, Refaul
2014-03-15
Dust handling poses a potential explosion hazard in many industrial facilities. The consequences of a dust explosion are often severe and similar to a gas explosion; however, its occurrence is conditional to the presence of five elements: combustible dust, ignition source, oxidant, mixing and confinement. Dust explosion researchers have conducted experiments to study the characteristics of these elements and generate data on explosibility. These experiments are often costly but the generated data has a significant scope in estimating the probability of a dust explosion occurrence. This paper attempts to use existing information (experimental data) to develop a predictive model to assess the probability of a dust explosion occurrence in a given environment. The proposed model considers six key parameters of a dust explosion: dust particle diameter (PD), minimum ignition energy (MIE), minimum explosible concentration (MEC), minimum ignition temperature (MIT), limiting oxygen concentration (LOC) and explosion pressure (Pmax). A conditional probabilistic approach has been developed and embedded in the proposed model to generate a nomograph for assessing dust explosion occurrence. The generated nomograph provides a quick assessment technique to map the occurrence probability of a dust explosion for a given environment defined with the six parameters.
Opinion dynamics model with weighted influence: Exit probability and dynamics
NASA Astrophysics Data System (ADS)
Biswas, Soham; Sinha, Suman; Sen, Parongama
2013-08-01
We introduce a stochastic model of binary opinion dynamics in which the opinions are determined by the size of the neighboring domains. The exit probability here shows a step function behavior, indicating the existence of a separatrix distinguishing two different regions of basin of attraction. This behavior, in one dimension, is in contrast to other well known opinion dynamics models where no such behavior has been observed so far. The coarsening study of the model also yields novel exponent values. A lower value of persistence exponent is obtained in the present model, which involves stochastic dynamics, when compared to that in a similar type of model with deterministic dynamics. This apparently counterintuitive result is justified using further analysis. Based on these results, it is concluded that the proposed model belongs to a unique dynamical class.
Mortality Probability Model III and Simplified Acute Physiology Score II
Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams
2009-01-01
Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM₀) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IV-recal) [R² = 0.422], mortality probability model III at zero hours (MPM₀ III) [R² = 0.279], and simplified acute physiology score (SAPS II) [R² = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IV-recal, MPM₀ III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM₀ III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM₀ III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210
Quantum Probability -- A New Direction for Modeling in Cognitive Science
NASA Astrophysics Data System (ADS)
Roy, Sisir
2014-07-01
Human cognition is still a puzzling issue in research and its appropriate modeling. It depends on how the brain behaves at that particular instance and identifies and responds to a signal among myriads of noises that are present in the surroundings (called external noise) as well as in the neurons themselves (called internal noise). Thus it is not surprising to assume that the functionality consists of various uncertainties, possibly a mixture of aleatory and epistemic uncertainties. It is also possible that a complicated pathway consisting of both types of uncertainties in continuum play a major role in human cognition. For more than 200 years mathematicians and philosophers have been using probability theory to describe human cognition. Recently in several experiments with human subjects, violation of traditional probability theory has been clearly revealed in plenty of cases. Literature survey clearly suggests that classical probability theory fails to model human cognition beyond a certain limit. While the Bayesian approach may seem to be a promising candidate to this problem, the complete success story of Bayesian methodology is yet to be written. The major problem seems to be the presence of epistemic uncertainty and its effect on cognition at any given time. Moreover the stochasticity in the model arises due to the unknown path or trajectory (definite state of mind at each time point), a person is following. To this end a generalized version of probability theory borrowing ideas from quantum mechanics may be a plausible approach. A superposition state in quantum theory permits a person to be in an indefinite state at each point of time. Such an indefinite state allows all the states to have the potential to be expressed at each moment. Thus a superposition state appears to be able to represent better, the uncertainty, ambiguity or conflict experienced by a person at any moment demonstrating that mental states follow quantum mechanics during perception and
Modeling spatial variation in avian survival and residency probabilities
Saracco, James F.; Royle, J. Andrew; DeSante, David F.; Gardner, Beth
2010-01-01
The importance of understanding spatial variation in processes driving animal population dynamics is widely recognized. Yet little attention has been paid to spatial modeling of vital rates. Here we describe a hierarchical spatial autoregressive model to provide spatially explicit year-specific estimates of apparent survival (phi) and residency (pi) probabilities from capture-recapture data. We apply the model to data collected on a declining bird species, Wood Thrush (Hylocichla mustelina), as part of a broad-scale bird-banding network, the Monitoring Avian Productivity and Survivorship (MAPS) program. The Wood Thrush analysis showed variability in both phi and pi among years and across space. Spatial heterogeneity in residency probability was particularly striking, suggesting the importance of understanding the role of transients in local populations. We found broad-scale spatial patterning in Wood Thrush phi and pi that lend insight into population trends and can direct conservation and research. The spatial model developed here represents a significant advance over approaches to investigating spatial pattern in vital rates that aggregate data at coarse spatial scales and do not explicitly incorporate spatial information in the model. Further development and application of hierarchical capture-recapture models offers the opportunity to more fully investigate spatiotemporal variation in the processes that drive population changes.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
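The detection curves described above relate sampling intensity and target density to the probability of detection via logistic regression. As an illustrative sketch (the coefficient values below are placeholders, not the fitted estimates from the intertidal experiments):

```python
import math

def detection_probability(intensity, density, b0=-2.0, b1=0.8, b2=1.5):
    """Logistic detection curve: P(detect) as a function of sampling
    intensity (e.g., search effort) and target density.
    Coefficients b0, b1, b2 are illustrative placeholders."""
    logit = b0 + b1 * intensity + b2 * density
    return 1.0 / (1.0 + math.exp(-logit))

def false_negative_probability(intensity, density):
    """A false negative is the complement of detection."""
    return 1.0 - detection_probability(intensity, density)
```

Plotting such a curve against sampling intensity shows how much extra effort is needed to push the false-negative probability below a management threshold.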
Probability of a disease outbreak in stochastic multipatch epidemic models.
Lahodny, Glenn E; Allen, Linda J S
2013-07-01
Environmental heterogeneity, spatial connectivity, and movement of individuals play important roles in the spread of infectious diseases. To account for environmental differences that impact disease transmission, the spatial region is divided into patches according to risk of infection. A system of ordinary differential equations modeling spatial spread of disease among multiple patches is used to formulate two new stochastic models, a continuous-time Markov chain, and a system of stochastic differential equations. An estimate for the probability of disease extinction is computed by approximating the Markov chain model with a multitype branching process. Numerical examples illustrate some differences between the stochastic models and the deterministic model, important for prevention of disease outbreaks that depend on the location of infectious individuals, the risk of infection, and the movement of individuals.
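The branching-process approximation above yields a disease extinction probability. A minimal single-patch, single-type sketch (a simplification of the paper's multitype, multipatch setting; beta and gamma are generic transmission and recovery rates, not values from the source):

```python
def extinction_probability(beta, gamma, i0=1):
    """Single-type branching-process approximation: each infective
    transmits at rate beta and recovers at rate gamma.  The extinction
    probability per initial infective is min(1, gamma/beta); with i0
    independent initial infectives, extinction requires all i0 lineages
    to die out."""
    q = min(1.0, gamma / beta)
    return q ** i0

def outbreak_probability(beta, gamma, i0=1):
    """Probability of a major outbreak is the complement of extinction."""
    return 1.0 - extinction_probability(beta, gamma, i0)
```

In the multipatch case the same idea applies with patch-specific extinction probabilities q_i raised to the number of initial infectives in each patch, which is why outbreak probability depends on where the infectious individuals are located.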
Human Inferences about Sequences: A Minimal Transition Probability Model
2016-01-01
The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations include explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge. PMID:28030543
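The model above infers a time-varying transition-probability matrix with a single free parameter. A rough sketch of that idea, using exponentially forgetting ("leaky") transition counts on a binary sequence (the leak and prior values here are illustrative, not the paper's fitted parameter):

```python
import math

def transition_estimates(seq, leak=0.1, prior=1.0):
    """Estimate a 2x2 transition-probability matrix from a binary
    sequence, with exponential forgetting of old observations.
    counts[i][j] tracks (decayed) counts of transitions i -> j."""
    counts = [[prior, prior], [prior, prior]]
    for prev, nxt in zip(seq, seq[1:]):
        for i in range(2):
            for j in range(2):
                counts[i][j] *= (1.0 - leak)  # forget old evidence
        counts[prev][nxt] += 1.0
    return [[counts[i][j] / (counts[i][0] + counts[i][1]) for j in range(2)]
            for i in range(2)]

def surprise(p):
    """Shannon surprise (bits) of observing an event of probability p;
    a proxy for the mismatch/surprise signals discussed above."""
    return -math.log2(p)
```

On an alternating sequence such a learner assigns high probability to alternations, so a repetition then elicits large surprise, mirroring the repetition/alternation asymmetry the model explains.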
Bayesian failure probability model sensitivity study. Final report
Not Available
1986-05-30
The Office of the Manager, National Communications System (OMNCS) has developed a system-level approach for estimating the effects of High-Altitude Electromagnetic Pulse (HEMP) on the connectivity of telecommunications networks. This approach incorporates a Bayesian statistical model which estimates the HEMP-induced failure probabilities of telecommunications switches and transmission facilities. The purpose of this analysis is to address the sensitivity of the Bayesian model. This is done by systematically varying two model input parameters--the number of observations, and the equipment failure rates. Throughout the study, a non-informative prior distribution is used. The sensitivity of the Bayesian model to the noninformative prior distribution is investigated from a theoretical mathematical perspective.
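The sensitivity study varies the number of observations and the failure rates under a non-informative prior. A minimal conjugate Beta-binomial sketch of such a failure-probability update (a Jeffreys Beta(0.5, 0.5) prior stands in here for the report's non-informative prior, whose exact form is not given in this abstract):

```python
def posterior_failure_probability(failures, trials, a0=0.5, b0=0.5):
    """Beta-binomial update of an equipment failure probability.
    Returns the posterior mean and the posterior Beta parameters."""
    a = a0 + failures
    b = b0 + (trials - failures)
    return a / (a + b), (a, b)

# Sensitivity to the number of observations: the same observed failure
# rate (20%) pulls the posterior mean closer to the data as n grows.
mean_small, _ = posterior_failure_probability(2, 10)
mean_large, _ = posterior_failure_probability(20, 100)
```

Varying `trials` while holding the observed rate fixed is a direct way to probe how strongly the non-informative prior influences the estimate.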
Defining prior probabilities for hydrologic model structures in UK catchments
NASA Astrophysics Data System (ADS)
Clements, Michiel; Pianosi, Francesca; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Booij, Martijn
2014-05-01
The selection of a model structure is an essential part of the hydrological modelling process. Recently, flexible modelling frameworks have been proposed in which hybrid model structures can be obtained by mixing together components from a suite of existing hydrological models. When sufficient and reliable data are available, this framework can be successfully utilised to identify the most appropriate structure, and associated optimal parameters, for a given catchment by maximizing the different models' ability to reproduce the desired range of flow behaviour. In this study, we use a flexible modelling framework to address a rather different question: can the most appropriate model structure be inferred a priori (i.e., without using flow observations) from catchment characteristics like topography, geology, land use, and climate? Furthermore, and more generally, can we define a priori probabilities of different model structures as a function of catchment characteristics? To address these questions we propose a two-step methodology and demonstrate it by application to a national database of meteo-hydrological data and catchment characteristics for 89 catchments across the UK. In the first step, each catchment is associated with its most appropriate model structure. We consider six possible structures obtained by combining two soil moisture accounting components widely used in the UK (Penman and PDM) and three different flow routing modules (linear, parallel, leaky). We measure the suitability of a model structure by the probability of finding behavioural parameterizations for that model structure when applied to the catchment under study. In the second step, we use regression analysis to establish a relation between selected model structures and the catchment characteristics. Specifically, we apply Classification And Regression Trees (CART) and show that three catchment characteristics, the Base Flow Index, the Runoff Coefficient and the mean Drainage Path Slope, can be used
[Estimating survival of thrushes: modeling capture-recapture probabilities].
Burskiî, O V
2011-01-01
The stochastic modeling technique serves as a way to correctly separate the "return rate" of marked animals into survival rate (phi) and capture probability (p). The method can readily be used with the program MARK, freely distributed through the Internet (Cooch, White, 2009). Input data for the program consist of "capture histories" of marked animals: strings of ones and zeros indicating presence or absence of the individual among captures (or sightings) along a set of consecutive recapture occasions (e.g., years). The probability of any history is a product of the binomial probabilities phi, p or their complements (1 - phi) and (1 - p) for each year of observation of the individual. Assigning certain values to the parameters phi and p, one can predict the composition of all individual histories in the sample and assess the likelihood of the prediction. The survival parameters for different occasions and cohorts of individuals can be set either equal or different, and the recapture parameters can likewise be set in different ways. The parameters can also be constrained, according to the hypothesis being tested, in the form of a specific model. Within the specified constraints, the program searches for the parameter values that describe the observed composition of histories with maximum likelihood. It computes the parameter estimates along with confidence limits and the overall model likelihood. There is a set of tools for testing the model's goodness-of-fit under the assumption of equal survival rates among individuals and independence of their fates. Other tools offer a proper selection among a possible variety of models, providing the best balance between detail and precision in describing reality. The method was applied to a 20-yr series of recapture and resighting data on 4 thrush species (genera Turdus, Zoothera) breeding in the Yenisei River floodplain within the middle taiga subzone. The capture probabilities were quite independent of fluctuations in observational effort
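The product-of-binomial-terms likelihood described above can be made concrete. A minimal sketch of the likelihood of a single Cormack-Jolly-Seber capture history with constant phi and p (conditioning on first capture, as MARK does; this is a didactic illustration, not MARK's implementation):

```python
def history_likelihood(history, phi, p):
    """Likelihood of one capture history (list of 0/1), conditional on
    first capture, with constant survival phi and capture prob p."""
    first = history.index(1)
    T = len(history)
    # chi[t]: probability of never being seen after occasion t-1,
    # built by the standard backward recursion.
    chi = [0.0] * (T + 1)
    chi[T] = 1.0
    for t in range(T - 1, first, -1):
        chi[t] = (1.0 - phi) + phi * (1.0 - p) * chi[t + 1]
    last = max(i for i, h in enumerate(history) if h == 1)
    like = 1.0
    for t in range(first + 1, last + 1):
        # survived the interval, then captured (p) or missed (1 - p)
        like *= phi * (p if history[t] else (1.0 - p))
    return like * chi[last + 1]
```

For example, with phi = 0.8 and p = 0.5 the two possible histories after a first capture, [1,1] and [1,0], have likelihoods 0.4 and 0.6, which sum to 1 as they must.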
Predictions of Geospace Drivers By the Probability Distribution Function Model
NASA Astrophysics Data System (ADS)
Bussy-Virat, C.; Ridley, A. J.
2014-12-01
Geospace drivers like the solar wind speed, interplanetary magnetic field (IMF), and solar irradiance have a strong influence on the density of the thermosphere and the near-Earth space environment. This has important consequences for the drag on satellites in low orbit and therefore for their position. One of the basic problems with space weather prediction is that these drivers can only be measured about one hour before they affect the environment. In order to allow for adequate planning by some members of the commercial, military, or civilian communities, reliable long-term space weather forecasts are needed. This study presents a model for predicting geospace drivers up to five days in advance. This model uses the same general technique to predict the solar wind speed, the three components of the IMF, and the solar irradiance F10.7. For instance, it uses probability distribution functions (PDFs) to relate the current solar wind speed and slope to the future solar wind speed, as well as the solar wind speed to the solar wind speed one solar rotation in the future. The PDF Model has been compared to other models for predictions of the speed. It has been found to be better than using the current solar wind speed (i.e., persistence), and better than the Wang-Sheeley-Arge Model for prediction horizons of 24 hours. Once the drivers are predicted, and the uncertainty on the drivers is specified, the density in the thermosphere can be derived using various models of the thermosphere, such as the Global Ionosphere Thermosphere Model. In addition, uncertainties on the densities can be estimated, based on ensembles of simulations. From the density and uncertainty predictions, satellite positions, as well as the uncertainty in those positions, can be estimated. These can assist operators in determining the probability of collisions between objects in low Earth orbit.
Can quantum probability provide a new direction for cognitive modeling?
Pothos, Emmanuel M; Busemeyer, Jerome R
2013-06-01
Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
Panic attacks during sleep: a hyperventilation-probability model.
Ley, R
1988-09-01
Panic attacks during sleep are analysed in terms of a hyperventilation theory of panic disorder. The theory assumes that panic attacks during sleep are a manifestation of severe chronic hyperventilation, a dysfunctional state in which renal compensation has led to a relatively steady state of diminished bicarbonate. Reductions in respiration during deep non-REM sleep lead to respiratory acidosis which triggers hyperventilatory hypocapnea and subsequent panic. A probability model designed to predict when during sleep panic attacks are likely to occur is supported by relevant data from studies of sleep and panic attacks. Implications for treatment are discussed.
Jamming probabilities for a vacancy in the dimer model.
Poghosyan, V S; Priezzhev, V B; Ruelle, P
2008-04-01
Following the recent proposal of J. Bouttier [Phys. Rev. E 76, 041140 (2007)], we study analytically the mobility properties of a single vacancy in the close-packed dimer model on the square lattice. Using the spanning web representation, we find determinantal expressions for various observable quantities. In the limiting case of large lattices, they can be reduced to the calculation of Toeplitz determinants and minors thereof. The probability for the vacancy to be strictly jammed and other diffusion characteristics are computed exactly.
Estimating transition probabilities among everglades wetland communities using multistate models
Hotaling, A.S.; Martin, J.; Kitchens, W.M.
2009-01-01
In this study we were able to provide the first estimates of transition probabilities of wet prairie and slough vegetative communities in Water Conservation Area 3A (WCA3A) of the Florida Everglades and to identify the hydrologic variables that determine these transitions. These estimates can be used in management models aimed at restoring proportions of wet prairie and slough habitats to historical levels in the Everglades. To determine what was driving the transitions between wet prairie and slough communities we evaluated three hypotheses: seasonality, impoundment, and wet and dry year cycles, using likelihood-based multistate models to determine the main driver of wet prairie conversion in WCA3A. The most parsimonious model included the effect of wet and dry year cycles on vegetative community conversions. Several ecologists have noted wet prairie conversion in southern WCA3A, but these are the first estimates of transition probabilities among these community types. In addition to being useful for management of the Everglades, we believe that our framework can be used to address management questions in other ecosystems. © 2009 The Society of Wetland Scientists.
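Estimated transition probabilities like those above can feed directly into projection models. A minimal sketch, with made-up year-specific matrices (not the fitted WCA3A estimates), propagating community proportions through a sequence of wet and dry years:

```python
def evolve(state, matrices, years):
    """Propagate community proportions through year-specific transition
    matrices.  States: index 0 = wet prairie, 1 = slough.
    matrices[year][i][j] = Pr(community i converts to j that year)."""
    for year in years:
        P = matrices[year]
        state = [sum(state[i] * P[i][j] for i in range(2)) for j in range(2)]
    return state

# Illustrative matrices: wet years favor conversion to slough,
# dry years favor wet prairie (placeholder values).
matrices = {
    "wet": [[0.70, 0.30], [0.10, 0.90]],
    "dry": [[0.95, 0.05], [0.30, 0.70]],
}
proportions = evolve([0.5, 0.5], matrices, ["wet", "wet", "dry"])
```

Running such a projection under alternative hydrologic scenarios is one way to compare management options against historical habitat proportions.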
Naive Probability: A Mental Model Theory of Extensional Reasoning.
ERIC Educational Resources Information Center
Johnson-Laird, P. N.; Legrenzi, Paolo; Girotto, Vittorio; Legrenzi, Maria Sonino; Caverni, Jean-Paul
1999-01-01
Outlines a theory of naive probability in which individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an "extensional" way. The theory accommodates reasoning based on numerical premises, and explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem.…
Modeling evolution using the probability of fixation: history and implications.
McCandlish, David M; Stoltzfus, Arlin
2014-09-01
Many models of evolution calculate the rate of evolution by multiplying the rate at which new mutations originate within a population by a probability of fixation. Here we review the historical origins, contemporary applications, and evolutionary implications of these "origin-fixation" models, which are widely used in evolutionary genetics, molecular evolution, and phylogenetics. Origin-fixation models were first introduced in 1969, in association with an emerging view of "molecular" evolution. Early origin-fixation models were used to calculate an instantaneous rate of evolution across a large number of independently evolving loci; in the 1980s and 1990s, a second wave of origin-fixation models emerged to address a sequence of fixation events at a single locus. Although origin-fixation models have been applied to a broad array of problems in contemporary evolutionary research, their rise in popularity has not been accompanied by an increased appreciation of their restrictive assumptions or their distinctive implications. We argue that origin-fixation models constitute a coherent theory of mutation-limited evolution that contrasts sharply with theories of evolution that rely on the presence of standing genetic variation. A major unsolved question in evolutionary biology is the degree to which these models provide an accurate approximation of evolution in natural populations.
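The origin-fixation rate described above (mutation origination rate times fixation probability) is easy to state concretely. A sketch using the standard diffusion approximation for the fixation probability of a new mutant (a common choice in this literature, not a formula taken from this review):

```python
import math

def fixation_probability(s, N):
    """Diffusion approximation (Kimura) for the fixation probability of
    a new mutation with selection coefficient s in a diploid population
    of size N; reduces to 1/(2N) for neutral mutations."""
    if abs(s) < 1e-12:
        return 1.0 / (2.0 * N)
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-4.0 * N * s))

def origin_fixation_rate(mu, s, N):
    """Origin-fixation substitution rate: rate of new mutations (2*N*mu
    copies per generation) times their probability of fixation.
    For neutral mutations this collapses to mu, the classic result."""
    return 2.0 * N * mu * fixation_probability(s, N)
```

The neutral case makes the "mutation-limited" character of these models visible: the substitution rate equals the mutation rate, independent of population size.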
Nahorniak, Matthew; Larsen, David P; Volk, Carol; Jordan, Chris E
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design based statistical analysis tools are appropriate for seamless integration of sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model based tools may require, for ensuring unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite numerous method specific tools available to properly account for sampling design, too often in the analysis of ecological data, sample design is ignored and consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition, to the set of tools available for researchers to properly account for sampling design in model based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model based analysis tools--linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
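The inverse probability bootstrapping (IPB) idea above can be sketched compactly: resample units with probability proportional to the inverse of their inclusion probability, so the resamples behave like equal-probability samples. A minimal stdlib-only illustration (not the authors' implementation):

```python
import random

def inverse_probability_bootstrap(data, incl_probs, n_boot, rng=None):
    """Draw n_boot resamples (each of the original sample size) from an
    unequal-probability sample, weighting each unit by 1/incl_prob."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    weights = [1.0 / p for p in incl_probs]
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(acc / total)  # cumulative normalized weights

    def draw_one():
        u = rng.random()
        for i, c in enumerate(cum):
            if u <= c:
                return data[i]
        return data[-1]

    n = len(data)
    return [[draw_one() for _ in range(n)] for _ in range(n_boot)]
```

A model-based tool (linear regression, quantile regression, a regression tree) is then fitted to each resample and the estimates are aggregated, removing the bias that ignoring inclusion probabilities would introduce.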
Recent Advances in Model-Assisted Probability of Detection
NASA Technical Reports Server (NTRS)
Thompson, R. Bruce; Brasche, Lisa J.; Lindgren, Eric; Swindell, Paul; Winfree, William P.
2009-01-01
The increased role played by probability of detection (POD) in structural integrity programs, combined with the significant time and cost associated with the purely empirical determination of POD, provides motivation for alternate means to estimate this important metric of NDE techniques. One approach to make the process of POD estimation more efficient is to complement limited empirical experiments with information from physics-based models of the inspection process or controlled laboratory experiments. The Model-Assisted Probability of Detection (MAPOD) Working Group was formed by the Air Force Research Laboratory, the FAA Technical Center, and NASA to explore these possibilities. Since the 2004 inception of the MAPOD Working Group, 11 meetings have been held in conjunction with major NDE conferences. This paper will review the accomplishments of this group, which includes over 90 members from around the world. Included will be a discussion of strategies developed to combine physics-based and empirical understanding, draft protocols that have been developed to guide application of the strategies, and demonstrations that have been or are being carried out in a number of countries. The talk will conclude with a discussion of future directions, which will include documentation of benefits via case studies, development of formal protocols for engineering practice, as well as a number of specific technical issues.
Probability model for estimating colorectal polyp progression rates.
Gopalappa, Chaitra; Aydogan-Cremaschi, Selen; Das, Tapas K; Orcun, Seza
2011-03-01
According to the American Cancer Society, colorectal cancer (CRC) is the third most common cause of cancer related deaths in the United States. Experts estimate that about 85% of CRCs begin as precancerous polyps, early detection and treatment of which can significantly reduce the risk of CRC. Hence, it is imperative to develop population-wide intervention strategies for early detection of polyps. Development of such strategies requires precise values of population-specific rates of incidence of polyps and their progression to the cancerous stage. There has been a considerable amount of research in recent years on developing screening based CRC intervention strategies. However, these are not supported by population-specific mathematical estimates of progression rates. This paper addresses this need by developing a probability model that estimates polyp progression rates considering race and family history of CRC; note that it is ethically infeasible to obtain polyp progression rates through clinical trials. We use the estimated rates to simulate the progression of polyps in the population of the State of Indiana, and also in the population of a clinical trial conducted in the State of Minnesota, which was obtained from the literature. The results from the simulations are used to validate the probability model.
Benassi, Marcello; Strigari, Lidia
2016-01-01
An overview of radiotherapy (RT)-induced normal tissue complication probability (NTCP) models is presented. NTCP models based on empirical and mechanistic approaches, proposed over time to describe specific radiation-induced late effects of conventional RT, are reviewed with particular emphasis on their basic assumptions, the related mathematical formulation, and their weak and strong points. PMID:28044088
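Among the empirical NTCP models such reviews cover, the Lyman-Kutcher-Burman (LKB) probit form is a standard example. A minimal sketch, with TD50 (dose for 50% complication probability) and slope parameter m supplied by the user rather than taken from this source:

```python
import math

def lkb_ntcp(eud, td50, m):
    """LKB NTCP model: normal CDF of t = (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def geud(doses, volumes, a):
    """Generalized equivalent uniform dose for a DVH given as parallel
    lists of bin doses and fractional volumes (volumes sum to 1)."""
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)
```

By construction the model returns 0.5 when the equivalent uniform dose equals TD50, and m controls how steeply the complication probability rises around that point.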
Probability of detection models for eddy current NDE methods
Rajesh, S. N.
1993-04-30
The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low and complex defect shapes can be handled very easily. The results are also operator independent.
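POD curves like those computed above are conventionally summarized by a log-logistic curve in flaw size. As an illustrative sketch (the coefficients below are placeholders, not outputs of the finite element model):

```python
import math

def pod(a, b0=-6.0, b1=4.0):
    """Log-logistic POD curve: POD(a) = logistic(b0 + b1 * ln a),
    with a the flaw size.  b0, b1 are illustrative coefficients."""
    x = b0 + b1 * math.log(a)
    return 1.0 / (1.0 + math.exp(-x))

def a90(b0=-6.0, b1=4.0):
    """Flaw size detected with 90% probability, obtained by inverting
    the curve: solve b0 + b1 * ln a = logit(0.9)."""
    return math.exp((math.log(0.9 / 0.1) - b0) / b1)
```

Quantities like a90 (and a90/95 once confidence bounds are added) are the metrics typically reported to certify an inspection procedure.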
A count probability cookbook: Spurious effects and the scaling model
NASA Technical Reports Server (NTRS)
Colombi, S.; Bouchet, F. R.; Schaeffer, R.
1995-01-01
We study the errors brought by finite volume effects and dilution effects on the practical determination of the count probability distribution function P_N(n,l), which is the probability of having N objects in a cell of volume l^3 for a set of average number density n. Dilution effects are particularly relevant to the so-called sparse sampling strategy. This work is mainly done in the framework of the Balian & Schaeffer scaling model, which assumes that the Q-body correlation functions obey the scaling relation xi_Q(lambda r_1, ..., lambda r_Q) = lambda^(-(Q-1) gamma) xi_Q(r_1, ..., r_Q). We use three synthetic samples as references to perform our analysis: a fractal generated by a Rayleigh-Levy random walk with approximately 3 x 10^4 objects, a sample dominated by a spherical power-law cluster with approximately 3 x 10^4 objects, and a cold dark matter (CDM) universe involving approximately 3 x 10^5 matter particles.
Detecting Gustatory–Olfactory Flavor Mixtures: Models of Probability Summation
Veldhuizen, Maria G.; Shepard, Timothy G.; Shavit, Adam Y.
2012-01-01
Odorants and flavorants typically contain many components. It is generally easier to detect multicomponent stimuli than to detect a single component, through either neural integration or probability summation (PS) (or both). PS assumes that the sensory effects of 2 (or more) stimulus components (e.g., gustatory and olfactory components of a flavorant) are detected in statistically independent channels, that each channel makes a separate decision whether a component is detected, and that the behavioral response depends solely on the separate decisions. Models of PS traditionally assume high thresholds for detecting each component, noise being irrelevant. The core assumptions may be adapted, however, to signal-detection theory, where noise limits detection. The present article derives predictions of high-threshold and signal-detection models of independent-decision PS in detecting gustatory–olfactory flavorants, comparing predictions in yes/no and 2-alternative forced-choice tasks using blocked and intermixed stimulus designs. The models also extend to measures of response times to suprathreshold flavorants. Predictions derived from high-threshold and signal-detection models differ markedly. Available empirical evidence on gustatory–olfactory flavor detection suggests that neither the high-threshold nor the signal-detection versions of PS can readily account for the results, which likely reflect neural integration in the flavor system. PMID:22075720
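The independent-decision probability summation (PS) idea above has a simple high-threshold form: the mixture is detected if either channel detects its component. A minimal sketch (the guessing correction is an illustrative addition in the high-threshold tradition, not this paper's full signal-detection treatment):

```python
def ps_high_threshold(p_taste, p_smell):
    """High-threshold probability summation for a gustatory-olfactory
    mixture: detect if either independent channel detects."""
    return p_taste + p_smell - p_taste * p_smell

def observed_rate(p_true, g):
    """Observed yes-rate when non-detections produce a guess with
    probability g (classic high-threshold guessing correction)."""
    return p_true + (1.0 - p_true) * g
```

The PS prediction always exceeds the better single component but never exceeds their sum, which is exactly the window experimenters check when asking whether the data instead require neural integration.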
Louwe, R. J. W.; Wendling, M.; Herk, M. B. van; Mijnheer, B. J.
2007-04-15
Irradiation of the heart is one of the major concerns during radiotherapy of breast cancer. Three-dimensional (3D) treatment planning would therefore be useful but cannot always be performed for left-sided breast treatments, because CT data may not be available. However, even if 3D dose calculations are available and an estimate of the normal tissue damage can be made, uncertainties in patient positioning may significantly influence the heart dose during treatment. Therefore, 3D reconstruction of the actual heart dose during breast cancer treatment using electronic portal imaging device (EPID) dosimetry has been investigated. A previously described method to reconstruct the dose in the patient from treatment portal images at the radiological midsurface was used in combination with a simple geometrical model of the irradiated heart volume to enable calculation of dose-volume histograms (DVHs), to independently verify this aspect of the treatment without using 3D data from a planning CT scan. To investigate the accuracy of our method, the DVHs obtained with full 3D treatment planning system (TPS) calculations and those obtained after resampling the TPS dose in the radiological midsurface were compared for fifteen breast cancer patients for whom CT data were available. In addition, EPID dosimetry as well as 3D dose calculations using our TPS, film dosimetry, and ionization chamber measurements were performed in an anthropomorphic phantom. It was found that the dose reconstructed using EPID dosimetry and the dose calculated with the TPS agreed within 1.5% in the lung/heart region. The dose-volume histograms obtained with EPID dosimetry were used to estimate the normal tissue complication probability (NTCP) for late excess cardiac mortality. Although the accuracy of these NTCP calculations might be limited due to the uncertainty in the NTCP model, in combination with our portal dosimetry approach it allows incorporation of the actual heart dose. For the anthropomorphic
Rianthavorn, Pornpimol; Tangngamsakul, Onjira
2016-11-01
We evaluated risk factors and assessed predicted probabilities for grade III or higher vesicoureteral reflux (dilating reflux) in children with a first simple febrile urinary tract infection and normal renal and bladder ultrasound. Data for 167 children 2 to 72 months old with a first febrile urinary tract infection and normal ultrasound were compared between those who had dilating vesicoureteral reflux (12 patients, 7.2%) and those who did not. Exclusion criteria consisted of history of prenatal hydronephrosis or familial reflux and complicated urinary tract infection. The logistic regression model was used to identify independent variables associated with dilating reflux. Predicted probabilities for dilating reflux were assessed. Patient age and prevalence of non-Escherichia coli bacteria were greater in children who had dilating reflux compared to those who did not (p = 0.02 and p = 0.004, respectively). Gender distribution was similar between the 2 groups (p = 0.08). In multivariate analysis older age and non-E. coli bacteria independently predicted dilating reflux, with odds ratios of 1.04 (95% CI 1.01-1.07, p = 0.02) and 3.76 (95% CI 1.05-13.39, p = 0.04), respectively. The impact of non-E. coli bacteria on predicted probabilities of dilating reflux increased with patient age. We support the concept of selective voiding cystourethrogram in children with a first simple febrile urinary tract infection and normal ultrasound. Voiding cystourethrogram should be considered in children with late onset urinary tract infection due to non-E. coli bacteria since they are at risk for dilating reflux even if the ultrasound is normal. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
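The reported odds ratios (1.04 per month of age, 3.76 for non-E. coli infection) translate into predicted probabilities through the logistic model. A minimal sketch of that calculation follows; the intercept is a hypothetical placeholder, since the abstract does not report it.

```python
import math

def predicted_probability(age_months, non_ecoli, intercept=-4.5):
    """Predicted probability of dilating reflux from a logistic model.

    Log-odds coefficients are derived from the odds ratios in the
    abstract (OR 1.04 per month of age, OR 3.76 for non-E. coli);
    the intercept is an illustrative assumption, not a fitted value.
    """
    log_odds = (intercept
                + math.log(1.04) * age_months
                + math.log(3.76) * (1 if non_ecoli else 0))
    return 1.0 / (1.0 + math.exp(-log_odds))

# The modelled impact of non-E. coli bacteria grows with patient age,
# as the abstract notes:
for age in (6, 36, 72):
    gap = predicted_probability(age, True) - predicted_probability(age, False)
    print(f"age {age} mo: risk gap {gap:.3f}")
```

Because the logistic curve is steeper near its midpoint, a fixed odds ratio produces a larger absolute probability gap at higher baseline risk, which is one way the age-dependent impact in the abstract can arise.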
Low-probability flood risk modeling for New York City.
Aerts, Jeroen C J H; Lin, Ning; Botzen, Wouter; Emanuel, Kerry; de Moel, Hans
2013-05-01
The devastating impact of Hurricane Sandy (2012) showed once again that New York City (NYC) is one of the most vulnerable cities to coastal flooding around the globe. The low-lying areas in NYC can be flooded by nor'easter storms and North Atlantic hurricanes. The few studies that have estimated potential flood damage for NYC base their damage estimates on only a single, or a few, possible flood events. The objective of this study is to assess the full distribution of hurricane flood risk in NYC. This is done by calculating potential flood damage with a flood damage model that uses many possible storms and surge heights as input. These storms are representative of the low-probability/high-impact flood hazard faced by the city. Exceedance probability-loss curves are constructed under different assumptions about the severity of flood damage. The estimated flood damage to buildings for NYC is between US$59 and 129 million/year. The damage caused by a 1/100-year storm surge is within a range of US$2 bn-5 bn, while this is between US$5 bn and 11 bn for a 1/500-year storm surge. An analysis of flood risk in each of the five boroughs of NYC finds that Brooklyn and Queens are the most vulnerable to flooding. This study also examines several uncertainties in the various steps of the risk analysis, which result in variations in the flood damage estimates. These uncertainties include: the interpolation of flood depths; the use of different flood damage curves; and the influence of the spectra of characteristics of the simulated hurricanes. © 2013 Society for Risk Analysis.
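An exceedance probability-loss curve of the kind constructed in this study can be integrated to give an expected annual damage figure. The sketch below shows the standard trapezoidal integration; the curve points are hypothetical illustrations loosely in the range of the reported surge damages, not the study's data.

```python
def expected_annual_damage(curve):
    """Trapezoidal integration of an exceedance probability-loss curve.

    `curve` is a list of (annual exceedance probability, damage) pairs,
    sorted by decreasing probability.  Returns expected annual damage
    in the same units as the damages.
    """
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(curve, curve[1:]):
        ead += (p1 - p2) * (d1 + d2) / 2.0
    return ead

# Hypothetical points in US$ billions (the 1/100- and 1/500-year values
# are roughly within the ranges the abstract reports):
curve = [(0.1, 0.0), (0.01, 3.5), (0.002, 8.0), (0.0005, 12.0)]
print(f"EAD ~ US${expected_annual_damage(curve):.3f} bn/year")
```

The integral is dominated by the frequent, smaller events, which is why the expected annual damage is far below the 1/100-year loss.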
High-resolution urban flood modelling - a joint probability approach
NASA Astrophysics Data System (ADS)
Hartnett, Michael; Olbert, Agnieszka; Nash, Stephen
2017-04-01
(Divoky et al., 2005). Nevertheless, such events occur, and in Ireland alone there are several cases of serious damage due to flooding resulting from a combination of high sea water levels and river flows driven by the same meteorological conditions (e.g. Olbert et al. 2015). The November 2009 fluvial-coastal flooding of Cork City, which brought a €100m loss, was one such incident. This event was used by Olbert et al. (2015) to determine the processes controlling urban flooding and is further explored in this study to elaborate on coastal and fluvial flood mechanisms and their roles in controlling water levels. The objective of this research is to develop a methodology to assess the combined effect of multiple-source flooding on flood probability and severity in urban areas, and to establish a set of conditions that dictate urban flooding due to extreme climatic events. These conditions broadly combine physical flood drivers (such as coastal and fluvial processes), their mechanisms, and thresholds defining flood severity. The two main physical processes controlling urban flooding, high sea water levels (coastal flooding) and high river flows (fluvial flooding), and the threshold values at which flooding is likely to occur, are considered in this study. The contributions of coastal and fluvial drivers to flooding, and their impacts, are assessed in a two-step process. The first step involves frequency analysis and extreme value statistical modelling of storm surges, tides and river flows, and ultimately the application of the joint probability method to estimate joint exceedance return periods for combinations of surge, tide and river flow. In the second step, a numerical model of Cork Harbour, MSN_Flood, comprising a cascade of four nested high-resolution models, is used to simulate flood inundation under numerous hypothetical coastal and fluvial flood scenarios. The risk of flooding is quantified based on a range of physical aspects such as the extent and depth of inundation (Apel et al
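The first step above applies a joint probability method to surge, tide and river flow. A minimal sketch of the joint exceedance return period follows; it assumes independence (or a crude dependence inflation factor), whereas the study fits extreme-value models and a proper joint analysis of the correlated drivers.

```python
def joint_return_period(t_surge, t_flow, dependence=1.0):
    """Joint exceedance return period (years) for high surge and high
    river flow.  With dependence=1 the drivers are treated as
    independent, so the annual joint exceedance probability is the
    product of the marginal probabilities; dependence > 1 crudely
    inflates that probability, reflecting that one storm often drives
    both.  A sketch only, not the study's method.
    """
    p_joint = dependence / (t_surge * t_flow)
    return 1.0 / p_joint

# e.g. a 20-year surge coinciding with a 10-year river flow:
print(joint_return_period(20, 10))        # independent drivers
print(joint_return_period(20, 10, 25.0))  # strongly dependent drivers
```

The comparison shows why dependence matters: meteorologically linked drivers can make a compound event orders of magnitude more frequent than the independence assumption suggests.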
Effectiveness of Incorporating Adversary Probability Perception Modeling in Security Games
2015-01-30
security game (SSG) algorithms. Given recent work on human decision-making, we adjust the existing subjective utility function to account for ... data from previous security game experiments with human subjects. Our results show the incorporation of probability perceptions into the SUQR can ... provide improvements in the ability to predict probabilities of attack in certain games.
A Probability Model of Accuracy in Deception Detection Experiments.
ERIC Educational Resources Information Center
Park, Hee Sun; Levine, Timothy R.
2001-01-01
Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…
Estimation of State Transition Probabilities: A Neural Network Model
NASA Astrophysics Data System (ADS)
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
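The abstract's algorithm estimates transition probabilities from the state prediction error. A minimal sketch of one such rule is a delta-rule update, which is an assumption about the exact form; conveniently, it keeps each row of the transition matrix normalized, because the per-step corrections sum to zero across successor states.

```python
import random

def learn_transition_matrix(seq, n_states, alpha=0.02):
    """Online estimate of P(s'|s) from a state sequence, driven by the
    state prediction error (a simple delta rule; the paper's exact
    update is not reproduced here).  Rows stay normalized because the
    targets sum to one, as do the current row probabilities.
    """
    P = [[1.0 / n_states] * n_states for _ in range(n_states)]
    for s, s_next in zip(seq, seq[1:]):
        for j in range(n_states):
            target = 1.0 if j == s_next else 0.0
            P[s][j] += alpha * (target - P[s][j])   # prediction error
    return P

# Simulate a two-state Markov chain and recover its transition matrix:
random.seed(0)
true_P = [[0.9, 0.1], [0.3, 0.7]]
s, seq = 0, []
for _ in range(20000):
    seq.append(s)
    s = 0 if random.random() < true_P[s][0] else 1
P = learn_transition_matrix(seq, 2)
print([[round(x, 2) for x in row] for row in P])
```

With a small, constant learning rate the estimates converge to a noisy neighbourhood of the true probabilities, mirroring the "appropriate learning rate" condition mentioned in the abstract.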
Minimal normal measurement models of quantum instruments
NASA Astrophysics Data System (ADS)
Pellonpää, Juha-Pekka; Tukiainen, Mikko
2017-06-01
In this work we study the minimal normal measurement models of quantum instruments. We show that usually the apparatus' Hilbert space in such a model is unitarily isomorphic to the minimal Stinespring dilation space of the instrument. However, if the Hilbert space of the system is infinite-dimensional and the multiplicities of the outcomes of the associated observable (POVM) are all infinite then this may not be the case. In these pathological cases the minimal apparatus' Hilbert space is shown to be unitarily isomorphic to the instrument's minimal dilation space augmented by one extra dimension.
Normal brain ageing: models and mechanisms
Toescu, Emil C
2005-01-01
Normal ageing is associated with a degree of decline in a number of cognitive functions. Apart from the issues raised by the current attempts to expand the lifespan, understanding the mechanisms and the detailed metabolic interactions involved in the process of normal neuronal ageing continues to be a challenge. One model, supported by a significant amount of experimental evidence, views cellular ageing as a metabolic state characterized by an altered function of the metabolic triad: mitochondria–reactive oxygen species (ROS)–intracellular Ca2+. The perturbation in the relationship between the members of this metabolic triad generates a state of decreased homeostatic reserve, in which the aged neurons can maintain adequate function during normal activity, as demonstrated by the fact that normal ageing is not associated with widespread neuronal loss, but become increasingly vulnerable to the effects of excessive metabolic loads, usually associated with trauma, ischaemia or neurodegenerative processes. This review will concentrate on some of the evidence showing altered mitochondrial function with ageing and also discuss some of the functional consequences that would result from such events, such as alterations in mitochondrial Ca2+ homeostasis, ATP production and generation of ROS. PMID:16321805
Modeling pore corrosion in normally open gold-plated copper connectors.
Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert
2008-09-01
The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
Sabelnikov, Alexander; Zhukov, Vladimir; Kempf, Ruth
2006-05-15
Real-time biosensors are expected to provide significant help in emergency response management should a terrorist attack with biowarfare (BW) agents occur. In spite of recent and spectacular progress in the field of biosensors, several core questions still remain unaddressed. For instance, how sensitive should a sensor be? To what levels of infection would the different sensitivity limits correspond? How do the probabilities of identification correspond to the probabilities of infection by an agent? In this paper, an attempt was made to address these questions. A simple probability model was generated for the calculation of the risks of infection of humans exposed to different doses of infectious agents, and of the probability of their simultaneous real-time detection/identification by a model biosensor and its network. A model biosensor was defined as a single device that included an aerosol sampler and a device for identification by any known (or conceived) method. A network of biosensors was defined as a set of several single biosensors that operated in a similar way and dealt with the same amount of an agent. Neither the particular deployment of sensors within the network, nor the spatial and temporal distribution of agent aerosols due to wind, ventilation, humidity, temperature, etc., was considered by the model. Three model biosensors based on PCR, antibody/antigen, and MS techniques were used for simulation. A wide range of their metric parameters, encompassing those of commercially available and laboratory biosensors as well as those of future, theoretically conceivable devices, was used for several hundred simulations. Based on the analysis of the obtained results, it is concluded that small concentrations of aerosolized agents that are still able to pose significant risks of infection, especially for highly infectious agents (e.g. for smallpox those risks are 1, 8, and 37 infected out of 1000 exposed, depending on the viability of the virus preparation), will
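The two quantities the abstract links, risk of infection at a given dose and probability of detection by a sensor network, can be sketched with textbook forms. The exponential dose-response curve and the independent-sensor combination below are standard simplifications, not the paper's exact model.

```python
import math

def infection_risk(dose, id50):
    """Exponential dose-response sketch: probability of infection from
    an inhaled dose, where id50 is the dose infecting half of those
    exposed.  A common simplification, assumed here for illustration.
    """
    return 1.0 - math.exp(-math.log(2.0) * dose / id50)

def network_detection_probability(p_single, n_sensors):
    """Probability that at least one of n identical, independent
    sensors identifies the agent.  The abstract defines a network as
    several single biosensors handling the same amount of agent;
    independence is an added simplifying assumption.
    """
    return 1.0 - (1.0 - p_single) ** n_sensors

print(infection_risk(10, 100))                 # low dose, modest risk
print(network_detection_probability(0.3, 5))   # five weak sensors combined
```

Comparing the two curves at the same agent concentration is exactly the exercise the abstract describes: a dose too small for a sensor to notice can still carry a non-negligible infection risk.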
Model and test in a fungus of the probability that beneficial mutations survive drift.
Gifford, Danna R; de Visser, J Arjan G M; Wahl, Lindi M
2013-02-23
Determining the probability of fixation of beneficial mutations is critically important for building predictive models of adaptive evolution. Despite considerable theoretical work, models of fixation probability have stood untested for nearly a century. However, recent advances in experimental and theoretical techniques permit the development of models with testable predictions. We developed a new model for the probability of surviving genetic drift, a major component of fixation probability, for novel beneficial mutations in the fungus Aspergillus nidulans, based on the life-history characteristics of its colony growth on a solid surface. We tested the model by measuring the probability of surviving drift in 11 adapted strains introduced into wild-type populations of different densities. We found that the probability of surviving drift increased with mutant invasion fitness, and decreased with wild-type density, as expected. The model accurately predicted the survival probability for the majority of mutants, yielding one of the first direct tests of the extinction probability of beneficial mutations.
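The classical baseline against which such models are judged is the branching-process result: a mutation with selective advantage s and Poisson-distributed offspring survives drift with probability approximately 2s. The sketch below computes the exact branching-process survival probability by fixed-point iteration; it is the textbook model, not the life-history model the paper builds for A. nidulans.

```python
import math

def survival_probability(s, tol=1e-12):
    """Probability that a beneficial mutation with selective advantage
    s survives drift, assuming Poisson offspring with mean 1 + s.
    The extinction probability q solves q = exp((1+s)(q-1)); survival
    is 1 - q.  Haldane's small-s approximation is 2s.
    """
    q = 0.5
    for _ in range(10000):
        q_new = math.exp((1.0 + s) * (q - 1.0))
        if abs(q_new - q) < tol:
            break
        q = q_new
    return 1.0 - q_new

for s in (0.01, 0.05, 0.1):
    print(f"s={s}: survival {survival_probability(s):.4f}  (2s = {2 * s})")
```

Even this simple model reproduces the abstract's qualitative finding that survival probability increases with the mutant's invasion fitness; the paper's contribution is to make the offspring distribution reflect colony growth on a solid surface.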
Aerosol Behavior Log-Normal Distribution Model.
GIESEKE, J. A.
2001-10-22
HAARM3, an acronym for Heterogeneous Aerosol Agglomeration Revised Model 3, is the third program in the HAARM series developed to predict the time-dependent behavior of radioactive aerosols under postulated LMFBR accident conditions. HAARM3 was developed to include mechanisms of aerosol growth and removal which had not been accounted for in the earlier models. In addition, experimental measurements obtained on sodium oxide aerosols have been incorporated in the code. As in HAARM2, containment gas temperature, pressure, and temperature gradients normal to interior surfaces are permitted to vary with time. The effects of reduced density on sodium oxide agglomerate behavior and of nonspherical shape of particles on aerosol behavior mechanisms are taken into account, and aerosol agglomeration due to turbulent air motion is considered. Also included is a capability to calculate aerosol concentration attenuation factors and to restart problems requiring long computing times.
Modeling the Effect of Reward Amount on Probability Discounting
Myerson, Joel; Green, Leonard; Morris, Joshua
2011-01-01
The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received. PMID:21541126
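The abstract's proposal, a hyperboloid discounting function whose exponent is proportional to the amount raised to a power, can be sketched directly. The parameter values below are illustrative placeholders, not the fitted values from the study.

```python
def discounted_value(amount, p, h=1.0, s0=0.6, k=0.05):
    """Hyperboloid probability-discounting sketch:

        V = A / (1 + h * theta)**s,   theta = (1 - p) / p  (odds against)

    with an amount-dependent exponent s = s0 * A**k, following the
    abstract's suggestion that the exponent is proportional to the
    amount raised to a power.  h, s0 and k are assumed illustrative
    values, not fitted parameters.
    """
    theta = (1.0 - p) / p
    s = s0 * amount ** k
    return amount / (1.0 + h * theta) ** s

# Larger probabilistic rewards are discounted more steeply (V/A falls
# with amount), opposite to the pattern for delayed rewards:
for amount in (20.0, 10_000_000.0):
    v = discounted_value(amount, p=0.5)
    print(f"A={amount:>12,.0f}: V/A = {v / amount:.3f}")
```

Because the exponent grows with amount, the relative subjective value V/A decreases as the reward gets larger, which is the amount effect the study reports for probability discounting.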
Using skew-logistic probability density function as a model for age-specific fertility rate pattern.
Asili, Sahar; Rezaei, Sadegh; Najjar, Lotfollah
2014-01-01
Fertility rate is one of the most important global indexes. Past researchers have proposed models that fit age-specific fertility rates. For example, mixture probability density functions have been proposed for situations with bi-modal fertility patterns. Such models are less useful for unimodal age-specific fertility rate patterns, so a model based on a skew-symmetric (skew-normal) pdf was proposed by Mazzuco and Scarpa (2011), which is flexible for both unimodal and bimodal fertility patterns. In this paper, we introduce the skew-logistic probability density function as a better model: its residuals are smaller than those of the skew-normal model, and it estimates the parameters of the model more precisely.
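A plausible form of the density the abstract refers to is the Azzalini-type skew-logistic, built by multiplying a logistic density by a logistic distribution function with a shape parameter; the paper's exact parameterization may differ, so the sketch below is an assumption.

```python
import math

def logistic_pdf(x):
    """Standard logistic density g(x) = e^(-x) / (1 + e^(-x))^2."""
    e = math.exp(-x)
    return e / (1.0 + e) ** 2

def logistic_cdf(x):
    """Standard logistic distribution function G(x)."""
    return 1.0 / (1.0 + math.exp(-x))

def skew_logistic_pdf(x, shape, loc=0.0, scale=1.0):
    """Azzalini-type skew-logistic density 2*g(z)*G(shape*z)/scale,
    with z = (x - loc)/scale.  shape = 0 recovers the symmetric
    logistic; positive shape skews mass to the right, as an
    age-specific fertility curve typically requires.
    """
    z = (x - loc) / scale
    return 2.0 * logistic_pdf(z) * logistic_cdf(shape * z) / scale
```

A quick numerical check (integrating on a grid) confirms the construction yields a proper density for any shape value, since 2*g(z)*G(a*z) integrates to one whenever g is symmetric.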
Naive Probability: Model-based Estimates of Unique Events
2014-05-04
1. Introduction. Probabilistic thinking is ubiquitous in both numerate and innumerate cultures. Aristotle ... wrote: "A probability is a thing that happens for the most part" (Aristotle, Rhetoric, Book I, 1357a35; see Barnes, 1984). His account, as Franklin ... (1984). The complete works of Aristotle. Princeton, NJ: Princeton University Press
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Smits, Iris A M; Timmerman, Marieke E; Stegeman, Alwin
2016-05-01
Maximum likelihood estimation of the linear factor model for continuous items assumes normally distributed item scores. We consider deviations from normality by means of a skew-normally distributed factor model or a quadratic factor model. We show that the item distributions under a skew-normal factor model are equivalent to those under a quadratic model up to third-order moments. The reverse only holds if the quadratic loadings are equal to each other and within certain bounds. We illustrate that observed data which follow any skew-normal factor model can be so well approximated with the quadratic factor model that the models are empirically indistinguishable, and that the reverse does not hold in general. The choice between the two models to account for deviations from normality is illustrated by an empirical example from clinical psychology.
Modeling seismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, L.; Stutzmann, E.; Capdeville, Y.; Ardhuin, F.; Schimmel, M.; Mangeney, A.; Morelli, A.
2012-12-01
Cross-correlation of ambient seismic noise plays a fundamental role in extracting and better understanding the seismic properties of the Earth. Knowledge of the distribution of noise sources and of the theory behind seismic noise generation is of fundamental importance in the study of seismic noise cross-correlation. To improve this knowledge, we model the secondary microseismic noise, i.e. in the period band 5-12 s, using normal mode summation, and focus our attention on the noise source distribution varying both in space and in time. Longuet-Higgins (1950) showed that the sources of the secondary microseismic noise are pressure fluctuations generated by the interaction of ocean waves either in the deep ocean or close to the coast, owing to coastal reflection. Considering a recent ocean wave model (Ardhuin et al., 2011) that takes into account coastal reflection, we compute the vertical force due to the pressure fluctuation that has to be applied at the surface of the ocean. Noise sources are discretized on a spherical grid with a constant resolution of 50 km and are used to compute synthetic seismograms and spectra by normal mode summation. We show that we retrieve the maximum force amplitude for periods of 6-7 s, which is consistent with the position of the maximum peak in the spectra, and that, for long periods in the secondary microseismic band, i.e. around 12 s, mostly the sources generated by coastal reflection have a strong influence on the microseism generation. We also show that the displacement of the ground is amplified in relation to the ocean bathymetry, in agreement with Longuet-Higgins' theory, and that the ocean site amplification can be computed using normal modes. We also investigate the role of attenuation, considering sources at regional scale. We are able to reproduce seasonal variations and to identify the noise sources having the main contribution to the spectra. We obtain a good agreement between synthetic and real
NASA Astrophysics Data System (ADS)
Baer, P.; Mastrandrea, M.
2006-12-01
Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question: what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly
A general tumour control probability model for non-uniform dose distributions.
González, Sara J; Carando, Daniel G
2008-06-01
Perfectly uniform dose distributions over target volumes are almost impossible to achieve in clinical practice, due to the dose constraints for surrounding normal tissues that are commonly imposed on treatment plans. This article introduces a new approach to computing tumour control probabilities (TCPs) under inhomogeneous dose conditions. The equivalent subvolume model presented here does not assume independence between cell responses and can be derived from any homogeneous-dose TCP model. To check the consistency of this model, some natural properties are shown to hold, including the so-called uniform dose theorem. In the spirit of the equivalent uniform dose (EUD) concept introduced by Niemierko (1997, Med. Phys., 24, 103-110), the probability-EUD is defined. This concept, together with the methodology introduced to compute TCPs for inhomogeneous doses, is applied to different uniform-dose TCP models. As expected, the TCP takes into account the whole dose distribution over the target volume, but it is influenced more strongly by the low-dose regions. Finally, the proposed methodology and other approaches to the inhomogeneous dose scenario are compared.
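Niemierko's generalized EUD illustrates the sensitivity to low-dose regions that the abstract describes: for tumours the volume-effect parameter is negative, so cold spots pull the EUD down sharply. The sketch below uses the generalized EUD and an illustrative logistic TCP curve; the paper's "probability-EUD" is a related but distinct construction, and the parameter values here are placeholders.

```python
def generalized_eud(dose_bins, a):
    """Generalized equivalent uniform dose (Niemierko-style):
    EUD = (sum_i v_i * D_i**a)**(1/a), where dose_bins is a DVH given
    as (fractional volume, dose in Gy) pairs.  Negative a (tumours)
    makes the EUD dominated by the coldest regions.
    """
    return sum(v * d ** a for v, d in dose_bins) ** (1.0 / a)

def logistic_tcp(d, d50=60.0, gamma=2.0):
    """Illustrative homogeneous-dose TCP curve (logistic form);
    d50 and gamma are assumed placeholder parameters."""
    return 1.0 / (1.0 + (d50 / d) ** (4.0 * gamma))

uniform = [(1.0, 66.0)]
cold_spot = [(0.95, 66.0), (0.05, 50.0)]   # 5% of the volume underdosed
for dvh in (uniform, cold_spot):
    eud = generalized_eud(dvh, a=-10.0)
    print(f"EUD {eud:.1f} Gy -> TCP {logistic_tcp(eud):.3f}")
```

A 5% cold spot lowers the mean dose by under 1 Gy but drags the EUD, and hence the TCP, down much further, which is the behaviour the abstract attributes to its inhomogeneous-dose TCP as well.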
Simplifying Probability Elicitation and Uncertainty Modeling in Bayesian Networks
Paulson, Patrick R; Carroll, Thomas E; Sivaraman, Chitra; Neorr, Peter A; Unwin, Stephen D; Hossain, Shamina S
2011-04-16
In this paper we contribute two methods that simplify the demands of knowledge elicitation for particular types of Bayesian networks. The first method simplifies the task of providing probabilities when the states that a random variable takes can be described by a new, fully ordered state set in which each state implies all the preceding states. The second method leverages the Dempster-Shafer theory of evidence to provide a way for the expert to express the degree of ignorance they feel about the estimates being provided.
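One way the ordered, "each state implies all preceding states" structure can ease elicitation is that the expert may give cumulative (at-least) probabilities, from which the exclusive state probabilities follow by differencing. This sketch is an assumption about the mechanics; the paper's exact procedure may differ.

```python
def exclusive_from_cumulative(cumulative):
    """Given elicited probabilities that the variable has reached *at
    least* each ordered state (a non-increasing list starting at 1.0),
    recover the probability of being in exactly each state.
    """
    assert all(a >= b for a, b in zip(cumulative, cumulative[1:]))
    exact = [a - b for a, b in zip(cumulative, cumulative[1:])]
    exact.append(cumulative[-1])
    return exact

# e.g. hypothetical damage states none < minor < major < destroyed:
print([round(p, 3) for p in exclusive_from_cumulative([1.0, 0.4, 0.15, 0.05])])
# -> [0.6, 0.25, 0.1, 0.05]
```

Eliciting the monotone cumulative list is easier to keep coherent than eliciting the exclusive probabilities directly, since the only constraint the expert must respect is that the values decrease.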
Modeling seismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, L.; Stutzmann, E.; Capdeville, Y.; Ardhuin, F.; Schimmel, M.; Mangeney, A.; Morelli, A.
2012-04-01
Microseismic noise is the continuous oscillation of the ground in the period band 5-20 s. We observe seasonal variations of this noise that have been stable over the last 20 years. Microseism spectra display two peaks, and the strongest peak, in the period band 5-12 s, corresponds to the so-called secondary microseism. Longuet-Higgins (1950) showed that the corresponding sources are pressure fluctuations generated by the interaction of ocean waves either in the deep ocean or due to coastal reflection. Considering an ocean wave model that takes into account coastal reflection, we compute the pressure fluctuation as a vertical force applied at the surface of the ocean. The sources are discretized on a spherical grid with a constant grid spacing of 50 km. We then compute the synthetic spectra by normal mode summation in a realistic Earth model. We show that the maximum force amplitude occurs for periods of 6-7 s, which is consistent with the period of the maximum peak of the seismic spectra, and that, for periods around 12 s, only the sources generated by coastal reflection have a strong influence on the microseism generation. We also show that the displacement of the ground is amplified in relation to the ocean bathymetry, in agreement with Longuet-Higgins' theory. We obtain a good agreement between synthetic and real seismic spectra in the period band 5-12 s. Modeling seismic noise is a useful tool for selecting particular noise data, such as the strongest peaks, and further investigating the corresponding sources. These noise sources may then be used for tomography.
ERIC Educational Resources Information Center
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…
NASA Astrophysics Data System (ADS)
Homer, Rachel M.; Law, David W.; Molyneaux, Thomas C. K.
2015-07-01
In previous studies, a 1-D numerical predictive tool to simulate the salt-induced corrosion of port assets in Australia was developed into 2-D and 3-D models based on current predictive probabilistic models. These studies use a probability distribution function based on the mean and standard deviation of the parameters for a structure, incorporating surface chloride concentration, diffusion coefficient and cover. In this paper, this previous work is extended through an investigation of the distribution of actual cover by specified cover, element type and method of construction. Significant differences are found in the measured cover within structures, by method of construction, element type and specified cover. The data are not normally distributed, and extreme values, usually low, are found in a number of locations. Elements cast in situ are less likely to meet the specified cover, and their measured cover is more dispersed than that of precast elements. Individual probability distribution functions are available and are tested against the original function. Methods of combining results so that one distribution is available for a structure are formulated and evaluated. The ability to use the model for structures where no measurements have been taken is achieved by transposing results based on the specified cover.
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
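For readers unfamiliar with the LKB formalism, the calculation at the core of such analyses can be sketched as follows: the generalized equivalent uniform dose (gEUD) reduces a dose-volume distribution to a single uniform dose, which a probit function maps to an NTCP. The parameter values below are illustrative placeholders, not the fitted values reported in the study:

```python
from math import erf, sqrt

def geud(dose_volume, n):
    """Generalized EUD from (dose_Gy, fractional_volume) pairs; n is the volume parameter."""
    return sum(v * d ** (1.0 / n) for d, v in dose_volume) ** n

def lkb_ntcp(eud, td50, m):
    """LKB NTCP: standard normal CDF of t = (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Illustrative numbers only: a cord volume receiving a uniform 20 Gy,
# with hypothetical TD50 = 66.5 Gy, m = 0.175, n = 0.05.
p = lkb_ntcp(geud([(20.0, 1.0)], 0.05), 66.5, 0.175)
```

Note that for a uniform dose the gEUD equals that dose regardless of n; the volume parameter only matters for inhomogeneous distributions, which is why varying n changes the predictions for steep SRS dose gradients.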
Hussein, M; Aldridge, S; Guerrero Urbano, T; Nisbet, A
2012-01-01
Objective The aim of this study was to investigate the effect of 6 and 15-MV photon energies on intensity-modulated radiation therapy (IMRT) prostate cancer treatment plan outcome and to compare the theoretical risks of secondary induced malignancies. Methods Separate prostate cancer IMRT plans were prepared for 6 and 15-MV beams. Organ-equivalent doses were obtained through thermoluminescent dosemeter measurements in an anthropomorphic Aldersen radiation therapy human phantom. The neutron dose contribution at 15 MV was measured using polyallyl-diglycol-carbonate neutron track etch detectors. Risk coefficients from the International Commission on Radiological Protection Report 103 were used to compare the risk of fatal secondary induced malignancies in out-of-field organs and tissues for 6 and 15 MV. For the bladder and the rectum, a comparative evaluation of the risk using three separate models was carried out. Dose–volume parameters for the rectum, bladder and prostate planning target volume were evaluated, as well as normal tissue complication probability (NTCP) and tumour control probability calculations. Results There is a small increased theoretical risk of developing a fatal cancer from 6 MV compared with 15 MV, taking into account all the organs. Dose–volume parameters for the rectum and bladder show that 15 MV results in better volume sparing in the regions below 70 Gy, but the volume exposed increases slightly beyond this in comparison with 6 MV, resulting in a higher NTCP for the rectum of 3.6% vs 3.0% (p=0.166). Conclusion The choice to treat using IMRT at 15 MV should not be excluded, but should be based on risk vs benefit while considering the age and life expectancy of the patient together with the relative risk of radiation-induced cancer and NTCPs. PMID:22010028
Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz
An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…
Modeling the Spectra of Dense Hydrogen Plasmas: Beyond Occupation Probability
NASA Astrophysics Data System (ADS)
Gomez, T. A.; Montgomery, M. H.; Nagayama, T.; Kilcrease, D. P.; Winget, D. E.
2017-03-01
Accurately measuring the masses of white dwarf stars is crucial in many astrophysical contexts (e.g., asteroseismology and cosmochronology). These masses are most commonly determined by fitting a model atmosphere to an observed spectrum; this is known as the spectroscopic method. However, for cases in which more than one method may be employed, there are well known discrepancies between masses determined by the spectroscopic method and those determined by astrometric, dynamical, and/or gravitational-redshift methods. In an effort to resolve these discrepancies, we are developing a new model of hydrogen in a dense plasma that is a significant departure from previous models. Experiments at Sandia National Laboratories are currently underway to validate these new models, and we have begun modifications to incorporate these models into stellar-atmosphere codes.
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
NASA Astrophysics Data System (ADS)
Silva, Antonio
2005-03-01
It is well known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market, before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with a stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data, ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft and Intel. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to present probability weighting functions from the point of view of the decision maker's waiting time. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, a probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To assess the fit of each model, a psychological experiment was conducted to evaluate the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of the individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability-weighting decision-making models. The theoretical implications of this finding are discussed.
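The expected-value weighting function has a simple closed form that is easy to check numerically. A minimal sketch (the discounting parameter k = 1.0 is an arbitrary illustration, not a fitted value):

```python
import math

def w_hyperbolic(p, k):
    """Probability weighting from hyperbolic discounting of geometric waiting time:
    w(p) = (1 - k*log(p))**(-1), for 0 < p <= 1 and discounting parameter k > 0."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must lie in (0, 1]")
    return 1.0 / (1.0 - k * math.log(p))

# w(1) = 1 (certainty is undistorted); for small p, w(p) > p, i.e. small
# probabilities are overweighted, as probability weighting theory requires.
curve = [(p, w_hyperbolic(p, 1.0)) for p in (0.01, 0.1, 0.5, 1.0)]
```

The function is increasing in p and approaches 0 only as p goes to 0, reflecting that an event with success probability p has expected waiting time 1/p, which the hyperbolic discount then compresses logarithmically.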
The probability distribution model of air pollution index and its dominants in Kuala Lumpur
NASA Astrophysics Data System (ADS)
AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah
2016-11-01
This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutant sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2) and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fitting distribution for the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. Conflicts among the criterion results in selecting the best distribution were overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of the air pollutant data in Kuala Lumpur.
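The fit-and-rank procedure can be sketched with SciPy. The synthetic data, the use of the Kolmogorov-Smirnov statistic as the single criterion, and the fixed location parameter are simplifying assumptions; the paper combines five goodness-of-fit criteria via the weight-of-ranks method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=30.0, size=500)  # stand-in for an API sub-index series

candidates = {
    "lognormal": stats.lognorm,
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)              # fix location at 0 for positive data
    ks = stats.kstest(data, dist.cdf, args=params)
    results[name] = ks.statistic                 # smaller KS statistic = better fit

best = min(results, key=results.get)
```

Ranking by a single statistic avoids the criterion-conflict problem only trivially; with several criteria, each distribution gets a rank per criterion and the ranks are combined, which is the role of the weight-of-ranks method in the paper.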
Probability distribution analysis of observational extreme events and model evaluation
NASA Astrophysics Data System (ADS)
Yu, Q.; Lau, A. K. H.; Fung, J. C. H.; Tsang, K. T.
2016-12-01
Earth's surface temperatures in 2015 were the warmest since modern record-keeping began in 1880, according to the latest study. In contrast, cold weather occurred in many regions of China in January 2016 and brought Guangzhou, the capital city of Guangdong province, its first snowfall in 67 years. To understand changes in extreme weather events and to project their future scenarios, this study applies statistical models to multiple climate datasets. We first use a Granger-causality test to examine the attribution of the global mean temperature rise and extreme temperature events to CO2 concentration. The four statistical moments (mean, variance, skewness, kurtosis) of the daily maximum temperature distribution are investigated in global observational, reanalysis (1961-2010) and model (1961-2100) data. Furthermore, we introduce a new tail index based on the four moments, which is a more robust index for measuring extreme temperatures. Our results show that the CO2 concentration provides information about the time series of mean and extreme temperature, but not vice versa. Based on our new tail index, we find that, beyond the mean and variance, skewness is an important indicator that should be considered in estimating extreme temperature changes and in model evaluation. Among the 12 climate model datasets we investigate, the fourth version of the Community Climate System Model (CCSM4) from the National Center for Atmospheric Research performs well on the new index, indicating that the model has substantial capability to project future changes of extreme temperature in the 21st century. The method also shows its ability to measure extreme precipitation/drought events. In the future we will introduce a new diagram to systematically evaluate the performance of the four statistical moments in climate model output; moreover, the human and economic impacts of extreme weather events will also be assessed.
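The moment computation underlying such an analysis can be sketched on synthetic daily maximum temperatures (the tail index itself is not reproduced here, since its definition is specific to the paper):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for 50 years of daily maximum temperature (deg C).
tmax = np.random.default_rng(1).normal(30.0, 3.0, size=365 * 50)

moments = {
    "mean": float(np.mean(tmax)),
    "variance": float(np.var(tmax, ddof=1)),
    "skewness": float(stats.skew(tmax)),
    "kurtosis": float(stats.kurtosis(tmax)),  # excess kurtosis (0 for a Gaussian)
}
```

For a symmetric (Gaussian) series the skewness and excess kurtosis are near zero; a warming tail in real data shows up as nonzero skewness, which is why the paper treats it as an indicator beyond mean and variance.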
Azarang, Leyla; Scheike, Thomas; de Uña-Álvarez, Jacobo
2017-02-26
In this work, we present direct regression analysis for the transition probabilities in the possibly non-Markov progressive illness-death model. The method is based on binomial regression, where the response is the indicator of occupancy of a given state over time. Randomly weighted score equations that remove the bias due to censoring are introduced. By solving these equations, one can estimate the possibly time-varying regression coefficients, which have an immediate interpretation as covariate effects on the transition probabilities. The performance of the proposed estimator is investigated through simulations. We apply the method to data from the Registry of Systemic Lupus Erythematosus (RELESSER), a multicenter registry created by the Spanish Society of Rheumatology. Specifically, we investigate the effect of age at lupus diagnosis, sex, and ethnicity on the probability of damage and death over time. Copyright © 2017 John Wiley & Sons, Ltd.
Modeling Outcomes from Probability Tasks: Sixth Graders Reasoning Together
ERIC Educational Resources Information Center
Alston, Alice; Maher, Carolyn
2003-01-01
This report considers the reasoning of sixth grade students as they explore problem tasks concerning the fairness of dice games. The particular focus is the students' interactions, verbal and non-verbal, as they build and justify representations that extend their basic understanding of number combinations in order to model the outcome set of a…
The effect of interruption probability in lattice model of two-lane traffic flow with passing
NASA Astrophysics Data System (ADS)
Peng, Guanghan
2016-11-01
A new lattice model is proposed for a two-lane freeway by taking into account the interruption probability with passing. The effect of the interruption probability with passing on the linear stability condition and the mKdV equation is investigated through linear stability analysis and nonlinear analysis, respectively. Furthermore, numerical simulation is carried out to study traffic phenomena resulting from the interruption probability with passing in a two-lane system. The results show that the interruption probability with passing can improve the stability of traffic flow for low reaction coefficients, whereas it can destroy the stability of traffic flow for high reaction coefficients on a two-lane highway.
Physical Model Assisted Probability of Detection in Nondestructive Evaluation
NASA Astrophysics Data System (ADS)
Li, M.; Meeker, W. Q.; Thompson, R. B.
2011-06-01
Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
Probabilistic Independence Networks for Hidden Markov Probability Models
NASA Technical Reports Server (NTRS)
Smyth, Padhraic; Heckerman, David; Jordan, Michael I.
1996-01-01
In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs.
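The forward pass of the F-B algorithm, which the paper recovers as a special case of PIN inference, can be sketched for a toy two-state HMM (all parameter values are arbitrary illustrations):

```python
import numpy as np

# Toy HMM: 2 hidden states, 2 observation symbols.
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission matrix
pi = np.array([0.5, 0.5])                # initial state distribution
obs = [0, 1, 0]                          # observed symbol sequence

# Forward recursion: alpha[t, i] = P(o_1..o_t, state_t = i)
alpha = np.zeros((len(obs), 2))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, len(obs)):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

likelihood = alpha[-1].sum()  # P(o_1..o_T), summed over final states
```

In the PIN view, this recursion is just message passing on the chain-structured independence graph of the HMM, which is what makes the generalization to arbitrary PINs natural.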
A simulation model for estimating probabilities of defects in welds
Chapman, O.J.V.; Khaleel, M.A.; Simonen, F.A.
1996-12-01
In recent work for the US Nuclear Regulatory Commission in collaboration with Battelle Pacific Northwest National Laboratory, Rolls-Royce and Associates, Ltd., has adapted an existing model for piping welds to address welds in reactor pressure vessels. This paper describes the flaw estimation methodology as it applies to flaws in reactor pressure vessel welds (but not flaws in base metal or flaws associated with the cladding process). Details of the associated computer software (RR-PRODIGAL) are provided. The approach uses expert elicitation and mathematical modeling to simulate the steps in manufacturing a weld and the errors that lead to different types of weld defects. The defects that may initiate in weld beads include center cracks, lack of fusion, slag, pores with tails, and cracks in heat affected zones. Various welding processes are addressed including submerged metal arc welding. The model simulates the effects of both radiographic and dye penetrant surface inspections. Output from the simulation gives occurrence frequencies for defects as a function of both flaw size and flaw location (surface connected and buried flaws). Numerical results are presented to show the effects of submerged metal arc versus manual metal arc weld processes.
Inferring Tree Causal Models of Cancer Progression with Probability Raising
Mauri, Giancarlo; Antoniotti, Marco; Mishra, Bud
2014-01-01
Existing techniques to reconstruct tree models of progression for accumulative processes, such as cancer, seek to estimate causation by combining correlation and a frequentist notion of temporal priority. In this paper, we define a novel theoretical framework called CAPRESE (CAncer PRogression Extraction with Single Edges) to reconstruct such models based on the notion of probabilistic causation defined by Suppes. We consider a general reconstruction setting complicated by the presence of noise in the data due to biological variation, as well as experimental or measurement errors. To improve tolerance to noise we define and use a shrinkage-like estimator. We prove the correctness of our algorithm by showing asymptotic convergence to the correct tree under mild constraints on the level of noise. Moreover, on synthetic data, we show that our approach outperforms the state-of-the-art, that it is efficient even with a relatively small number of samples and that its performance quickly converges to its asymptote as the number of samples increases. For real cancer datasets obtained with different technologies, we highlight biologically significant differences in the progressions inferred with respect to other competing techniques and we also show how to validate conjectured biological relations with progression models. PMID:25299648
Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-01-01
Objective: The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Methods: 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy–oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and organ-at-risk (OAR) coverage was assessed using calculation of the dose–volume histogram, gEUD, TCP and NTCP. For this purpose, an in-house software package was developed and used. Results: The standard deviations (1 SD) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields, giving ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, with ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of the OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% for the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. Conclusion: The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The in-house software based on the gEUD, TCP and NTCP biological models was successfully used in this study, and can also be used to optimize the treatment plans established for our patients. Advances in knowledge: The g
Transition probabilities matrix of Markov Chain in the fatigue crack growth model
NASA Astrophysics Data System (ADS)
Nopiah, Zulkifli Mohd; Januri, Siti Sarah; Ariffin, Ahmad Kamal; Masseran, Nurulkamal; Abdullah, Shahrum
2016-10-01
The Markov model is a reliable method for describing the growth of a crack from the initial phase until fracture. An important subject in crack growth modeling is obtaining the transition probability matrix of the fatigue process; determining this matrix is central to a Markov chain model for describing the probabilistic behaviour of fatigue life in a structure. In this paper, we obtain the transition probabilities of a Markov chain based on the Paris law equation in order to capture the physical meaning of the fatigue crack growth problem. The results show that the transition probabilities can be used to calculate the probability of future damage and to compare each stage of damage over time.
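The mechanics of propagating damage probabilities through such a transition matrix can be sketched as follows. The per-state advance probabilities here are illustrative constants, whereas in the paper they are derived from the Paris law:

```python
import numpy as np

# Crack-length states 0..4, where state 4 = fracture (absorbing).
# p_adv[i]: probability of advancing from state i in one load cycle
# (illustrative constants; the paper derives these from the Paris law).
p_adv = [0.10, 0.15, 0.20, 0.30]

P = np.zeros((5, 5))
for i, p in enumerate(p_adv):
    P[i, i] = 1.0 - p      # stay in current crack state
    P[i, i + 1] = p        # advance to the next crack state
P[4, 4] = 1.0              # fracture is absorbing

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # crack starts in state 0
for _ in range(50):                          # propagate over 50 load cycles
    state = state @ P
p_fracture = state[-1]                       # probability of fracture by cycle 50
```

Repeated multiplication by P gives the damage-state distribution at any future cycle, which is exactly the "probability of damage in the future" that the transition matrix makes computable.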
Probability models in the analysis of radiation-related complications: utility and limitations
Potish, R.A.; Boen, J.; Jones, T.K. Jr.; Levitt, S.H.
1981-07-01
In order to predict radiation-related enteric damage, 92 women who had received identical radiation doses for cancer of the ovary from 1970 through 1977 were studied. A logistic model was used to predict the probability of complication as a function of the number of laparotomies, hypertension, and thin physique. The utility and limitations of such probability models are presented.
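A logistic model of this form can be sketched as follows; the coefficients are hypothetical placeholders, not the values fitted to the 92-patient cohort:

```python
import math

def complication_probability(n_laparotomies, hypertension, thin, beta):
    """Logistic model: P = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2 + b3*x3))).
    `beta` holds hypothetical coefficients, not the study's fitted values."""
    b0, b1, b2, b3 = beta
    z = b0 + b1 * n_laparotomies + b2 * hypertension + b3 * thin
    return 1.0 / (1.0 + math.exp(-z))

beta = (-3.0, 0.8, 0.9, 0.7)  # hypothetical intercept and covariate coefficients
p_low = complication_probability(0, 0, 0, beta)   # no risk factors
p_high = complication_probability(2, 1, 1, beta)  # 2 laparotomies, hypertensive, thin
```

Each coefficient has a direct reading as a log-odds increment per unit of its covariate, which is what makes the model useful for quantifying how laparotomies, hypertension, and physique shift the complication risk.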
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
A probability model for the distribution of the number of migrants at the household level.
Yadava, K N; Singh, R B
1991-01-01
A probability model characterizing the pattern of the total number of migrants from a household has been developed, improving on earlier models that had several limitations. The proposed model assumes that migrants from a household occur in clusters, that such clusters occur rarely, and that the risk of migration occurring in a cluster varies from household to household. Data from 3514 households in semiurban, remote, and growth-center villages in India were fitted to the proposed probability model. The Rural Development and Population Growth--A Sample Survey 1978 defined a household as a group of people who normally live together and eat from a shared kitchen. The members do not necessarily reside in the village, however, but may work elsewhere and send remittances, while still considering themselves part of the household. Observed and expected frequencies agreed only for those in the high social status group (p > .05). The mean number of clusters per household was greater for remote villages (0.26) than for growth-center (0.22) and semiurban (0.13) villages. On the other hand, the mean number of migrants per cluster was smaller for remote villages (2.1) than for growth-center (2.17) and semiurban (2.62) villages. These results may indicate that men migrate alone, in separate clusters, from remote villages, whereas men from growth-center and semiurban villages migrate with their families in a smaller number of clusters. Men from growth-center and semiurban villages tended to be well educated and professionals. The mean number of migrants per household was higher for remote villages (0.56) than for growth-center (0.47) and semiurban (0.33) villages. Commuting to work accounted for this difference.
Jakobi, Annika; Bandurska-Luque, Anna; Stützer, Kristin; Haase, Robert; Löck, Steffen; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniela; and others
2015-08-01
Purpose: The purpose of this study was to determine, by treatment plan comparison along with normal tissue complication probability (NTCP) modeling, whether a subpopulation of patients with head and neck squamous cell carcinoma (HNSCC) could be identified that would gain substantial benefit from proton therapy in terms of NTCP. Methods and Materials: For 45 HNSCC patients, intensity modulated radiation therapy (IMRT) was compared to intensity modulated proton therapy (IMPT). Physical dose distributions were evaluated as well as the resulting NTCP values, using modern models for acute mucositis, xerostomia, aspiration, dysphagia, laryngeal edema, and trismus. Patient subgroups were defined based on primary tumor location. Results: Generally, IMPT reduced the NTCP values while keeping similar target coverage for all patients. Subgroup analyses revealed a higher individual reduction of swallowing-related side effects by IMPT for patients with tumors in the upper head and neck area, whereas the risk reduction of acute mucositis was more pronounced in patients with tumors in the larynx region. More patients with tumors in the upper head and neck area had a reduction in NTCP of more than 10%. Conclusions: Subgrouping can help to identify patients who may benefit more than others from the use of IMPT and thus can be a useful tool for preselecting patients in clinics with limited proton therapy resources. Because the individual benefit differs within a subgroup, the relative merits should additionally be evaluated by individual treatment plan comparisons.
An improved cellular automaton traffic model considering gap-dependent delay probability
NASA Astrophysics Data System (ADS)
Li, Qi-Lang; Wang, Bing-Hong; Liu, Mu-Ren
2011-04-01
In this paper, the delay probability of the original Nagel and Schreckenberg (NS) model is modified to simulate one-lane traffic flow: the delay probability of a vehicle depends on the gap ahead of it. Simulation results show that the structure of the fundamental diagram of the new model depends sensitively on the values of the delay probability. In comparison with the NS model, the fundamental diagram of the new model is more consistent with results measured in real traffic, and its velocity distributions are more reasonable.
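The modification described above can be sketched as a NaSch-style update in which the randomization (delay) probability is a function of the gap; the specific `p_of_gap` rule below is a hypothetical illustration, not the authors' calibrated form:

```python
import random

def step(positions, velocities, p_of_gap, road_length, v_max=5):
    """One parallel update of a NaSch-type model on a ring, where the
    randomization (delay) probability depends on the gap ahead."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_v = velocities[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_length
        v = min(velocities[i] + 1, v_max)      # 1. acceleration
        v = min(v, gap)                        # 2. braking (no collisions)
        if random.random() < p_of_gap(gap):    # 3. gap-dependent delay
            v = max(v - 1, 0)
        new_v[i] = v
    new_pos = [(positions[i] + new_v[i]) % road_length for i in range(n)]
    return new_pos, new_v

# Hypothetical delay rule: slow down more often when the gap is small.
p_of_gap = lambda gap: 0.5 if gap < 3 else 0.1
```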
Orton, Andrew; Gordon, John; Vigh, Tyler; Tonkin, Allison; Cannon, George
2017-05-03
There is no consensus standard regarding the placement of the inferior field border in whole brain radiation therapy (WBRT) plans, with most providers choosing to cover the first versus (vs.) second cervical vertebrae (C1 vs. C2). We hypothesize that extending coverage to C2 may increase predicted rates of xerostomia. Fifteen patients underwent computed tomography (CT) simulation; two WBRT plans were then produced, one covering C2 and the other covering C1. The plans were otherwise standard, and patients were prescribed doses of 25, 30 and 37.5 gray (Gy). Dose-volume statistics were obtained and normal tissue complication probabilities (NTCPs) were estimated using the Lyman-Kutcher-Burman model. Mean parotid dose and predicted xerostomia rates were compared for plans covering C2 vs. C1 using a two-sided patient-matched t-test. Plans were also evaluated to determine whether extending the lower field border to cover C2 would result in a violation of commonly accepted dosimetric planning constraints. The mean dose to both parotid glands was significantly higher in WBRT plans covering C2 compared to plans covering C1 for all dose prescriptions (p<0.01). Normal tissue complication probabilities were also significantly higher when covering C2 vs. C1, for all prescribed doses (p<0.01). Predicted median rates of xerostomia ranged from <0.03%-21% for plans covering C2 vs. <0.001%-12% for patients treated with plans covering C1 (p<0.01), dependent on the treatment dose and NTCP model. Plans covering C2 were unable to constrain at least one parotid to <20 Gy in 31% of plans vs. 9% of plans when C1 was covered. A total parotid dose constraint of <25 Gy was violated in 13% of plans covering C2 vs. 0% of plans covering C1. Coverage of C2 significantly increases the mean parotid dose and predicted NTCPs and results in more frequent violation of commonly accepted dosimetric planning constraints.
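The Lyman-Kutcher-Burman NTCP calculation referred to above combines a generalized-EUD (Kutcher-Burman) reduction of the dose-volume histogram with a probit dose-response; a minimal sketch, with illustrative parotid parameters that are not taken from this study:

```python
import math

def geud(doses, volumes, n):
    """Generalized EUD (Kutcher-Burman DVH reduction) from a differential
    DVH: fractional volumes `volumes` at dose levels `doses` (Gy)."""
    total = sum(volumes)
    return sum(v / total * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP: standard-normal CDF of
    t = (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative parotid-like parameters (assumptions, not the study's fit):
risk = lkb_ntcp(doses=[10, 20, 30], volumes=[0.2, 0.5, 0.3],
                td50=28.4, m=0.18, n=1.0)
```

With n = 1 the gEUD reduces to the mean dose, which is why mean parotid dose is the natural dosimetric summary in this setting.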
Probability distributions of molecular observables computed from Markov models.
Noé, Frank
2008-06-28
Molecular dynamics (MD) simulations can be used to estimate transition rates between conformational substates of the simulated molecule. Such an estimation is associated with statistical uncertainty, which depends on the number of observed transitions. In turn, it induces uncertainties in any property computed from the simulation, such as free energy differences or the time scales involved in the system's kinetics. Assessing these uncertainties is essential for testing the reliability of a given observation and also to plan further simulations in such a way that the most serious uncertainties will be reduced with minimal effort. Here, a rigorous statistical method is proposed to approximate the complete statistical distribution of any observable of an MD simulation provided that one can identify conformational substates such that the transition process between them may be modeled with a memoryless jump process, i.e., Markov or Master equation dynamics. The method is based on sampling the statistical distribution of Markov transition matrices that is induced by the observed transition events. It allows physically meaningful constraints to be included, such as sampling only matrices that fulfill detailed balance, or matrices that produce a predefined equilibrium distribution of states. The method is illustrated on μs MD simulations of a hexapeptide for which the distributions and uncertainties of the free energy differences between conformations, the transition matrix elements, and the transition matrix eigenvalues are estimated. It is found that both constraints, detailed balance and predefined equilibrium distribution, can significantly reduce the uncertainty of some observables.
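The core sampling idea can be illustrated without the detailed-balance machinery: with a uniform prior, each row of the transition matrix has an independent Dirichlet posterior given the observed counts, and sampling from it propagates counting uncertainty into any derived observable (here, the implied relaxation timescale of a two-state model; the counts are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition counts between two conformational substates
counts = np.array([[90, 10],
                   [20, 80]])

timescales = []
for _ in range(1000):
    # Posterior sample: each row ~ Dirichlet(counts + 1) (uniform prior).
    # Note: this ignores the detailed-balance constraint used in the paper.
    T = np.array([rng.dirichlet(row + 1) for row in counts])
    lam = np.sort(np.linalg.eigvals(T).real)[0]   # second eigenvalue of 2x2
    timescales.append(-1.0 / np.log(abs(lam)))    # implied timescale (steps)

lo, hi = np.percentile(timescales, [2.5, 97.5])   # 95% uncertainty interval
```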
Dhawan, Andrew; Kaveh, Kamran; Kohandel, Mohammad; Sivaloganathan, Sivabal
2014-11-22
Estimating the required dose in radiotherapy is of crucial importance since the administered dose should be sufficient to eradicate the tumor and at the same time should inflict minimal damage on normal cells. The probability that a given dose and schedule of ionizing radiation eradicates all the tumor cells in a given tissue is called the tumor control probability (TCP), and is often used to compare various treatment strategies used in radiation therapy. In this paper, we aim to investigate the effects of including cell-cycle phase on the TCP by analyzing a stochastic model of a tumor composed of actively dividing cells and quiescent cells with different radiation sensitivities. Moreover, we use a novel numerical approach based on the method of characteristics for partial differential equations, validated by the Gillespie algorithm, to compute the TCP as a function of time. We derive an exact phase-diagram for the steady-state TCP of the model and show that at high, clinically relevant doses of radiation, the distinction between active and quiescent tumor cells (i.e. accounting for cell-cycle effects) becomes of negligible importance in terms of its effect on the TCP curve. However, for very low doses of radiation, these proportions become significant determinants of the TCP. We also present the results of TCP as a function of time for different values of the asymmetric division factor. We observe that our results differ from the results in the literature using similar existing models, even though similar parameter values are used, and the reasons for this are discussed.
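A minimal Gillespie sketch of the underlying idea: simulate a linear birth-death process many times and read off TCP(t) as the fraction of runs in which the tumor is extinct by time t. Rates and cell numbers below are illustrative, not the paper's parameters, and the cell-cycle structure (active vs. quiescent compartments) is omitted:

```python
import random

def gillespie_extinction_time(n0, birth, death, t_max, rng):
    """One Gillespie run of a linear birth-death process; returns the
    extinction time, or None if cells survive past t_max."""
    n, t = n0, 0.0
    while n > 0:
        rate = n * (birth + death)               # total event rate
        t += rng.expovariate(rate)               # time to next event
        if t > t_max:
            return None
        n += 1 if rng.random() < birth / (birth + death) else -1
    return t

def tcp(t, extinction_times, runs):
    """TCP(t): fraction of runs extinct by time t."""
    return sum(1 for e in extinction_times if e is not None and e <= t) / runs

rng = random.Random(1)
runs = 2000
# Illustrative rates with death > birth, i.e. treatment is winning:
times = [gillespie_extinction_time(n0=20, birth=0.1, death=0.5,
                                   t_max=50.0, rng=rng) for _ in range(runs)]
```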
NASA Astrophysics Data System (ADS)
Neupauer, R. M.; Wilson, J. L.
2005-02-01
Backward location and travel time probability density functions characterize the possible former locations (or the source location) of contamination that is observed in an aquifer. For an observed contaminant particle, the backward location probability density function (PDF) describes its position at a fixed time prior to sampling, and the backward travel time probability density function describes the amount of time required for the particle to travel to the sampling location from a fixed upgradient position. The backward probability model has been developed for a single observation of contamination (e.g., Neupauer and Wilson, 1999). In practical situations, contamination is sampled at multiple locations and times, and these additional data provide information that can be used to better characterize the former position of contamination. Through Bayes' theorem we combine the individual PDFs for each observation to obtain a PDF for multiple observations that describes the possible source locations or release times of all observed contaminant particles, assuming they originated from the same instantaneous point source. We show that the multiple-observation probability density function is the normalized product of the single-observation PDFs. The additional information available from multiple observations reduces the variances of the source location and travel time probability density functions and improves the characterization of the contamination source. We apply the backward probability model to a trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR). We use four TCE samples distributed throughout the plume to obtain single-observation and multiple-observation location and travel time PDFs in three dimensions. These PDFs provide information about the possible sources of contamination. Under the assumptions that the existing MMR model is properly calibrated and the conceptual model is correct, the results confirm the two suspected sources of
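The key result, that the multiple-observation PDF is the normalized product of the single-observation PDFs, can be checked numerically in one dimension (the three Gaussian single-observation PDFs below are hypothetical):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)   # candidate 1-D source locations
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical single-observation backward-location PDFs:
pdfs = [gaussian(x, 1.0, 2.0), gaussian(x, -0.5, 1.5), gaussian(x, 0.4, 2.5)]

# Bayes: the multiple-observation PDF is the normalized product.
combined = np.prod(pdfs, axis=0)
combined /= combined.sum() * dx

# The combined PDF is sharper (smaller variance) than any single one.
mean = (x * combined).sum() * dx
var_combined = ((x - mean) ** 2 * combined).sum() * dx
```

The shrinking variance is exactly the "additional information reduces the variances" statement in the abstract: for Gaussians, the combined precision is the sum of the individual precisions.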
Simpson, Daniel R; Song, William Y; Moiseenko, Vitali; Rose, Brent S; Yashar, Catheryn M; Mundt, Arno J; Mell, Loren K
2012-05-01
To test the hypothesis that increased bowel radiation dose is associated with acute gastrointestinal (GI) toxicity in cervical cancer patients undergoing concurrent chemotherapy and intensity-modulated radiation therapy (IMRT), using a previously derived normal tissue complication probability (NTCP) model. Fifty patients with Stage I-III cervical cancer undergoing IMRT and concurrent weekly cisplatin were analyzed. Acute GI toxicity was graded using the Radiation Therapy Oncology Group scale, excluding upper GI events. A logistic model was used to test correlations between acute GI toxicity and bowel dosimetric parameters. The primary objective was to test the association between Grade ≥2 GI toxicity and the volume of bowel receiving ≥45 Gy (V(45)) using the logistic model. Twenty-three patients (46%) had Grade ≥2 GI toxicity. The mean (SD) V(45) was 143 mL (99). The mean V(45) values for patients with and without Grade ≥2 GI toxicity were 176 vs. 115 mL, respectively. Twenty patients (40%) had V(45) >150 mL. The proportion of patients with Grade ≥2 GI toxicity with and without V(45) >150 mL was 65% vs. 33% (p = 0.03). Logistic model parameter estimates V50 and γ were 161 mL (95% confidence interval [CI] 60-399) and 0.31 (95% CI 0.04-0.63), respectively. On multivariable logistic regression, increased V(45) was associated with an increased odds of Grade ≥2 GI toxicity (odds ratio 2.19 per 100 mL, 95% CI 1.04-4.63, p = 0.04). Our results support the hypothesis that increasing bowel V(45) is correlated with increased GI toxicity in cervical cancer patients undergoing IMRT and concurrent cisplatin. Reducing bowel V(45) could reduce the risk of Grade ≥2 GI toxicity by approximately 50% per 100 mL of bowel spared. Copyright © 2012 Elsevier Inc. All rights reserved.
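Assuming the common logistic parameterization NTCP = 1 / (1 + (V50/V)^(4*gamma50)) with the reported point estimates (V50 = 161 mL, gamma = 0.31), the dose-response above can be sketched as:

```python
def logistic_ntcp(v45_ml, v50=161.0, gamma50=0.31):
    """Logistic NTCP as a function of bowel V45 (mL).  V50 and gamma50
    are the point estimates reported in the study; the parameterization
    NTCP = 1 / (1 + (V50/V)^(4*gamma50)) is itself an assumption here."""
    return 1.0 / (1.0 + (v50 / v45_ml) ** (4.0 * gamma50))

# Mean V45 of the toxicity vs. no-toxicity groups (176 vs. 115 mL):
risk_high = logistic_ntcp(176.0)
risk_low = logistic_ntcp(115.0)
```

At the cohort mean V45 of 143 mL this parameterization gives a risk of about 46%, consistent with the observed toxicity rate; at the group means of 176 and 115 mL it gives roughly 53% vs. 40%.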
Skew-normal antedependence models for skewed longitudinal data.
Chang, Shu-Ching; Zimmerman, Dale L
2016-06-01
Antedependence models, also known as transition models, have proven to be useful for longitudinal data exhibiting serial correlation, especially when the variances and/or same-lag correlations are time-varying. Statistical inference procedures associated with normal antedependence models are well-developed and have many nice properties, but they are not appropriate for longitudinal data that exhibit considerable skewness. We propose two direct extensions of normal antedependence models to skew-normal antedependence models. The first is obtained by imposing antedependence on a multivariate skew-normal distribution, and the second is a sequential autoregressive model with skew-normal innovations. For both models, necessary and sufficient conditions for pth-order antedependence are established, and likelihood-based estimation and testing procedures for models satisfying those conditions are developed. The procedures are applied to simulated data and to real data from a study of cattle growth.
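The second model class, a sequential autoregressive model with skew-normal innovations, is straightforward to simulate; the sketch below uses the standard two-Gaussian representation of the skew-normal distribution (parameters are illustrative):

```python
import math, random

rng = random.Random(3)

def skew_normal(alpha):
    """Sample a standard skew-normal variate with shape alpha, via the
    representation Z = delta*|U1| + sqrt(1 - delta^2)*U2."""
    delta = alpha / math.sqrt(1.0 + alpha ** 2)
    u1, u2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return delta * abs(u1) + math.sqrt(1.0 - delta ** 2) * u2

def ar1_skew(n, phi, alpha):
    """First-order antedependent (AR(1)-type) series with skew-normal
    innovations; a sketch of the second model class described above."""
    y = [skew_normal(alpha)]
    for _ in range(n - 1):
        y.append(phi * y[-1] + skew_normal(alpha))
    return y

series = ar1_skew(5000, phi=0.6, alpha=4.0)
```

Because the innovations are right-skewed (alpha > 0), the simulated series inherits a positive third central moment, the kind of skewness that plain normal antedependence models cannot capture.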
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
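One way to formalize the unknown-date case is to average the conditional renewal probability over the stationary distribution of elapsed time since the last event (density S(t)/mu, where S is the recurrence-interval survival function and mu the mean recurrence interval); this reduces to P = (1/mu) * integral_0^T S(t) dt, which can be compared directly with the Poisson value 1 - exp(-T/mu). The lognormal recurrence parameters below are illustrative, not the paper's:

```python
import math

def renewal_prob_unknown_date(t_forecast, survival, mean_recurrence, n=10000):
    """P(event within t_forecast) when the date of the last event is
    completely unknown: (1/mu) * integral_0^T S(t) dt (midpoint rule)."""
    dt = t_forecast / n
    integral = sum(survival((i + 0.5) * dt) for i in range(n)) * dt
    return integral / mean_recurrence

# Illustrative lognormal recurrence model: mean 100 yr, log-SD 0.5
# (these parameter values are assumptions, not the paper's).
sigma = 0.5
mean_recurrence = 100.0
mu_log = math.log(mean_recurrence) - 0.5 * sigma ** 2

def survival(t):
    z = (math.log(t) - mu_log) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

T = 30.0   # forecast duration, ~30% of the mean recurrence interval
p_renewal = renewal_prob_unknown_date(T, survival, mean_recurrence)
p_poisson = 1.0 - math.exp(-T / mean_recurrence)
```

For these illustrative values the renewal probability exceeds the Poisson value by roughly 15%, in line with the abstract's ">10% when the forecast duration exceeds ~20% of the mean recurrence interval".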
Cold and hot cognition: quantum probability theory and realistic psychological modeling.
Corr, Philip J
2013-06-01
Typically, human decision making is emotionally "hot" and does not conform to "cold" classical probability (CP) theory. As quantum probability (QP) theory emphasises order, context, superimposition states, and nonlinear dynamic effects, one of its major strengths may be its power to unify formal modeling and realistic psychological theory (e.g., information uncertainty, anxiety, and indecision, as seen in the Prisoner's Dilemma).
ERIC Educational Resources Information Center
Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan
2014-01-01
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
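A concrete instance of the (inverted) S-shaped probability weighting family discussed above is the one-parameter Tversky-Kahneman form (one common choice; the study's exact weighting function may differ):

```python
def weight(p, gamma):
    """Tversky-Kahneman probability weighting function: inverted-S shape
    for gamma < 1 (overweights small p, underweights large p)."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Individual differences: each subject gets their own gamma, as in the
# hierarchical treatment described above (values illustrative).
gammas = [0.4, 0.6, 0.8, 1.0]
curves = {g: [weight(p / 10, g) for p in range(1, 10)] for g in gammas}
```

In a hierarchical Bayesian treatment, each subject's gamma would be drawn from a weakly informative group-level prior rather than fixed, which is exactly the individual-differences structure the abstract advocates.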
Rønjom, Marianne F; Brink, Carsten; Lorenzen, Ebbe L; Hegedüs, Laszlo; Johansen, Jørgen
2015-01-01
To examine the variation of risk estimates of radiation-induced hypothyroidism (HT) from our previously developed normal tissue complication probability (NTCP) model in patients with head and neck squamous cell carcinoma (HNSCC) in relation to variability in delineation of the thyroid gland. In a previous study for development of an NTCP model for HT, the thyroid gland was delineated in 246 treatment plans of patients with HNSCC. Fifty of these plans were randomly chosen for re-delineation for a study of the intra- and inter-observer variability of thyroid volume, Dmean and estimated risk of HT. Bland-Altman plots were used for assessment of the systematic (mean) and random [standard deviation (SD)] variability of the three parameters, and a method for displaying the spatial variation in delineation differences was developed. Intra-observer variability resulted in a mean difference in thyroid volume and Dmean of 0.4 cm(3) (SD ± 1.6) and -0.5 Gy (SD ± 1.0), respectively, and 0.3 cm(3) (SD ± 1.8) and 0.0 Gy (SD ± 1.3) for inter-observer variability. The corresponding mean differences of NTCP values for radiation-induced HT due to intra- and inter-observer variations were small and statistically insignificant, -0.4% (SD ± 6.0) and -0.7% (SD ± 4.8), respectively, although, as the SDs show, the difference in estimated NTCP was large for some patients. For the entire study population, the variation in predicted risk of radiation-induced HT in head and neck cancer was small and our NTCP model was robust against observer variations in delineation of the thyroid gland. However, for the individual patient, there may be large differences in estimated risk, which calls for precise delineation of the thyroid gland to obtain correct dose and NTCP estimates for optimized treatment planning in the individual patient.
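The Bland-Altman statistics used above (systematic mean difference, random SD, limits of agreement) are simple to compute; the paired thyroid volumes below are hypothetical:

```python
import statistics

def bland_altman(measure_a, measure_b):
    """Bland-Altman agreement statistics for paired measurements:
    systematic difference (mean), random variability (SD), and the
    95% limits of agreement (mean +/- 1.96*SD)."""
    diffs = [a - b for a, b in zip(measure_a, measure_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired thyroid volumes (cm^3) from two delineations:
first  = [12.1, 9.8, 15.3, 11.0, 13.4]
second = [11.5, 10.2, 14.8, 11.4, 12.9]
bias, sd, limits = bland_altman(first, second)
```

A small bias with a wide SD, as reported in the abstract, means the method is unbiased on average yet unreliable for individual patients, which is precisely the study's conclusion.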
A FRAX® model for the assessment of fracture probability in Belgium.
Johansson, H; Kanis, J A; McCloskey, E V; Odén, A; Devogelaer, J-P; Kaufman, J-M; Neuprez, A; Hiligsmann, M; Bruyere, O; Reginster, J-Y
2011-02-01
A country-specific FRAX® model was developed from the epidemiology of fracture and death in Belgium. Fracture probabilities were identified that corresponded to currently accepted reimbursement thresholds. The objective of this study was to evaluate a Belgian version of the WHO fracture risk assessment (FRAX®) tool to compute 10-year probabilities of osteoporotic fracture in men and women. A particular aim was to determine fracture probabilities that corresponded to the reimbursement policy for the management of osteoporosis in Belgium and the clinical scenarios that gave equivalent fracture probabilities. Fracture probabilities were computed from published data on the fracture and death hazards in Belgium. Probabilities took account of age, sex, the presence of clinical risk factors and femoral neck bone mineral density (BMD). Fracture probabilities were determined that were equivalent to intervention (reimbursement) thresholds currently used in Belgium. Fracture probability increased with age, lower BMI, decreasing BMD T-score and all clinical risk factors used alone or combined. The 10-year probabilities of a major osteoporosis-related fracture that corresponded to current reimbursement guidelines ranged from approximately 7.5% at the age of 50 years to 26% at the age of 80 years where a prior fragility fracture was used as an intervention threshold. For women at the threshold of osteoporosis (femoral neck T-score = -2.5 SD), the respective probabilities ranged from 7.4% to 15%. Several combinations of risk-factor profiles were identified that gave similar or higher fracture probabilities than those currently accepted for reimbursement in Belgium. The FRAX® tool has been used to identify possible thresholds for therapeutic intervention in Belgium, based on equivalence of risk with current guidelines. The FRAX® model supports a shift from the current DXA-based intervention strategy, towards a strategy based on fracture probability of a major osteoporotic fracture, which in turn may improve the identification of patients at increased fracture risk.
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2000-01-01
We adapted a removal model to estimate detection probability during point count surveys. The model assumes one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), whereas species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is therefore critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment provided insight into the propagation of parameter uncertainty due to limited observation data. To examine the developed model's capacity to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, which indicate anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizing parameter uncertainty via the probability estimation process.
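The underlying DRASTIC index is a weighted sum of rated hydrogeological parameters; a sketch with the standard weights (the study's probability-based variant characterizes six of the parameters probabilistically, and the site ratings below are purely illustrative):

```python
# Standard DRASTIC weights: Depth to water, net Recharge, Aquifer media,
# Soil media, Topography, Impact of vadose zone, hydraulic Conductivity.
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """DRASTIC vulnerability index: weighted sum of parameter ratings
    (each rating is typically on a 1-10 scale)."""
    return sum(weights[k] * r for k, r in ratings.items())

# Illustrative site ratings; higher index -> higher vulnerability.
site = {"D": 9, "R": 6, "A": 6, "S": 5, "T": 10, "I": 8, "C": 4}
index = drastic_index(site)
```

The probability-based variant replaces each fixed rating with a kriging-derived distribution, so the index itself becomes a random quantity from which risk categories can be assigned.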
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
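The log-normal maximum-likelihood fit has a closed form (mean and SD of the log-transformed maxima), and a per-century exceedance rate follows from the fitted tail probability; the storm catalog and event rate below are synthetic stand-ins, not the -Dst data set:

```python
import math, random

def lognormal_mle(samples):
    """Closed-form ML fit of a log-normal: mu and sigma are the mean and
    (population) SD of the log-transformed samples."""
    logs = [math.log(s) for s in samples]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
    return mu, sigma

def exceedance_per_century(threshold, mu, sigma, events_per_century):
    """Expected number of storms per century with maxima >= threshold."""
    z = (math.log(threshold) - mu) / sigma
    tail = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
    return events_per_century * tail

# Synthetic storm maxima (nT), standing in for the observed catalog;
# 20 storms per century is likewise an illustrative rate.
rng = random.Random(42)
maxima = [rng.lognormvariate(4.7, 0.55) for _ in range(600)]
mu, sigma = lognormal_mle(maxima)
rate_850 = exceedance_per_century(850.0, mu, sigma, events_per_century=20.0)
```

Bootstrap confidence limits, as in the paper, would come from refitting `lognormal_mle` on resampled catalogs and collecting the resulting rates.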
Statistical eye model for normal eyes.
Rozema, Jos J; Atchison, David A; Tassignon, Marie-José
2011-06-23
To create a binocular statistical eye model based on previously measured ocular biometric data. Thirty-nine parameters were determined for a group of 127 healthy subjects (37 male, 90 female; 96.8% Caucasian) with an average age of 39.9 ± 12.2 years and spherical equivalent refraction of -0.98 ± 1.77 D. These parameters described the biometry of both eyes and the subjects' age. Missing parameters were complemented by data from a previously published study. After confirmation of the Gaussian shape of their distributions, these parameters were used to calculate mean and covariance matrices, which in turn defined a multivariate Gaussian distribution. From this distribution, random biometric data could be generated and then sampled to create a realistic population of random eyes. All parameters had Gaussian distributions, with the exception of the parameters that describe total refraction (i.e., three parameters per eye). After these non-Gaussian parameters were omitted from the model, the generated data were found to be statistically indistinguishable from the original data for the remaining 33 parameters (TOST [two one-sided t tests]; P < 0.01). Parameters derived from the generated data were also statistically indistinguishable from those calculated with the original data (P > 0.05). The only exception was the lens refractive index, for which the generated data had a significantly larger SD. A statistical eye model can describe the biometric variations found in a population and is a useful addition to the classic eye models.
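The generation step described above, drawing synthetic "random eyes" from a multivariate Gaussian defined by a mean vector and covariance matrix, can be sketched with a toy three-parameter model (values are illustrative, not the study's 33-parameter fit):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy eye model: three biometric parameters with illustrative means,
# variances, and cross-correlations (e.g. longer eyes tend to have
# flatter corneas, hence the negative covariance).
params = ["axial_length_mm", "corneal_power_D", "acd_mm"]
mean = np.array([23.9, 43.1, 3.2])
cov = np.array([[ 1.00, -0.45,  0.30],
                [-0.45,  2.25, -0.10],
                [ 0.30, -0.10,  0.16]])

# Generate a population of synthetic "random eyes":
eyes = rng.multivariate_normal(mean, cov, size=5000)

sample_mean = eyes.mean(axis=0)
sample_cov = np.cov(eyes.T)
```

The check that `sample_mean` and `sample_cov` recover the inputs mirrors the study's validation that generated data are statistically indistinguishable from the original data.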
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2002-01-01
Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
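A sketch of the removal-model likelihood for the 3-, 2-, and 5-min intervals described above: assuming a constant per-minute detection probability q, the conditional probability of first detection in each interval is multinomial, and q can be estimated by maximizing the likelihood (here by a simple grid search rather than the paper's formal ML machinery; the counts are hypothetical):

```python
import math

intervals = [(0, 3), (3, 5), (5, 10)]   # minutes, as in the 10-min protocol

def loglik(q, counts):
    """Conditional multinomial log-likelihood of first-detection counts,
    assuming a constant per-minute detection probability q."""
    s = 1.0 - q                          # per-minute miss probability
    p_total = 1.0 - s ** 10              # detected at all within 10 min
    ll = 0.0
    for (a, b), n in zip(intervals, counts):
        cell = (s ** a - s ** b) / p_total   # first detected in (a, b]
        ll += n * math.log(cell)
    return ll

def fit_q(counts, grid=2000):
    """Grid-search MLE for q (a sketch of the estimation step)."""
    qs = [(i + 1) / (grid + 2) for i in range(grid)]
    return max(qs, key=lambda q: loglik(q, counts))

counts = [60, 20, 20]                    # birds first detected per interval
q_hat = fit_q(counts)
detect_10min = 1.0 - (1.0 - q_hat) ** 10   # overall detection probability
```

Species-, time-, and observer-specific detection, as in the abstract, would be handled by fitting separate q values (or covariate models) and comparing them with model selection criteria.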
Study on Effects of the Stochastic Delay Probability for 1d CA Model of Traffic Flow
NASA Astrophysics Data System (ADS)
Xue, Yu; Chen, Yan-Hong; Kong, Ling-Jiang
Considering the effects of different factors on the stochastic delay probability, we classify the delay probability into three cases. The first case, corresponding to the brake state, has a large delay probability when the anticipated velocity is larger than the gap between successive cars. The second, corresponding to the follow-the-leader rule, has an intermediate delay probability when the anticipated velocity equals the gap. Finally, the third case is acceleration, which has the minimum delay probability. The fundamental diagram obtained by numerical simulation shows different properties from that of the NaSch model: there exist two distinct regions, corresponding to the coexistence state and the jamming state, respectively.
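A minimal sketch of a NaSch-style parallel update with the three delay-probability cases described above; the three probability values and the lattice parameters are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(7)

V_MAX, ROAD = 5, 100
P_BRAKE, P_FOLLOW, P_ACC = 0.6, 0.3, 0.05  # assumed delay probabilities for the three cases

def step(pos, vel):
    """One parallel update on a ring road with a state-dependent delay probability."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % ROAD  # empty cells to the car ahead
        v = min(vel[i] + 1, V_MAX)                    # anticipated velocity
        if v > gap:        # brake state
            p = P_BRAKE
        elif v == gap:     # follow-the-leader rule
            p = P_FOLLOW
        else:              # free acceleration
            p = P_ACC
        v = min(v, gap)                               # safety braking
        if random.random() < p:                       # stochastic delay
            v = max(v - 1, 0)
        new_vel.append(v)
    new_pos = [(p0 + v) % ROAD for p0, v in zip(pos, new_vel)]
    return new_pos, new_vel

pos, vel = list(range(0, ROAD, 10)), [0] * 10  # 10 cars, evenly spaced, at rest
for _ in range(50):
    pos, vel = step(pos, vel)
```

Averaging flow over density sweeps of such a simulation yields the fundamental diagram the abstract compares with the NaSch model.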
[A FRAX model for the assessment of fracture probability in Belgium].
Neuprez, A; Johansson, H; Kanis, J A; McCloskey, E V; Odén, A; Bruyère, O; Hiligsmann, M; Devogelaer, J P; Kaufman, J M; Reginster, J Y
2009-12-01
The objective of this study was to evaluate a Belgian version of the WHO fracture risk assessment (FRAX) tool to compute 10-year probabilities of osteoporotic fracture in men and women. A particular aim was to determine fracture probabilities that corresponded to the reimbursement policy for the management of osteoporosis in Belgium and the clinical scenarios that gave equivalent fracture probabilities. Fracture probabilities were computed from published data on the fracture and death hazards in Belgium. Probabilities took account of age, sex, the presence of clinical risk factors and femoral neck BMD. Fracture probabilities were determined that were equivalent to intervention (reimbursement) thresholds currently used in Belgium. Fracture probability increased with age, lower BMI, decreasing BMD T-Score, and all clinical risk factors used alone or combined. The FRAX tool has been used to identify possible thresholds for therapeutic intervention in Belgium, based on equivalence of risk with current guidelines. The FRAX model supports a shift from the current DXA based intervention strategy, towards a strategy based on fracture probability of a major osteoporotic fracture that in turn may improve identification of patients at increased fracture risk. The approach will need to be supported by health economic analyses.
Simpson, Daniel R.; Song, William Y.; Moiseenko, Vitali; Rose, Brent S.; Yashar, Catheryn M.; Mundt, Arno J.; Mell, Loren K.
2012-05-01
Purpose: To test the hypothesis that increased bowel radiation dose is associated with acute gastrointestinal (GI) toxicity in cervical cancer patients undergoing concurrent chemotherapy and intensity-modulated radiation therapy (IMRT), using a previously derived normal tissue complication probability (NTCP) model. Methods: Fifty patients with Stage I-III cervical cancer undergoing IMRT and concurrent weekly cisplatin were analyzed. Acute GI toxicity was graded using the Radiation Therapy Oncology Group scale, excluding upper GI events. A logistic model was used to test correlations between acute GI toxicity and bowel dosimetric parameters. The primary objective was to test the association between Grade ≥2 GI toxicity and the volume of bowel receiving ≥45 Gy (V45) using the logistic model. Results: Twenty-three patients (46%) had Grade ≥2 GI toxicity. The mean (SD) V45 was 143 mL (99). The mean V45 values for patients with and without Grade ≥2 GI toxicity were 176 vs. 115 mL, respectively. Twenty patients (40%) had V45 >150 mL. The proportion of patients with Grade ≥2 GI toxicity with and without V45 >150 mL was 65% vs. 33% (p = 0.03). Logistic model parameter estimates V50 and γ were 161 mL (95% confidence interval [CI] 60-399) and 0.31 (95% CI 0.04-0.63), respectively. On multivariable logistic regression, increased V45 was associated with increased odds of Grade ≥2 GI toxicity (odds ratio 2.19 per 100 mL, 95% CI 1.04-4.63, p = 0.04). Conclusions: Our results support the hypothesis that increasing bowel V45 is correlated with increased GI toxicity in cervical cancer patients undergoing IMRT and concurrent cisplatin. Reducing bowel V45 could reduce the risk of Grade ≥2 GI toxicity by approximately 50% per 100 mL of bowel spared.
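The fitted V50 = 161 mL and γ = 0.31 can be plugged into one common logistic volume-response parameterization, sketched below; the abstract does not state the exact functional form the authors used, so this form (with γ as the normalized slope at V50) is an assumption.

```python
import math

def ntcp_logistic(v, v50=161.0, gamma=0.31):
    """Logistic volume-response curve; gamma is the normalized slope at v50 (mL)."""
    return 1.0 / (1.0 + (v50 / v) ** (4.0 * gamma))

risk_high = ntcp_logistic(176.0)  # mean V45 of patients with Grade >=2 toxicity
risk_low = ntcp_logistic(115.0)   # mean V45 of patients without
```

By construction the curve passes through 50% complication probability at V45 = V50 and rises monotonically with V45.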
Fuss, Ian G; Navarro, Daniel J
2013-10-01
In recent years quantum probability models have been used to explain many aspects of human decision making, and as such quantum models have been considered a viable alternative to Bayesian models based on classical probability. One criticism that is often leveled at both kinds of models is that they lack a clear interpretation in terms of psychological mechanisms. In this paper we discuss the mechanistic underpinnings of a quantum walk model of human decision making and response time. The quantum walk model is compared to standard sequential sampling models, and the architectural assumptions of both are considered. In particular, we show that the quantum model has a natural interpretation in terms of a cognitive architecture that is both massively parallel and involves both co-operative (excitatory) and competitive (inhibitory) interactions between units. Additionally, we introduce a family of models that includes aspects of the classical and quantum walk models.
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
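The zero-inflated Beta density at the core of the model can be sketched as a point mass at zero mixed with a continuous Beta component; the parameter values below are illustrative, and the Bayesian mixed-model layers are omitted.

```python
import math

def zib_pdf(y, pi0, a, b):
    """Zero-inflated Beta density on [0, 1): point mass pi0 at 0, Beta(a, b) otherwise."""
    if y == 0.0:
        return pi0
    beta_norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return (1.0 - pi0) * y ** (a - 1) * (1.0 - y) ** (b - 1) / beta_norm

# Total probability: discrete mass plus midpoint-rule integral of the continuous part
pi0, a, b = 0.3, 2.0, 5.0
grid = [(i + 0.5) / 2000 for i in range(2000)]
total = pi0 + sum(zib_pdf(y, pi0, a, b) for y in grid) / 2000
```

The discrete and continuous parts together integrate to one, which is what lets the scaled log10(Pc) values with "effective zero" observations be modeled in a single likelihood.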
A stochastic model for the probability of malaria extinction by mass drug administration.
Pemberton-Ross, Peter; Chitnis, Nakul; Pothin, Emilie; Smith, Thomas A
2017-09-18
Mass drug administration (MDA) has been proposed as an intervention to achieve local extinction of malaria. Although its effect on the reproduction number is short lived, extinction may subsequently occur in a small population due to stochastic fluctuations. This paper examines how the probability of stochastic extinction depends on population size, MDA coverage and the reproduction number under control, Rc. A simple compartmental model is developed which is used to compute the probability of extinction using probability generating functions. The expected time to extinction in small populations after MDA for various scenarios in this model is calculated analytically. The results indicate that MDA can lead to local extinction only under restrictive conditions. Firstly, Rc must be sustained at Rc < 1.2 to avoid the rapid re-establishment of infections in the population. Secondly, the MDA must produce effective cure rates of >95% to have a non-negligible probability of successful elimination. Stochastic fluctuations only significantly affect the probability of extinction in populations of about 1000 individuals or less. The expected time to extinction via stochastic fluctuation is less than 10 years only in populations less than about 150 individuals. Clustering of secondary infections and of MDA distribution both contribute positively to the potential probability of success, indicating that MDA would most effectively be administered at the household level. There are very limited circumstances in which MDA will lead to local malaria elimination with a substantial probability.
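The probability-generating-function step can be sketched by iterating the fixed-point equation q = G(q) for a branching process; the geometric offspring distribution below is an illustrative stand-in, not the paper's compartmental model.

```python
def extinction_prob(rc, n_iter=500):
    """Extinction probability of a branching process by iterating q = G(q).

    Illustrative offspring law: geometric on {0, 1, ...} with mean rc,
    so P(k) = p * (1 - p)**k with p = 1 / (1 + rc).
    """
    p = 1.0 / (1.0 + rc)

    def G(s):  # probability generating function of the offspring number
        return p / (1.0 - (1.0 - p) * s)

    q = 0.0
    for _ in range(n_iter):
        q = G(q)  # converges to the smallest root of G(q) = q
    return q
```

For this offspring law the iteration converges to min(1, 1/rc): sustained Rc < 1 guarantees eventual extinction, while Rc = 1.2 still leaves an extinction probability of about 0.83, illustrating why modest Rc values can suffice in small populations.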
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Nugmanov, I. S.
2016-08-01
The problem of probability density function estimation for a random process is one of the most common in practice. There are several methods to solve this problem. Presented laboratory work uses methods of the mathematical statistics to detect patterns in the realization of random process. On the basis of ergodic theory, we construct algorithm for estimating univariate probability density distribution function for a random process. Correlational analysis of realizations is applied to estimate the necessary size of the sample and the time of observation. Hypothesis testing for two probability distributions (normal and Cauchy) is used on the experimental data, using χ2 criterion. To facilitate understanding and clarity of the problem solved, we use ELVIS II platform and LabVIEW software package that allows us to make the necessary calculations, display results of the experiment and, most importantly, to control the experiment. At the same time students are introduced to a LabVIEW software package and its capabilities.
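The χ² goodness-of-fit step of the lab exercise can be sketched as follows, assuming a synthetic normal sample in place of the ELVIS II measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=2000)  # stands in for the measured realization

# Bin the sample and compare observed counts with normal-law expectations (chi^2 test)
edges = np.array([-np.inf, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, np.inf])
observed, _ = np.histogram(sample, bins=edges)
expected = np.diff(stats.norm.cdf(edges)) * sample.size
chi2, p_value = stats.chisquare(observed, expected)
```

Repeating the test with a Cauchy reference distribution (via `stats.cauchy.cdf`) would reproduce the two-hypothesis comparison described in the abstract.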
NASA Astrophysics Data System (ADS)
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
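A minimal sketch of the approach: fit a nonstationary parametric density by maximum likelihood and extrapolate it forward. The Gaussian-with-drifting-mean form and the synthetic series are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(100, dtype=float)
# Synthetic declining series standing in for, e.g., sea-ice extent (illustrative data)
x = 5.0 - 0.03 * t + rng.normal(0.0, 0.5, size=t.size)

def nll(theta):
    """Negative log-likelihood of a Gaussian with linearly drifting mean."""
    a, b, log_s = theta
    mu = a + b * t
    s = np.exp(log_s)
    return 0.5 * np.sum(((x - mu) / s) ** 2) + t.size * log_s

res = minimize(nll, x0=np.array([0.0, 0.0, 0.0]))
a_hat, b_hat, log_s_hat = res.x
forecast_mean = a_hat + b_hat * 150.0  # extrapolated density centre at t = 150
```

The fitted parameter uncertainty (e.g. from the Hessian of the negative log-likelihood) would then propagate into the forecast density, as the abstract emphasizes.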
Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice
NASA Astrophysics Data System (ADS)
Chen, Haiyan; Zhang, Fuji
2013-08-01
In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give the exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with certain given boundary condition, which was found from numerical evidence by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990)], 10.1051/jphys:0199000510110107700 but without a proof.
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the probability of large earthquakes in the SFBR.
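The 30-yr Poisson probabilities quoted above follow from the standard rate-to-probability conversion, sketched here with the WGCEP (1999/2002) value of 0.60 as the worked example.

```python
import math

def poisson_prob(rate, years=30.0):
    """P(at least one event in `years`) for a Poisson process with annual rate `rate`."""
    return 1.0 - math.exp(-rate * years)

# Annual rate consistent with the quoted 30-yr Poisson probability of 0.60
rate = -math.log(1.0 - 0.60) / 30.0
```

Inverting the same formula for the empirical model's 0.42 would give a correspondingly lower equivalent annual rate.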
Sulis, William H
2017-10-01
Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.
Discrete Latent Markov Models for Normally Distributed Response Data
ERIC Educational Resources Information Center
Schmittmann, Verena D.; Dolan, Conor V.; van der Maas, Han L. J.; Neale, Michael C.
2005-01-01
Van de Pol and Langeheine (1990) presented a general framework for Markov modeling of repeatedly measured discrete data. We discuss analogous single-indicator models for normally distributed responses. In contrast to discrete models, which have been studied extensively, analogous continuous response models have hardly been considered. These…
NASA Astrophysics Data System (ADS)
Murnane, R. J.; Elsner, J.
2012-12-01
Catastrophe risk models (cat models) may be used to estimate loss exceedance probabilities for a set of exposure subject to a hazard. Most cat models are quite complex. Here I describe a simplistic version of a US hurricane cat model and assume the set of exposure is the United States and the hazard is hurricanes striking the Gulf and east coasts of the US. Exceedance probabilities for total US economic loss from landfalling hurricanes are estimated using a wind speed exceedance probability model coupled to a model of total economic loss as a function of wind speed. Wind speed exceedance probabilities are derived from the historical record of maximum winds for hurricanes at landfall. The probabilities can be conditioned on a number of climatic covariates such as the North Atlantic Oscillation, the El Niño-Southern Oscillation, and North Atlantic sea surface temperatures. Loss as a function of wind speed is based on a quantile regression analysis that suggests there is an exponential relationship between hurricane wind speed at landfall in the United States and total economic loss. Within the uncertainty of the quantile regression results, total economic loss increases with wind speed at a rate of ~5% per meter per second. Estimates of loss for different return periods for the US, Gulf coast, and other regions will be presented.
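The exponential loss-wind relationship (~5% increase per m/s) from the quantile regression can be sketched as follows; the reference wind speed is an illustrative assumption, not a value from the abstract.

```python
import math

def loss_ratio(v, v_ref=33.0, growth=0.05):
    """Relative economic loss, growing ~5% per m/s of landfall wind above v_ref (m/s)."""
    return math.exp(growth * (v - v_ref))
```

Coupling this curve to a wind-speed exceedance distribution, conditioned on covariates such as ENSO or Atlantic SSTs, yields the loss exceedance probabilities the abstract describes.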
Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach
Guan, Biing T.; Gertner, George Z.; Alan B.
1998-05-01
…coverage based on past coverage. A literature survey was conducted to identify artificial neural network analysis techniques applicable for
Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization
NASA Astrophysics Data System (ADS)
Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad
2017-02-01
Malay Twitter messages deviate markedly from the standard language. Malay Tweets are widely used by Twitter users, especially in the Malay archipelago. It is therefore important to build a normalization system that can translate Malay Tweet language into standard Malay. Most research in natural language processing has focused on normalizing English Twitter messages, while few studies have addressed the normalization of Malay Tweets. This paper proposes an approach to normalize Malay Twitter messages based on hybrid dictionary modelling methods. The approach normalizes noisy Malay Twitter messages such as colloquial language, novel words, and interjections into standard Malay. The research uses a language model and an N-gram model.
NASA Astrophysics Data System (ADS)
Matias-Peralta, Hazel Monica; Ghodsi, Alireza; Shitan, Mahendran; Yusoff, Fatimah Md.
Copepods are the most abundant microcrustaceans in marine waters and are a major food resource for many commercial fish species. In addition, changes in the distribution and population composition of copepods may serve as an indicator of global climate change. It is therefore important to model copepod distributions in different ecosystems. Copepod samples were collected from three different ecosystems (a seagrass area, a cage aquaculture area, and coastal waters off a shrimp aquaculture farm) along the coastal waters of the Malacca Straits over a one-year period. In this study the major statistical analysis consisted of fitting different probability models. This paper highlights the fitting of probability distributions and discusses the adequacy of the fitted models. The fitted models enable one to make probability statements about the distribution of copepods in the three ecosystems.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures including the following: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates were minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
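Bootstrap imputation of condition status from model-derived probabilities can be sketched as repeated Bernoulli draws per patient; the probability distribution and cohort size below are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
# Illustrative model-derived probabilities of severe renal failure per patient
p_model = rng.uniform(0.0, 0.2, size=n)

# Bootstrap imputation: repeatedly draw each patient's disease status from p_model
B = 200
prev = np.empty(B)
for b in range(B):
    status = rng.random(n) < p_model
    prev[b] = status.mean()

prevalence_est = prev.mean()       # point estimate of prevalence
prevalence_sd = prev.std(ddof=1)   # imputation uncertainty
```

Unlike hard categorization at a threshold, this preserves each patient's probability, which is why it avoids the misclassification bias described above.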
Franceschetti, Donald R; Gire, Elizabeth
2013-06-01
Quantum probability theory offers a viable alternative to classical probability, although there are some ambiguities inherent in transferring the quantum formalism to a less determined realm. A number of physicists are now looking at the applicability of quantum ideas to the assessment of physics learning, an area particularly suited to quantum probability ideas.
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices
Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling
2008-01-01
The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...
Marewski, Julian N; Hoffrage, Ulrich
2013-06-01
A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.
Effects with Multiple Causes: Evaluating Arguments Using the Subjective Probability Model.
ERIC Educational Resources Information Center
Allen, Mike; Burrell, Nancy; Egan, Tony
2000-01-01
Finds that the subjective probability model continues to provide some degree of prediction for beliefs (of an individual for circumstances of a single event with multiple causes) prior to the exposure to a message, but that after exposure to a persuasive message, the model did not maintain the same level of accuracy of prediction. Offers several…
A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.
ERIC Educational Resources Information Center
Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven
2003-01-01
Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)
Construct Reliability of the Probability of Adoption of Change (PAC) Model.
ERIC Educational Resources Information Center
Creamer, E. G.; And Others
1991-01-01
Describes Probability of Adoption of Change (PAC) model, theoretical paradigm for explaining likelihood of successful adoption of planned change initiatives in student affairs. Reports on PAC construct reliability from survey of 62 Chief Student Affairs Officers. Discusses two refinements to the model: merger of leadership and top level support…
Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model
ERIC Educational Resources Information Center
Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca
2012-01-01
The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…
Individual-tree probability of survival model for the Northeastern United States
Richard M. Teck; Donald E. Hilt; Donald E. Hilt
1990-01-01
Describes a distance-independent, individual-tree probability of survival model for the Northeastern United States. Survival is predicted using a six-parameter logistic function with species-specific coefficients. Coefficients are presented for 28 species groups. The model accounts for variability in annual survival due to species, tree size, site quality, and the tree...
Identification of Probability Weighted Multiple ARX Models and Its Application to Behavior Analysis
NASA Astrophysics Data System (ADS)
Taguchi, Shun; Suzuki, Tatsuya; Hayakawa, Soichiro; Inagaki, Shinkichi
This paper proposes a Probability weighted ARX (PrARX) model in which multiple ARX models are combined through probabilistic weighting functions. A ‘softmax’ function is introduced as the probabilistic weighting function. The parameter estimation problem for the proposed model is then formulated as a single optimization problem. Furthermore, the identified PrARX model can easily be transformed to the corresponding PWARX model with complete partitions between regions. Finally, the proposed model is applied to the modeling of driving behavior, and its usefulness is verified and discussed.
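The PrARX prediction can be sketched as a softmax-weighted sum of local ARX outputs; the regressor layout and parameter shapes below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax weights."""
    e = np.exp(z - z.max())
    return e / e.sum()

def prarx_predict(phi, theta_list, eta_list):
    """Probability-weighted ARX output: y = sum_i w_i(phi) * (theta_i @ phi),
    with softmax weights w(phi) driven by the same regressor phi."""
    w = softmax(np.array([eta @ phi for eta in eta_list]))
    return sum(wi * (th @ phi) for wi, th in zip(w, theta_list))
```

As the magnitudes of the weighting parameters eta grow, the soft partitions sharpen toward hard region boundaries, which is the intuition behind the transformation to a PWARX model with complete partitions.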
Diagnostic models of the pre-test probability of stable coronary artery disease: A systematic review
He, Ting; Liu, Xing; Xu, Nana; Li, Ying; Wu, Qiaoyu; Liu, Meilin; Yuan, Hong
2017-01-01
A comprehensive search of PubMed and Embase was performed in January 2015 to examine the available literature on validated diagnostic models of the pre-test probability of stable coronary artery disease and to describe the characteristics of the models. Studies that were designed to develop and validate diagnostic models of pre-test probability for stable coronary artery disease were included. Data regarding baseline patient characteristics, procedural characteristics, modeling methods, metrics of model performance, risk of bias, and clinical usefulness were extracted. Ten studies involving the development of 12 models and two studies focusing on external validation were identified. Seven models were validated internally, and seven models were validated externally. Discrimination varied between studies that were validated internally (C statistic 0.66-0.81) and externally (0.49-0.87). Only one study presented reclassification indices. The majority of better performing models included sex, age, symptoms, diabetes, smoking, and hyperlipidemia as variables. Only two diagnostic models evaluated the effects on clinical decision making processes or patient outcomes. Most diagnostic models of the pre-test probability of stable coronary artery disease have had modest success, and very few present data regarding the effects of these models on clinical decision making processes or patient outcomes. PMID:28355366
Translating Climate-Change Probabilities into Impact Risks - Overcoming the Impact-Model Bottleneck
NASA Astrophysics Data System (ADS)
Dettinger, M.
2008-12-01
Projections of climate change in response to increasing greenhouse-gas concentrations are uncertain and likely to remain so for the foreseeable future. As more projections become available for analysts, we are increasingly able to characterize the probabilities of obtaining various levels of climate change in current projections. However, the probabilities of most interest in impact assessments are not the probabilities of climate changes, but rather the probabilities (or risks) of various levels and kinds of climate-change impact. These risks can be difficult to estimate even if the climate-change probabilities are well known. The difficulty arises because impact models and assessments are frequently computationally demanding or require time-consuming, hands-on expert analysis, so that severe limits are placed on the number of climate-change scenarios from which detailed impacts can be assessed. Estimating the risks of various impacts from the few resulting examples is generally difficult. However, real-world examples from the water-resources sector will be used to show that, by applying several different "derived distributions" approaches for estimating the risks of various impacts from known climate-change probabilities to just a few impact-model simulations, risks can be estimated along with indications of how accurate the impact-risk estimates are. The prospects for a priori selection of a few climate-change scenarios (from a larger ensemble of available projections) that will allow the best, most economical estimates of impact risks will be explored with a simple but real-world example.
Transition probabilities of health states for workers in Malaysia using a Markov chain model
NASA Astrophysics Data System (ADS)
Samsuddin, Shamshimah; Ismail, Noriszura
2017-04-01
The aim of our study is to estimate the transition probabilities of health states for workers in Malaysia who contribute to the Employment Injury Scheme under the Social Security Organization Malaysia using the Markov chain model. Our study uses four states of health (active, temporary disability, permanent disability and death) based on the data collected from the longitudinal studies of workers in Malaysia for 5 years. The transition probabilities vary by health state, age and gender. The results show that male employees are more likely than female employees to transition to any other health state. The transition probabilities can be used to predict the future health of workers as a function of current age, gender and health state.
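A minimal sketch of the count-based maximum-likelihood estimate behind such a Markov chain: each row of the transition matrix is estimated from observed year-to-year state changes. The states follow the abstract; the observations are invented for illustration.

```python
# Health states as in the abstract (illustrative labels).
STATES = ["active", "temporary", "permanent", "dead"]

def transition_matrix(transitions):
    """MLE of Markov transition probabilities from a list of observed
    (from_state, to_state) pairs: P[s][t] = count(s -> t) / count(s -> *).
    States never observed as a source get a zero row."""
    counts = {s: {t: 0 for t in STATES} for s in STATES}
    for s, t in transitions:
        counts[s][t] += 1
    probs = {}
    for s in STATES:
        total = sum(counts[s].values())
        probs[s] = {t: (counts[s][t] / total if total else 0.0)
                    for t in STATES}
    return probs
```

In practice the rows would be estimated separately by age group and gender, as the abstract describes.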
The probabilities of one- and multi-track events for modeling radiation-induced cell kill.
Schneider, Uwe; Vasi, Fabiano; Besserer, Jürgen
2017-08-01
In view of the clinical importance of hypofractionated radiotherapy, track models based on multi-hit events are currently being reinvestigated. These models are often criticized because it is believed that the probability of multi-track hits is negligible. In this work, the probabilities for one- and multi-track events are determined for different biological targets. The obtained probabilities can be used with nano-dosimetric cluster size distributions to obtain the parameters of track models. We quantitatively determined the probabilities for one- and multi-track events for 100, 500 and 1000 keV electrons, respectively. It is assumed that the single tracks are statistically independent and follow a Poisson distribution. Three different biological targets were investigated: (1) a DNA strand (2 nm scale); (2) two adjacent chromatin fibers (60 nm); and (3) fiber loops (300 nm). It was shown that the probabilities for one- and multi-track events increase with energy, size of the sensitive target structure, and dose. For a 2 × 2 × 2 nm³ target, one-track events are around 10,000 times more frequent than multi-track events. If the size of the sensitive structure is increased to 100-300 nm, the probabilities for one- and multi-track events are of the same order of magnitude. It was shown that target theories can play a role in describing radiation-induced cell death if the targets are of the size of two adjacent chromatin fibers or fiber loops. The obtained probabilities can be used together with the nano-dosimetric cluster size distributions to determine model parameters for target theories.
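Under the stated assumption of statistically independent, Poisson-distributed tracks, the one-track and multi-track probabilities per target follow directly; a small sketch, where the mean track count `lam` is an assumed input rather than a value from the paper:

```python
import math

def one_track(lam):
    """P(exactly one track) for Poisson mean track count lam."""
    return lam * math.exp(-lam)

def multi_track(lam):
    """P(two or more tracks) = 1 - P(0) - P(1)."""
    return 1.0 - math.exp(-lam) * (1.0 + lam)
```

For small `lam` (a tiny target at modest dose) the ratio `one_track(lam) / multi_track(lam)` behaves like `2 / lam`, so e.g. `lam = 2e-4` gives a ratio on the order of 10,000:1, consistent with the reported dominance of one-track events for the 2 nm target; for larger targets `lam` grows and the two probabilities approach the same order of magnitude.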
NASA Astrophysics Data System (ADS)
Pu, H. C.; Lin, C. H.
2016-05-01
To investigate the seismic behavior of crustal deformation, we deployed a dense seismic network at the Hsinchu area of northwestern Taiwan during the period between 2004 and 2006. Based on abundant local micro-earthquakes recorded at this seismic network, we have successfully determined 274 focal mechanisms among ∼1300 seismic events. It is very interesting to see that the dominant energy of both seismic strike-slip and normal faulting mechanisms repeatedly alternated with each other within two years. Also, the strike-slip and normal faulting earthquakes were largely accompanied with the surface slipping along N60°E and uplifting obtained from the continuous GPS data, individually. Those phenomena were probably resulted by the slow uplifts at the mid-crust beneath the northwestern Taiwan area. As the deep slow uplift was active below 10 km in depth along either the boundary fault or blind fault, the push of the uplifting material would simultaneously produce both of the normal faulting earthquakes in the shallow depths (0-10 km) and the slight surface uplifting. As the deep slow uplift was stop, instead, the strike-slip faulting earthquakes would be dominated as usual due to strongly horizontal plate convergence in the Taiwan. Since the normal faulting earthquakes repeatedly dominated in every 6 or 7 months between 2004 and 2006, it may conclude that slow slip events in the mid crust were frequent to release accumulated tectonic stress in the Hsinchu area.
The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation
NASA Astrophysics Data System (ADS)
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2017-07-01
Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with and therefore dependent on observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF) where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges as well as the reliability and the plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.
General properties of different models used to predict normal tissue complications due to radiation
Kuperman, V. Y.
2008-11-15
In the current study the author analyzes general properties of three different models used to predict normal tissue complications due to radiation: (1) Surviving fraction of normal cells in the framework of the linear quadratic (LQ) equation for cell kill, (2) the Lyman-Kutcher-Burman (LKB) model for normal tissue complication probability (NTCP), and (3) generalized equivalent uniform dose (gEUD). For all considered cases the author assumes fixed average dose to an organ of interest. The author's goal is to establish whether maximizing dose uniformity in the irradiated normal tissues is radiobiologically beneficial. Assuming that NTCP increases with increasing overall cell kill, it is shown that NTCP in the LQ model is maximized for uniform dose. Conversely, NTCP in the LKB and gEUD models is always smaller for a uniform dose to a normal organ than that for a spatially varying dose if parameter n in these models is small (i.e., n<1). The derived conflicting properties of the considered models indicate the need for more studies before these models can be utilized clinically for plan evaluation and/or optimization of dose distributions. It is suggested that partial-volume irradiation can be used to establish the validity of the considered models.
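For reference, a minimal sketch of the gEUD quantity discussed above, gEUD = (sum_i v_i * D_i**a)**(1/a) with a = 1/n computed from a dose-volume histogram; for small n (large a) it weights toward the maximum dose, which is why a spatially varying dose yields a larger gEUD than a uniform dose with the same mean. The dose and volume values below are illustrative.

```python
def geud(doses, volumes, n):
    """Generalized equivalent uniform dose for a dose-volume histogram.
    doses: bin doses (Gy); volumes: fractional volumes summing to 1;
    n: the volume-effect parameter (a = 1/n)."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)
```

With n = 0.1, a uniform 50 Gy plan gives gEUD = 50, while a 40/60 Gy split with the same 50 Gy mean gives a larger gEUD, matching the property derived in the abstract for n < 1.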
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2017-04-01
Climate change related to uncontrolled greenhouse gas emissions is expected to modify climate characteristics in a harmful way, increasing the frequency of many precipitation-triggered natural hazards, landslides included. In our study we analyse regional climate model (RCM) projections with the aim of assessing the potential future modifications of rainfall event characteristics linked to shallow landslide triggering, such as: event duration, total depth, and inter-arrival time. Factors of change of the mean and the variance of these rainfall-event characteristics are exploited to adjust a stochastic rainfall generator aimed at simulating precipitation series likely to occur in the future. Monte Carlo simulations, in which the stochastic rainfall generator and a physically based hydromechanical model are coupled, are then carried out to estimate the probability of landslide triggering for future time horizons, and its changes with respect to current climate conditions. The proposed methodology is applied to the Peloritani region in Sicily, Italy, an area that in the past two decades has experienced several catastrophic shallow and rapidly moving landslide events. Different RCM simulations from the Coordinated regional Climate Downscaling Experiment (CORDEX) initiative are considered in the application, as well as two different emission scenarios, known as Representative Concentration Pathways: intermediate (RCP 4.5) and high-emissions (RCP 8.5). The estimated rainfall event characteristics modifications differ significantly both in magnitude and in direction (increase/decrease) from one model to another. RCMs are concordant only in predicting an increase of the mean of inter-event dry intervals. The variance of rainfall depth exhibits maximum changes (increase or decrease depending on the RCM), and it is the characteristic to which landslide triggering seems to be more sensitive. Some RCMs indicate significant variations of landslide probability due to climate
NASA Technical Reports Server (NTRS)
Deiwert, G. S.; Yoshikawa, K. K.
1975-01-01
A semiclassical model proposed by Pearson and Hansen (1974) for computing collision-induced transition probabilities in diatomic molecules is tested by the direct-simulation Monte Carlo method. Specifically, this model is described by point centers of repulsion for collision dynamics, and the resulting classical trajectories are used in conjunction with the Schroedinger equation for a rigid-rotator harmonic oscillator to compute the rotational energy transition probabilities necessary to evaluate the rotation-translation exchange phenomena. It is assumed that a single, average energy spacing exists between the initial state and possible final states for a given collision.
A stochastic model for tumour control probability that accounts for repair from sublethal damage.
Ponce Bobadilla, Ana Victoria; Maini, Philip K; Byrne, Helen
2017-02-26
The tumour control probability (TCP) is the probability that a treatment regimen of radiation therapy (RT) eradicates all tumour cells in a given tissue. To decrease the toxic effects on healthy cells, RT is usually delivered over a period of weeks in a series of fractions. This allows tumour cells to repair sublethal damage (RSD) caused by radiation. In this article, we introduce a stochastic model for tumour response to radiotherapy which accounts for the effects of RSD. The tumour is subdivided into two cell types: 'affected' cells which have been damaged by RT and 'unaffected' cells which have not. The model is formulated as a birth-death process for which we can derive an explicit formula for the TCP. We apply our model to prostate cancer, and find that the radiosensitivity parameters and the probability of sublethal damage during radiation are the parameters to which the TCP predictions are most sensitive. We compare our TCP predictions to those given by Zaider and Minerbo's one-class model (Zaider & Minerbo, 2000) and Dawson and Hillen's two-class model (Dawson & Hillen, 2006) and find that for low doses of radiation, our model predicts a lower TCP. Finally, we find that when the probability of sublethal damage during radiation is large, the mean field assumption overestimates the TCP. © The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
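For contrast with the birth-death formulation, the classic Poisson/linear-quadratic TCP baseline (the kind of one-class model the authors compare against) can be sketched as follows; the radiosensitivity values are illustrative prostate-like numbers, not the paper's fitted parameters:

```python
import math

def surviving_fraction(dose_per_fraction, n_fractions, alpha=0.15, beta=0.05):
    """Linear-quadratic surviving fraction after n_fractions fractions of
    dose d each: exp(-n * (alpha*d + beta*d^2)). Units: Gy, Gy^-1, Gy^-2."""
    d, n = dose_per_fraction, n_fractions
    return math.exp(-n * (alpha * d + beta * d * d))

def poisson_tcp(n_cells, dose_per_fraction, n_fractions, alpha=0.15, beta=0.05):
    """TCP = P(zero surviving clonogens), assuming the number of survivors
    is Poisson with mean n_cells * surviving_fraction."""
    expected_survivors = n_cells * surviving_fraction(
        dose_per_fraction, n_fractions, alpha, beta)
    return math.exp(-expected_survivors)
```

The stochastic models discussed in the abstract refine this picture by tracking sublethal damage and repair explicitly, which is why they can predict lower TCP at low doses than the simple Poisson formula.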
Normal seasonal variations for atmospheric radon concentration: a sinusoidal model.
Hayashi, Koseki; Yasuoka, Yumi; Nagahama, Hiroyuki; Muto, Jun; Ishikawa, Tetsuo; Omori, Yasutaka; Suzuki, Toshiyuki; Homma, Yoshimi; Mukai, Takahiro
2015-01-01
Anomalous radon readings in air have been reported before earthquake activity. However, careful measurements of atmospheric radon concentrations during a normal period are required to identify anomalous variations in a precursor period. In this study, we obtained radon concentration data for 5 years (2003-2007) that can be considered a normal period and compared them with data from the precursory period of 2008 until March 2011, when the 2011 Tohoku-Oki Earthquake occurred. Then, we established a model for seasonal variation by fitting a sinusoidal model to the radon concentration data during the normal period, considering that the seasonal variation was affected by atmospheric turbulence. By determining the amplitude in the sinusoidal model, the normal variation of the radon concentration can be estimated. Thus, the results of this method can be applied to identify anomalous radon variations before an earthquake.
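The sinusoidal fit described above reduces to linear least squares once written with sine and cosine basis terms, C(t) = a*sin(wt) + b*cos(wt) + c with w = 2*pi/365; the amplitude sqrt(a^2 + b^2) then bounds the normal seasonal swing. A sketch on synthetic daily data standing in for the 2003-2007 measurements:

```python
import numpy as np

def fit_seasonal(t_days, conc):
    """Least-squares fit of a*sin(wt) + b*cos(wt) + c with a one-year
    period; returns (a, b, c, amplitude)."""
    w = 2.0 * np.pi / 365.0
    X = np.column_stack([np.sin(w * t_days), np.cos(w * t_days),
                         np.ones_like(t_days)])
    (a, b, c), *_ = np.linalg.lstsq(X, conc, rcond=None)
    return a, b, c, np.hypot(a, b)  # amplitude of the seasonal variation
```

Radon readings outside the band c ± amplitude (plus measurement noise) would then be flagged as anomalous relative to the normal-period model.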
Bouzas-Mosquera, Alberto; Peteiro, Jesús; Broullón, Francisco J; Constanso, Ignacio P; Rodríguez-Garrido, Jorge L; Martínez, Dolores; Yáñez, Juan C; Bescos, Hildegart; Álvarez-García, Nemesio; Vázquez-Rodríguez, José Manuel
2016-03-01
Patients with suspected acute coronary syndromes and negative cardiac troponin (cTn) levels are deemed at low risk. Our aim was to assess the effect of cTn levels on the frequency of inducible myocardial ischemia and subsequent coronary events in patients with acute chest pain and cTn levels within the normal range. We evaluated 4474 patients with suspected acute coronary syndromes, nondiagnostic electrocardiograms and serial cTnI levels below the diagnostic threshold for myocardial necrosis using a conventional or a sensitive cTnI assay. The end points were the probability of inducible myocardial ischemia and coronary events (i.e., coronary death, myocardial infarction or coronary revascularization within 3 months). The probability of inducible myocardial ischemia was significantly higher in patients with detectable peak cTnI levels (25%) than in those with undetectable concentrations (14.6%, p<0.001). These results were consistent regardless of the type of cTnI assay, the type of stress testing modality, or the timing for cTnI measurement, and remained significant after multivariate adjustment (odds ratio [OR] 1.47, 95% confidence interval [CI] 1.21-1.79, p<0.001). The rate of coronary events at 3 months was also significantly higher in patients with detectable cTnI levels (adjusted OR 2.08, 95% CI 1.64-2.64, p<0.001). Higher cTnI levels within the normal range were associated with a significantly increased probability of inducible myocardial ischemia and coronary events in patients with suspected acute coronary syndromes and seemingly negative cTnI. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Coupled escape probability for an asymmetric spherical case: Modeling optically thick comets
Gersch, Alan M.; A'Hearn, Michael F.
2014-05-20
We have adapted Coupled Escape Probability, a new exact method of solving radiative transfer problems, for use in asymmetrical spherical situations. Our model is intended specifically for use in modeling optically thick cometary comae, although not limited to such use. This method enables the accurate modeling of comets' spectra even in the potentially optically thick regions nearest the nucleus, such as those seen in Deep Impact observations of 9P/Tempel 1 and EPOXI observations of 103P/Hartley 2.
NASA Technical Reports Server (NTRS)
Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry
2009-01-01
In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
Approximating Multivariate Normal Orthant Probabilities
1990-06-01
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Glosup, J.G.; Axelrod M.C.
1994-11-15
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
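A miniature version of the AIC-with-EM comparison (a single Gaussian versus a two-component Gaussian mixture fitted by EM) can be sketched as follows; this mirrors the selection idea only, not Middleton's Class A model, and the EM loop is a deliberately crude fixed-iteration version:

```python
import math

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def fit_single(data):
    """MLE Gaussian fit; returns (log-likelihood, number of parameters)."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return sum(math.log(gauss_pdf(x, mu, var)) for x in data), 2

def fit_mixture_em(data, iters=50):
    """Crude EM for a two-component Gaussian mixture; returns (ll, k)."""
    n = len(data)
    m = sum(data) / n
    v = sum((x - m) ** 2 for x in data) / n
    mu1, mu2, var1, var2, w = min(data), max(data), v, v, 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        resp = []
        for x in data:
            p1 = w * gauss_pdf(x, mu1, var1)
            p2 = (1.0 - w) * gauss_pdf(x, mu2, var2)
            resp.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted parameter updates
        n1 = sum(resp)
        n2 = n - n1
        mu1 = sum(r * x for r, x in zip(resp, data)) / n1
        mu2 = sum((1.0 - r) * x for r, x in zip(resp, data)) / n2
        var1 = sum(r * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1
        var2 = sum((1.0 - r) * (x - mu2) ** 2 for r, x in zip(resp, data)) / n2
        w = n1 / n
    ll = sum(math.log(w * gauss_pdf(x, mu1, var1)
                      + (1.0 - w) * gauss_pdf(x, mu2, var2)) for x in data)
    return ll, 5

def aic(ll, k):
    return 2.0 * k - 2.0 * ll
```

On clearly bimodal data the mixture's higher log-likelihood outweighs its extra parameters, so its AIC is smaller and the mixture is selected.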
Application of damping mechanism model and stacking fault probability in Fe-Mn alloy
Huang, S.K.; Wen, Y.H.; Li, N.; Teng, J.; Ding, S.; Xu, Y.G.
2008-06-15
In this paper, the damping mechanism model of Fe-Mn alloy was analyzed using dislocation theory. Moreover, as an important parameter in Fe-Mn based alloys, the effect of stacking fault probability on the damping capacity of Fe-19.35Mn alloy after deep-cooling or tensile deformation was also studied. The damping capacity was measured using a reversal torsion pendulum. The stacking fault probability of γ-austenite and ε-martensite was determined by means of X-ray diffraction (XRD) profile analysis. The microstructure was observed using a scanning electron microscope (SEM). The results indicated that with the strain amplitude increasing above a critical value, the damping capacity of Fe-19.35Mn alloy increased rapidly, which could be explained using the breakaway model of Shockley partial dislocations. Deep-cooling and suitable tensile deformation could improve the damping capacity owing to the increase in stacking fault probability of Fe-19.35Mn alloy.
A model selection algorithm for a posteriori probability estimation with neural networks.
Arribas, Juan Ignacio; Cid-Sueiro, Jesús
2005-07-01
This paper proposes a novel algorithm to jointly determine the structure and the parameters of a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, so called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP) whose outputs can be understood as probabilities although results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes.
NASA Astrophysics Data System (ADS)
Espinoza, G. E.; Arctur, D. K.; Maidment, D. R.; Teng, W. L.
2015-12-01
Anticipating extreme events, whether potential for flooding or drought, becomes more urgent every year, with increased variability in weather and climate. Hydrologic processes are inherently spatiotemporal. Extreme conditions can be identified at a certain period of time in a specific geographic region. These extreme conditions occur when the values of a hydrologic variable are record low or high, or they approach those records. The calculation of the historic probability distributions is essential to understanding when values exceed the thresholds and become extreme. A dense data model in time and space must be used to properly estimate the historic distributions. The purpose of this research is to model the time-dependent probability distributions of hydrologic variables at a national scale. These historic probability distributions are modeled daily, using 35 years of data from the North American Land Data Assimilation System (NLDAS) Noah model, which is a land-surface model with a 1/8 degree grid and hourly values from 1979 to the present. Five hydrologic variables are selected: soil moisture, precipitation, runoff, evapotranspiration, and temperature. The probability distributions are used to compare with the latest results from NLDAS and identify areas where extreme hydrologic conditions are present. The identification of extreme values in hydrologic variables and their inter-correlation improve the assessment and characterization of natural disasters such as floods or droughts. This information is presented through a dynamic web application that shows the latest results from NLDAS and any anomalies.
Results from probability-based, simplified, off-shore Louisiana CSEM hydrocarbon reservoir modeling
NASA Astrophysics Data System (ADS)
Stalnaker, J. L.; Tinley, M.; Gueho, B.
2009-12-01
Perhaps the biggest impediment to the commercial application of controlled-source electromagnetic (CSEM) geophysics to marine hydrocarbon exploration is the inefficiency of modeling and data inversion. If an understanding of the typical (in a statistical sense) geometrical and electrical nature of a reservoir can be attained, then it is possible to derive therefrom a simplified yet accurate model of the electromagnetic interactions that produce a measured marine CSEM signal, leading ultimately to efficient modeling and inversion. We have compiled geometric and resistivity measurements from roughly 100 known, producing off-shore Louisiana Gulf of Mexico reservoirs. Recognizing that most reservoirs could be recreated roughly from a sectioned hemi-ellipsoid, we devised a unified, compact reservoir geometry description. Each reservoir was initially fit to the ellipsoid by eye, though we plan in the future to perform a more rigorous least-squares fit. We created, using kernel density estimation, initial probabilistic descriptions of reservoir parameter distributions, with the understanding that additional information would not fundamentally alter our results, but rather increase accuracy. From the probabilistic description, we designed an approximate model consisting of orthogonally oriented current segments distributed across the ellipsoid--enough to define the shape, yet few enough to be resolved during inversion. The moment and length of the currents are mapped to geometry and resistivity of the ellipsoid. The probability density functions (pdfs) derived from reservoir statistics serve as a workbench. We first use the pdfs in a Monte Carlo simulation designed to assess the detectability of off-shore Louisiana reservoirs using magnitude versus offset (MVO) anomalies. From the pdfs, many reservoir instances are generated (using rejection sampling) and each normalized MVO response is calculated. The response strength is summarized by numerically computing MVO power, and that
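The rejection-sampling step mentioned above can be sketched with a toy one-dimensional pdf standing in for the kernel-density estimates of reservoir parameters: propose uniformly over the parameter range, then accept with probability proportional to the pdf value.

```python
import random

def rejection_sample(pdf, lo, hi, pdf_max, rng, n):
    """Draw n samples from pdf on [lo, hi] by rejection sampling.
    pdf_max must be an upper bound on pdf over the interval."""
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        # Accept x with probability pdf(x) / pdf_max
        if rng.uniform(0.0, pdf_max) < pdf(x):
            out.append(x)
    return out
```

Each accepted draw plays the role of one synthetic reservoir instance whose forward response (here, the MVO anomaly) would then be computed in the Monte Carlo loop.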
NASA Astrophysics Data System (ADS)
Tian, Chuan; Sun, Di-Hua
2010-12-01
Considering the effects that the probability of traffic interruption and the friction between two lanes have on the car-following behaviour, this paper establishes a new two-lane microscopic car-following model. Based on this microscopic model, a new macroscopic model was deduced by the relevance relation of microscopic and macroscopic scale parameters for the two-lane traffic flow. Terms related to lane change are added into the continuity equations and velocity dynamic equations to investigate the lane change rate. Numerical results verify that the proposed model can be efficiently used to reflect the effect of the probability of traffic interruption on the shock, rarefaction wave and lane change behaviour on two-lane freeways. The model has also been applied in reproducing some complex traffic phenomena caused by traffic accident interruption.
Modelling detection probabilities to evaluate management and control tools for an invasive species
Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.
2010-01-01
For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice that of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By
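The practical force of a low per-occasion detection probability is easy to quantify: with per-survey detection p, an individual is detected at least once in n surveys with probability 1 - (1 - p)**n. A sketch using the abstract's per-occasion estimates of 0.07 and 0.18:

```python
import math

def cumulative_detection(p, n):
    """P(detected at least once in n independent surveys)."""
    return 1.0 - (1.0 - p) ** n

def surveys_needed(p, target=0.95):
    """Smallest n such that cumulative detection reaches the target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))
```

Moving from the overall rate (0.07) to the optimal-conditions rate (0.18) cuts the number of surveys needed for 95% cumulative detection from 42 to 16, illustrating why detection probability is the rate-limiting step for eradication planning.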
Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.
2009-01-01
Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
Zhu, Lin; Dai, Zhenxue; Gong, Huili; Gable, Carl; Teatini, Pietro
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J
2015-12-01
The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, seasons, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by seasons, treatment type, precipitations and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs).
A likelihood reformulation method in non-normal random effects models.
Liu, Lei; Yu, Zhangsheng
2008-07-20
In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing a standard normal density, we reformulate the likelihood conditional on the non-normal random effects to that conditional on the normal random effects. Gaussian quadrature technique, conveniently implemented in SAS Proc NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding similar estimates to the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from Clayton copula. Simulations and applications are presented to illustrate our method.
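The reformulation trick described here can be sketched numerically: a marginal likelihood with a non-normal random effect g(b) is rewritten as an integral against the standard normal density φ(b), so Gauss-Hermite quadrature applies. The conditional model and the mixture density below are illustrative assumptions, not the paper's examples:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite

# Reformulation:  ∫ f(y|b) g(b) db  =  ∫ [f(y|b) g(b) / phi(b)] phi(b) db

def g(b):
    """Non-normal random-effects density: a two-component normal mixture (assumed)."""
    return 0.6 * norm.pdf(b, -1.0, 0.7) + 0.4 * norm.pdf(b, 1.5, 0.5)

def f_y_given_b(y, b):
    """Simple conditional model (assumed): y | b ~ N(b, 1)."""
    return norm.pdf(y, loc=b, scale=1.0)

y_obs = 0.3
nodes, weights = hermegauss(40)       # quadrature for ∫ h(b) exp(-b²/2) db
phi_const = 1.0 / np.sqrt(2.0 * np.pi)

# Gauss-Hermite evaluation of the reformulated integral.
integrand = f_y_given_b(y_obs, nodes) * g(nodes) / norm.pdf(nodes)
marginal = phi_const * np.sum(weights * integrand)

# Check against brute-force numerical integration of the original integral.
brute, _ = quad(lambda b: f_y_given_b(y_obs, b) * g(b), -10.0, 10.0)
print(marginal, brute)
```

The two values agree closely, which is the point: dividing and multiplying by φ(b) turns a non-normal integral into one that standard normal quadrature (as in SAS Proc NLMIXED) handles directly.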
Blind Students' Learning of Probability through the Use of a Tactile Model
ERIC Educational Resources Information Center
Vita, Aida Carvalho; Kataoka, Verônica Yumi
2014-01-01
The objective of this paper is to discuss how blind students learn basic concepts of probability using the tactile model proposed by Vita (2012). Among the activities in the teaching sequence "Jefferson's Random Walk", students built a tree diagram (using plastic trays, foam cards, and toys) and pictograms in 3D…
ERIC Educational Resources Information Center
Calvert, Carol Elaine
2014-01-01
This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…
Modelling psychiatric measures using Skew-Normal distributions
Counsell, N.; Cortina-Borja, M.; Lehtonen, A.; Stein, A.
2011-01-01
Data from psychiatric research frequently exhibit departures from Normality. Methods which utilise the data optimally to model the distribution directly are available. We highlight the issue of modelling skewness, resulting from screening instruments where the majority of respondents are healthy individuals and few participants have a value reflecting particular disorders. PMID:21036551
Carr, Thomas W
2017-02-01
In an SIRS compartment model for a disease we consider the effect of different probability distributions for remaining immune. We show that to first approximation the first three moments of the corresponding probability densities are sufficient to describe well the oscillatory solutions corresponding to recurrent epidemics. Specifically, increasing the fraction who lose immunity, increasing the mean immunity time, and decreasing the heterogeneity of the population all favor the onset of epidemics and increase their severity. We consider six different distributions, some symmetric about their mean and some asymmetric, and show that when their parameters are tuned to give equivalent moments, they all exhibit equivalent dynamical behavior.
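The simplest special case of the distributed-immunity models discussed here is the classical SIRS system with exponentially distributed immunity, which can be simulated directly. Parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.5, 0.1     # transmission and recovery rates (assumed)
mean_immunity = 60.0       # mean time immune before returning to S (assumed)
delta = 1.0 / mean_immunity

def sirs(t, y):
    """SIRS right-hand side: S -> I -> R -> S, fractions of the population."""
    S, I, R = y
    return [-beta * S * I + delta * R,
            beta * S * I - gamma * I,
            gamma * I - delta * R]

sol = solve_ivp(sirs, (0.0, 400.0), [0.99, 0.01, 0.0], rtol=1e-8, atol=1e-10)
S, I, R = sol.y[:, -1]
print(S, I, R)  # damped oscillations toward the endemic equilibrium S* = gamma/beta
```

Increasing `mean_immunity` lengthens the inter-epidemic period, in line with the qualitative effects the abstract describes; non-exponential immunity distributions would replace the single `delta * R` term with a delay or stage structure.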
Dong, Jing; Mahmassani, Hani S.
2011-01-01
This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model, by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdown at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
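The two ingredients named in the abstract, a breakdown probability that grows with flow rate and a hazard-based duration, can be sketched as follows. The logistic form and all coefficients are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)

def breakdown_probability(flow_rate, a=-12.0, b=0.006):
    """Logistic probability that breakdown occurs at a given flow rate [veh/h].
    The coefficients a, b are assumed, not calibrated."""
    return 1.0 / (1.0 + np.exp(-(a + b * flow_rate)))

def sample_duration(shape=1.5, scale=10.0, size=1):
    """Breakdown duration [min] drawn from a Weibull hazard model (assumed)."""
    return scale * rng.weibull(shape, size)

flows = np.array([1000.0, 1800.0, 2200.0])
probs = breakdown_probability(flows)
print(probs)                        # increases monotonically with flow rate
print(sample_duration(size=5))     # random durations for sampled breakdowns
```

In a mesoscopic simulation these two draws would be made each time step: a Bernoulli trial with `breakdown_probability(flow)` triggers a breakdown, and `sample_duration()` sets how long reduced capacity persists.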
Evans, Jason; Sullivan, Jack
2011-01-01
A priori selection of models for use in phylogeny estimation from molecular sequence data is increasingly important as the number and complexity of available models increases. The Bayesian information criterion (BIC) and the derivative decision-theoretic (DT) approaches rely on a conservative approximation to estimate the posterior probability of a given model. Here, we extended the DT method by using reversible jump Markov chain Monte Carlo approaches to directly estimate model probabilities for an extended candidate pool of all 406 special cases of the general time reversible + Γ family. We analyzed 250 diverse data sets in order to evaluate the effectiveness of the BIC approximation for model selection under the BIC and DT approaches. Model choice under DT differed between the BIC approximation and direct estimation methods for 45% of the data sets (113/250), and differing model choice resulted in significantly different sets of trees in the posterior distributions for 26% of the data sets (64/250). The model with the lowest BIC score differed from the model with the highest posterior probability in 30% of the data sets (76/250). When the data indicate a clear model preference, the BIC approximation works well enough to result in the same model selection as with directly estimated model probabilities, but a substantial proportion of biological data sets lack this characteristic, which leads to selection of underparametrized models.
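The BIC score at the center of this comparison is simple to compute. Here is a minimal, non-phylogenetic sketch, scoring nested Gaussian regression models on simulated data (the data-generating process is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data whose true mean structure is linear.
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)

def bic_polyfit(x, y, degree):
    """BIC = k*ln(n) - 2*lnL for a polynomial regression with Gaussian errors."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid ** 2)                       # MLE of error variance
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    k = degree + 2                                     # coefficients + variance
    return k * np.log(n) - 2.0 * loglik

scores = {d: bic_polyfit(x, y, d) for d in (1, 2, 5)}
print(scores)  # the linear model should receive the lowest (best) BIC
```

The abstract's point is that ranking models by this score is only an approximation to ranking them by posterior probability, and the two can disagree on real data.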
Probability distributions for parameters of the Munson-Dawson salt creep model
Fossum, A.F.; Pfeifle, T.W.; Mellegard, K.D.
1993-12-31
Stress-related probability distribution functions are determined for the random variable material model parameters of the Munson-Dawson multi-mechanism deformation creep model for salt. These functions are obtained indirectly from experimental creep data for clean salt. The parameter distribution functions will form the basis for numerical calculations to generate an appropriate distribution function for room closure. Also included is a table that gives the values of the parameters for individual specimens of clean salt under different stresses.
Austin, Samuel H.; Nelms, David L.
2017-01-01
Climate change raises concern that risks of hydrological drought may be increasing. We estimate hydrological drought probabilities for rivers and streams in the United States (U.S.) using maximum likelihood logistic regression (MLLR). Streamflow data from winter months are used to estimate the chance of hydrological drought during summer months. Daily streamflow data collected from 9,144 stream gages from January 1, 1884 through January 9, 2014 provide hydrological drought streamflow probabilities for July, August, and September as functions of streamflows during October, November, December, January, and February, estimating outcomes 5-11 months ahead of their occurrence. Few drought prediction methods exploit temporal links among streamflows. We find MLLR modeling of drought streamflow probabilities exploits the explanatory power of temporally linked water flows. MLLR models with strong correct classification rates were produced for streams throughout the U.S. One ad hoc test of correct prediction rates of September 2013 hydrological droughts exceeded 90% correct classification. Some of the best-performing models coincide with areas of high concern including the West, the Midwest, Texas, the Southeast, and the Mid-Atlantic. Using hydrological drought MLLR probability estimates in a water management context can inform understanding of drought streamflow conditions, provide warning of future drought conditions, and aid water management decision making.
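The MLLR idea, a maximum likelihood logistic regression from winter streamflow to a summer drought indicator, can be sketched on synthetic data. The data-generating process below (low winter flow raises drought probability) and the fitting-by-Newton details are assumptions for illustration, not the study's calibrated models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: winter flow is lognormal; drought risk falls with flow.
n = 500
winter_flow = rng.lognormal(mean=3.0, sigma=0.6, size=n)
p_true = 1.0 / (1.0 + np.exp(0.25 * (winter_flow - 20.0)))
drought = rng.binomial(1, p_true)

# Maximum likelihood fit by Newton-Raphson (IRLS) on [intercept, log flow].
X = np.column_stack([np.ones(n), np.log(winter_flow)])
w = np.zeros(2)
for _ in range(15):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (drought - p)
    H = (X * (p * (1.0 - p))[:, None]).T @ X + 1e-8 * np.eye(2)  # tiny ridge for stability
    w += np.linalg.solve(H, grad)

slope = w[1]
p_hat = 1.0 / (1.0 + np.exp(-X @ w))
accuracy = np.mean((p_hat > 0.5) == drought)
print(slope, accuracy)  # slope is negative: lower winter flow, higher drought probability
```

The fitted probabilities `p_hat` play the role of the study's drought streamflow probabilities, and the correct classification rate corresponds to the performance summaries the abstract reports.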
Emptiness and depletion formation probability in spin models with inverse square interaction
NASA Astrophysics Data System (ADS)
Franchini, Fabio; Kulkarni, Manas
2010-02-01
We calculate the Emptiness Formation Probability (EFP) in the spin-Calogero Model (sCM) and Haldane-Shastry Model (HSM) using their hydrodynamic description. The EFP is the probability that a region of space is completely void of particles in the ground state of a quantum many body system. We calculate this probability in an instanton approach, by considering the more general problem of an arbitrary depletion of particles (DFP). In the limit of a large depletion region the probability is dominated by a classical configuration in imaginary time that satisfies a set of boundary conditions, and the action calculated on such a solution gives the EFP/DFP with exponential accuracy. We show that the calculation for sCM can be elegantly performed by representing the gradientless hydrodynamics of spin particles as a sum of two spin-less Calogero collective field theories in auxiliary variables. Interestingly, the result we find for the EFP can be cast in a form reminiscent of spin-charge separation, which should be violated for a non-linear effect such as this. We also highlight the connections between sCM, HSM and the λ=2 spin-less Calogero model from an EFP/DFP perspective.
An energy-dependent numerical model for the condensation probability, γj
Kerby, Leslie Marie
2016-12-09
The “condensation” probability, γj, is an important variable in the preequilibrium stage of nuclear spallation reactions. It represents the probability that pj excited nucleons (excitons) will “condense” to form complex particle type j in the excited residual nucleus. In addition, it has a significant impact on the emission width, or probability of emitting fragment type j from the residual nucleus. Previous formulations for γj were energy-independent and valid for fragments up to 4He only. This paper explores the formulation of a new model for γj, one which is energy-dependent and valid for up to 28Mg, and which provides improved fits compared to experimental fragment spectra.
An energy-dependent numerical model for the condensation probability, γj
NASA Astrophysics Data System (ADS)
Kerby, Leslie M.
2017-04-01
The "condensation" probability, γj, is an important variable in the preequilibrium stage of nuclear spallation reactions. It represents the probability that pj excited nucleons (excitons) will "condense" to form complex particle type j in the excited residual nucleus. It has a significant impact on the emission width, or probability of emitting fragment type j from the residual nucleus. Previous formulations for γj were energy-independent and valid for fragments up to 4He only. This paper explores the formulation of a new model for γj, one which is energy-dependent and valid for up to 28Mg, and which provides improved fits compared to experimental fragment spectra.
Normal Type-2 Fuzzy Geometric Curve Modeling: A Literature Review
NASA Astrophysics Data System (ADS)
Adesah, R. S.; Zakaria, R.
2017-09-01
Type-2 Fuzzy Set Theory (T2FST) has been widely used since 2001 for defining uncertain data points, rather than the traditional (type-1) fuzzy set theory. Recently, T2FST has been applied in many fields due to its ability to handle complex uncertain data. In this paper, a review of normal type-2 fuzzy geometric curve modeling methods and techniques is presented. In particular, there have been recent applications of Normal Type-2 Fuzzy Set Theory (NT2FST) in geometric modeling, where it has helped improve results over type-1 fuzzy sets. A concise and representative review of the processes in normal type-2 fuzzy geometric curve modeling, such as fuzzification, is presented.
Breather solutions for inhomogeneous FPU models using Birkhoff normal forms
NASA Astrophysics Data System (ADS)
Martínez-Farías, Francisco; Panayotaros, Panayotis
2016-11-01
We present results on spatially localized oscillations in some inhomogeneous nonlinear lattices of Fermi-Pasta-Ulam (FPU) type derived from phenomenological nonlinear elastic network models proposed to study localized protein vibrations. The main feature of the FPU lattices we consider is that the number of interacting neighbors varies from site to site, and we see numerically that this spatial inhomogeneity leads to spatially localized normal modes in the linearized problem. This property is seen in 1-D models, and in a 3-D model with a geometry obtained from protein data. The spectral analysis of these examples suggests some non-resonance assumptions that we use to show the existence of invariant subspaces of spatially localized solutions in quartic Birkhoff normal forms of the FPU systems. The invariant subspaces have an additional symmetry and this fact allows us to compute periodic orbits of the quartic normal form in a relatively simple way.
Assessment of uncertainty in chemical models by Bayesian probabilities: Why, when, how?
NASA Astrophysics Data System (ADS)
Sahlin, Ullrika
2015-07-01
A prediction of a chemical property or activity is subject to uncertainty. Which type of uncertainties to consider, whether to account for them in a differentiated manner and with which methods, depends on the practical context. In chemical modelling, general guidance of the assessment of uncertainty is hindered by the high variety in underlying modelling algorithms, high-dimensionality problems, the acknowledgement of both qualitative and quantitative dimensions of uncertainty, and the fact that statistics offers alternative principles for uncertainty quantification. Here, a view of the assessment of uncertainty in predictions is presented with the aim to overcome these issues. The assessment sets out to quantify uncertainty representing error in predictions and is based on probability modelling of errors where uncertainty is measured by Bayesian probabilities. Even though well motivated, the choice to use Bayesian probabilities is a challenge to statistics and chemical modelling. Fully Bayesian modelling, Bayesian meta-modelling and bootstrapping are discussed as possible approaches. Deciding how to assess uncertainty is an active choice, and should not be constrained by traditions or lack of validated and reliable ways of doing it.
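Of the approaches the abstract lists, bootstrapping is the easiest to sketch: refit a model on resampled data and summarize the spread of its prediction as an uncertainty band. The linear model and data below are illustrative assumptions, not a chemical QSAR model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic training data: a linear property-activity relationship plus noise.
n = 80
x = rng.uniform(0.0, 10.0, n)
y = 1.5 + 0.8 * x + rng.normal(0.0, 1.0, n)
x_new = 5.0   # point at which we want a prediction with uncertainty

preds = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                 # resample rows with replacement
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    preds.append(intercept + slope * x_new)
preds = np.array(preds)

lo, hi = np.percentile(preds, [2.5, 97.5])      # 95% uncertainty interval
print(lo, hi)  # should typically bracket the true mean response 1.5 + 0.8*5 = 5.5
```

The interval quantifies uncertainty in the model's prediction due to finite training data, one of the error components the abstract's Bayesian framework aims to capture.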
A Statistical Model for the Prediction of Wind-Speed Probabilities in the Atmospheric Surface Layer
NASA Astrophysics Data System (ADS)
Efthimiou, G. C.; Hertwig, D.; Andronopoulos, S.; Bartzis, J. G.; Coceal, O.
2016-11-01
Wind fields in the atmospheric surface layer (ASL) are highly three-dimensional and characterized by strong spatial and temporal variability. For various applications such as wind-comfort assessments and structural design, an understanding of potentially hazardous wind extremes is important. Statistical models are designed to facilitate conclusions about the occurrence probability of wind speeds based on the knowledge of low-order flow statistics. Being particularly interested in the upper tail regions we show that the statistical behaviour of near-surface wind speeds is adequately represented by the Beta distribution. By using the properties of the Beta probability density function in combination with a model for estimating extreme values based on readily available turbulence statistics, it is demonstrated that this novel modelling approach reliably predicts the upper margins of encountered wind speeds. The model's basic parameter is derived from three substantially different calibrating datasets of flow in the ASL originating from boundary-layer wind-tunnel measurements and direct numerical simulation. Evaluating the model based on independent field observations of near-surface wind speeds shows a high level of agreement between the statistically modelled horizontal wind speeds and measurements. The results show that, based on knowledge of only a few simple flow statistics (mean wind speed, wind-speed fluctuations and integral time scales), the occurrence probability of velocity magnitudes at arbitrary flow locations in the ASL can be estimated with a high degree of confidence.
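The core statistical step, representing normalized wind speeds by a Beta distribution and reading off an upper-tail occurrence probability, can be sketched as follows. The synthetic wind-speed sample and the choice of upper bound are assumptions for illustration, not the paper's calibration datasets:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic near-surface wind speeds [m/s] (assumed Weibull-like sample).
speeds = rng.weibull(2.0, 5000) * 6.0
u_max = speeds.max() * 1.05        # assumed upper bound for normalization
u = speeds / u_max                 # map speeds onto (0, 1)

# Fit a Beta pdf to the normalized speeds (loc/scale fixed to [0, 1]).
a, b, loc, scale = stats.beta.fit(u, floc=0.0, fscale=1.0)

# Occurrence probability of an upper-tail extreme, e.g. P(speed > 15 m/s).
p_extreme = stats.beta.sf(15.0 / u_max, a, b)
print(a, b, p_extreme)
```

In the paper's framework the Beta shape parameters would be tied to the low-order flow statistics (mean, fluctuations, integral time scales) rather than fitted to a full sample as done here.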
Xue, Xiaofang; Wu, Songfeng; Wang, Zhongsheng; Zhu, Yunping; He, Fuchu
2006-12-01
The calculation of protein probabilities is one of the most intractable problems in large-scale proteomic research. Current available estimating methods, for example, ProteinProphet, PROT_PROBE, Poisson model and two-peptide hits, employ different models trying to resolve this problem. Until now, no efficient method has been available for comparative evaluation of these methods on large-scale datasets. In order to evaluate these various methods, we developed a semi-random sampling model to simulate large-scale proteomic data. In this model, the identified peptides were sampled from the designed proteins and their cross-correlation scores were simulated according to the results from reverse database searching. The simulated result of 18 control proteins was consistent with the experimental one, demonstrating the efficiency of our model. According to the simulated results of human liver sample, ProteinProphet returned slightly higher probabilities and lower specificity than real cases. PROT_PROBE was a more efficient method with higher specificity. Predicted results from a Poisson model roughly coincide with real datasets, and the method of two-peptide hits seems solid but imprecise. However, the probabilities of identified proteins are strongly correlated with several experimental factors including spectra number, database size and protein abundance distribution.
A Statistical Model for the Prediction of Wind-Speed Probabilities in the Atmospheric Surface Layer
NASA Astrophysics Data System (ADS)
Efthimiou, G. C.; Hertwig, D.; Andronopoulos, S.; Bartzis, J. G.; Coceal, O.
2017-05-01
Wind fields in the atmospheric surface layer (ASL) are highly three-dimensional and characterized by strong spatial and temporal variability. For various applications such as wind-comfort assessments and structural design, an understanding of potentially hazardous wind extremes is important. Statistical models are designed to facilitate conclusions about the occurrence probability of wind speeds based on the knowledge of low-order flow statistics. Being particularly interested in the upper tail regions we show that the statistical behaviour of near-surface wind speeds is adequately represented by the Beta distribution. By using the properties of the Beta probability density function in combination with a model for estimating extreme values based on readily available turbulence statistics, it is demonstrated that this novel modelling approach reliably predicts the upper margins of encountered wind speeds. The model's basic parameter is derived from three substantially different calibrating datasets of flow in the ASL originating from boundary-layer wind-tunnel measurements and direct numerical simulation. Evaluating the model based on independent field observations of near-surface wind speeds shows a high level of agreement between the statistically modelled horizontal wind speeds and measurements. The results show that, based on knowledge of only a few simple flow statistics (mean wind speed, wind-speed fluctuations and integral time scales), the occurrence probability of velocity magnitudes at arbitrary flow locations in the ASL can be estimated with a high degree of confidence.
Rate coefficients, binding probabilities, and related quantities for area reactivity models.
Prüstel, Thorsten; Meier-Schellersheim, Martin
2014-11-21
We further develop the general theory of the area reactivity model that describes the diffusion-influenced reaction of an isolated receptor-ligand pair in terms of a generalized Feynman-Kac equation and that provides an alternative to the classical contact reactivity model. Analyzing both the irreversible and reversible reaction, we derive the equation of motion of the survival probability as well as several relationships between single pair quantities and the reactive flux at the encounter distance. Building on these relationships, we derive the equation of motion of the many-particle survival probability for irreversible pseudo-first-order reactions. Moreover, we show that the usual definition of the rate coefficient as the reactive flux is deficient in the area reactivity model. Numerical tests for our findings are provided through Brownian Dynamics simulations. We calculate exact and approximate expressions for the irreversible rate coefficient and show that this quantity behaves differently from its classical counterpart. Furthermore, we derive approximate expressions for the binding probability as well as the average lifetime of the bound state and discuss on- and off-rates in this context. Throughout our approach, we point out similarities and differences between the area reactivity model and its classical counterpart, the contact reactivity model. The presented analysis and obtained results provide a theoretical framework that will facilitate the comparison of experiment and model predictions.
Modeling and simulation of normal and hemiparetic gait
NASA Astrophysics Data System (ADS)
Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni
2015-09-01
Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper is focused on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Constraint forces from a Rayleigh dissipation function, which accounts for the effect of gait on the tissues, are included. Depending on the value of the factor present in the Rayleigh dissipation function, both normal and pathological gait can be simulated. First we apply it to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulation; anthropometric data for children can also be used, provided existing tables of anthropometric data are consulted. Validation of these models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
Schmidt, W.; Niemeyer, J. C.; Ciaraldi-Schoolmann, F.; Roepke, F. K.; Hillebrandt, W.
2010-02-20
The delayed detonation model describes the observational properties of the majority of Type Ia supernovae very well. Using numerical data from a three-dimensional deflagration model for Type Ia supernovae, the intermittency of the turbulent velocity field and its implications for the probability of a deflagration-to-detonation transition (DDT) are investigated. From structure functions of the turbulent velocity fluctuations, we determine intermittency parameters based on the log-normal and the log-Poisson models. The bulk of turbulence in the ash regions appears to be less intermittent than predicted by the standard log-normal model and the She-Leveque model. On the other hand, the analysis of the turbulent velocity fluctuations in the vicinity of the flame front by Roepke suggests a much higher probability of large velocity fluctuations on the grid scale in comparison to the log-normal intermittency model. Following Pan et al., we computed probability density functions for a DDT for the different distributions. The determination of the total number of regions at the flame surface in which DDTs can be triggered enables us to estimate the total number of events. Assuming that a DDT can occur in the stirred flame regime, as proposed by Woosley et al., the log-normal model would imply a delayed detonation between 0.7 and 0.8 s after the beginning of the deflagration phase for the multi-spot ignition scenario used in the simulation. However, the probability drops to virtually zero if a DDT is further constrained by the requirement that the turbulent velocity fluctuations reach about 500 km s⁻¹. Under this condition, delayed detonations are only possible if the distribution of the velocity fluctuations is not log-normal. From our calculations it follows that the distribution obtained by Roepke allows for multiple DDTs around 0.8 s after ignition at a transition density close to 1 × 10⁷ g cm⁻³.
New approach to probability estimate of femoral neck fracture by fall (Slovak regression model).
Wendlova, J
2009-01-01
3,216 Slovak women with primary or secondary osteoporosis or osteopenia, aged 20-89 years, were examined with the DXA bone densitometer (dual energy X-ray absorptiometry, GE, Prodigy - Primo), x = 58.9, 95% C.I. (58.42; 59.38). The values of the following variables were measured for each patient: FSI (femur strength index), T-score total hip left, alpha angle - left, theta angle - left, and HAL (hip axis length) left; BMI (body mass index) was calculated from the height and weight of the patients. The regression model determined the following order of independent variables according to the intensity of their influence upon the occurrence of values of the dependent FSI variable: 1. BMI, 2. theta angle, 3. T-score total hip, 4. alpha angle, 5. HAL. The regression model equation, calculated from the variables monitored in the study, enables a doctor in practice to determine the probability magnitude (absolute risk) of the occurrence of a pathological value of FSI (FSI < 1) in the femoral neck area, i.e., it allows a probability estimate of a femoral neck fracture by fall for Slovak women. 1. The Slovak regression model differs from previously published regression models in its chosen independent variables and a dependent variable belonging to biomechanical variables characterising bone quality. 2. The Slovak regression model excludes the inaccuracies of other models, which are not able to define precisely the current and past clinical condition of tested patients (e.g., to define the length and dose of exposure to risk factors). 3. The Slovak regression model opens the way to a new method of estimating the probability (absolute risk) or the odds of a femoral neck fracture by fall, based upon bone quality determination. 4. It is assumed that development will proceed by improving the methods for measuring bone quality and determining the probability of fracture by fall (Tab. 6, Fig. 3, Ref. 22).
Bayesian models for syndrome- and gene-specific probabilities of novel variant pathogenicity.
Ruklisa, Dace; Ware, James S; Walsh, Roddy; Balding, David J; Cook, Stuart A
2015-01-01
With the advent of affordable and comprehensive sequencing technologies, access to molecular genetics for clinical diagnostics and research applications is increasing. However, variant interpretation remains challenging, and tools that close the gap between data generation and data interpretation are urgently required. Here we present a transferable approach to help address the limitations in variant annotation. We develop a network of Bayesian logistic regression models that integrate multiple lines of evidence to evaluate the probability that a rare variant is the cause of an individual's disease. We present models for genes causing inherited cardiac conditions, though the framework is transferable to other genes and syndromes. Our models report a probability of pathogenicity, rather than a categorisation into pathogenic or benign, which captures the inherent uncertainty of the prediction. We find that gene- and syndrome-specific models outperform genome-wide approaches, and that the integration of multiple lines of evidence performs better than individual predictors. The models are adaptable to incorporate new lines of evidence, and results can be combined with familial segregation data in a transparent and quantitative manner to further enhance predictions. Though the probability scale is continuous, and innately interpretable, performance summaries based on thresholds are useful for comparisons. Using a threshold probability of pathogenicity of 0.9, we obtain a positive predictive value of 0.999 and sensitivity of 0.76 for the classification of variants known to cause long QT syndrome over the three most important genes, which represents sufficient accuracy to inform clinical decision-making. A web tool APPRAISE [http://www.cardiodb.org/APPRAISE] provides access to these models and predictions. Our Bayesian framework provides a transparent, flexible and robust framework for the analysis and interpretation of rare genetic variants. Models tailored to specific
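The threshold-based performance summary described at the end of the abstract (PPV and sensitivity at a probability-of-pathogenicity cutoff of 0.9) can be sketched directly. The scores and true labels below are made-up toy data, not APPRAISE output:

```python
import numpy as np

# Toy probability-of-pathogenicity scores and true labels (assumed data).
scores = np.array([0.99, 0.95, 0.92, 0.85, 0.40, 0.10, 0.93, 0.05])
truth  = np.array([1,    1,    1,    1,    0,    0,    1,    0])

threshold = 0.9
pred = scores >= threshold
tp = np.sum(pred & (truth == 1))     # true positives
fp = np.sum(pred & (truth == 0))     # false positives
fn = np.sum(~pred & (truth == 1))    # false negatives

ppv = tp / (tp + fp)                 # positive predictive value
sensitivity = tp / (tp + fn)
print(ppv, sensitivity)              # → 1.0 0.8
```

Note the trade-off the abstract implies: raising the threshold buys PPV at the cost of sensitivity, while the continuous score itself carries the full uncertainty.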
Multistate modeling of habitat dynamics: Factors affecting Florida scrub transition probabilities
Breininger, D.R.; Nichols, J.D.; Duncan, B.W.; Stolen, Eric D.; Carter, G.M.; Hunt, D.K.; Drese, J.H.
2010-01-01
Many ecosystems are influenced by disturbances that create specific successional states and habitat structures that species need to persist. Estimating transition probabilities between habitat states and modeling the factors that influence such transitions have many applications for investigating and managing disturbance-prone ecosystems. We identify the correspondence between multistate capture-recapture models and Markov models of habitat dynamics. We exploit this correspondence by fitting and comparing competing models of different ecological covariates affecting habitat transition probabilities in Florida scrub and flatwoods, a habitat important to many unique plants and animals. We subdivided a large scrub and flatwoods ecosystem along central Florida's Atlantic coast into 10-ha grid cells, which approximated average territory size of the threatened Florida Scrub-Jay (Aphelocoma coerulescens), a management indicator species. We used 1.0-m resolution aerial imagery for 1994, 1999, and 2004 to classify grid cells into four habitat quality states that were directly related to Florida Scrub-Jay source-sink dynamics and management decision making. Results showed that static site features related to fire propagation (vegetation type, edges) and temporally varying disturbances (fires, mechanical cutting) best explained transition probabilities. Results indicated that much of the scrub and flatwoods ecosystem was resistant to moving from a degraded state to a desired state without mechanical cutting, an expensive restoration tool. We used habitat models parameterized with the estimated transition probabilities to investigate the consequences of alternative management scenarios on future habitat dynamics. We recommend this multistate modeling approach as being broadly applicable for studying ecosystem, land cover, or habitat dynamics. The approach provides maximum-likelihood estimates of transition parameters, including precision measures, and can be used to assess
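The basic estimation step, turning repeated habitat-state observations into a transition probability matrix, can be sketched by counting state-to-state moves between two survey dates and normalizing rows (the maximum-likelihood estimate for a first-order Markov chain, here without covariates or detection error). The state names and survey data are made up for illustration:

```python
import numpy as np

# Hypothetical habitat-quality states and paired survey observations
# (state index of each grid cell before and after one interval).
states = ["open", "optimal", "tall-mix", "tall"]
before = np.array([0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 0, 1])
after  = np.array([0, 1, 1, 2, 2, 3, 1, 3, 3, 2, 0, 1])

k = len(states)
counts = np.zeros((k, k))
for b, a in zip(before, after):
    counts[b, a] += 1

# Row-normalize counts: MLE of the transition probability matrix.
transitions = counts / counts.sum(axis=1, keepdims=True)
print(transitions)   # each row sums to 1
```

The multistate capture-recapture machinery in the paper generalizes this by modeling the transition probabilities as functions of covariates (fires, edges, mechanical cutting) rather than as free parameters.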
Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis.
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong; Ginzburg, Lev; Berleant, Daniel J.; Ferson, Scott; Hajagos, Janos; Nelsen, Roger B.
2004-10-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
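One of the bounding results reviewed here can be stated compactly: with no information about the dependence between two variables, the joint CDF is bracketed by the Fréchet-Hoeffding bounds. A minimal sketch (function name is illustrative):

```python
def frechet_bounds(u, v):
    """Best-possible pointwise bounds on the joint CDF P(X<=x, Y<=y)
    given only the marginal values u = F_X(x), v = F_Y(y) and no
    information about the dependence between X and Y."""
    lower = max(0.0, u + v - 1.0)   # countermonotonic extreme
    upper = min(u, v)               # comonotonic extreme
    return lower, upper
```

Any dependence model (independence, a copula, a rank correlation constraint) tightens these bounds; the report reviews how.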
A Mathematical Model for Calculating Detection Probability of a Diffusion Target.
1984-09-01
The target's motion is described by a diffusion model. The model assumes a stationary searcher equipped with a "cookie-cutter" sensor of radius R: the detection probability of a target inside the disk of radius R is 1, and outside it is 0. To construct the model, a Monte Carlo simulation program is used.
Not quite normal: Consequences of violating the assumption of normality in regression mixture models
Lee Van Horn, M.; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models are a new approach for finding differential effects which have only recently begun to be used in applied research. This approach comes at the cost of the assumption that error terms are normally distributed within classes. The current study uses Monte Carlo simulations to explore the effects of relatively minor violations of this assumption; the use of an ordered polytomous outcome is then examined as an alternative that makes somewhat weaker assumptions; finally, both approaches are demonstrated with an applied example looking at differences in the effects of family management on the highly skewed outcome of drug use. Results show that violating the assumption of normal errors results in systematic bias in both latent class enumeration and parameter estimates. Additional classes that reflect violations of distributional assumptions are found. Under some conditions it is possible to come to conclusions that are consistent with the effects in the population, but when errors are skewed in both classes the results typically no longer reflect even the pattern of effects in the population. The polytomous regression model performs better under all scenarios examined and comes to reasonable results with the highly skewed outcome in the applied example. We recommend that careful evaluation of model sensitivity to distributional assumptions be the norm when conducting regression mixture models. PMID:22754273
NASA Astrophysics Data System (ADS)
Wei, Robert P.; Harlow, D. Gary
2005-01-01
Life prediction and reliability assessment are essential components for the life-cycle engineering and management (LCEM) of modern engineered systems. These systems can range from microelectronic and bio-medical devices to large machinery and structures. To be effective, the underlying approach to LCEM must be transformed to embody mechanistically based probability modelling, vis-à-vis the more traditional experientially based statistical modelling, for predicting damage evolution and distribution. In this paper, the probability and statistical approaches are compared and differentiated. The process of model development on the basis of mechanistic understanding derived from critical experiments is illustrated through selected examples. The efficacy of this approach is illustrated through an example of the evolution and distribution of corrosion and corrosion fatigue damage in aluminium alloys in relation to aircraft that had been in long-term service.
Universality of the crossing probability for the Potts model for q=1, 2, 3, 4.
Vasilyev, Oleg A
2003-08-01
The universality of the crossing probability pi_hs of a system to percolate only in the horizontal direction was investigated numerically by a cluster Monte Carlo algorithm for the q-state Potts model for q=2, 3, 4 and for percolation q=1. We check the percolation through Fortuin-Kasteleyn clusters near the critical point on the square lattice by using the representation of the Potts model as the correlated site-bond percolation model. It was shown that the probability of a system to percolate only in the horizontal direction pi_hs has the universal form pi_hs = A(q)Q(z) for q=1, 2, 3, 4 as a function of the scaling variable z = b(q)L^(1/nu(q))[p - p_c(q,L)].
Impact of stray charge on interconnect wire via probability model of double-dot system
NASA Astrophysics Data System (ADS)
Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang
2016-02-01
The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. Compared with previous stray-charge analyses that use the ICHA or a full-basis calculation, the probability model greatly reduces the computational burden. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire where a stray charge will cause the target cell to switch unsuccessfully. The failure is exhibited by two new states dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).
A Probability Model of Decompression Sickness at 4.3 Psia after Exercise Prebreathe
NASA Technical Reports Server (NTRS)
Conkin, Johnny; Gernhardt, Michael L.; Powell, Michael R.; Pollock, Neal
2004-01-01
Exercise PB can reduce the risk of decompression sickness on ascent to 4.3 psia when performed at the proper intensity and duration. Data are from seven tests. PB times ranged from 90 to 150 min. High intensity, short duration dual-cycle ergometry was done during the PB. This was done alone, or combined with intermittent low intensity exercise or periods of rest for the remaining PB. Nonambulating men and women performed light exercise from a semi-recumbent position at 4.3 psia for four hrs. The Research Model with age tested whether the probability of DCS increases with advancing age. The NASA Model with gender hypothesized that the probability of DCS is greater for women. Accounting for exercise and rest during PB with a variable half-time compartment for computed tissue N2 pressure advances our probability modeling of hypobaric DCS. Both models show that a small increase in exercise intensity during PB reduces the risk of DCS, and a larger increase in exercise intensity dramatically reduces risk. These models support the hypothesis that aerobic fitness is an important consideration for the risk of hypobaric DCS when exercise is performed during the PB.
Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng
2013-01-01
New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of multivariate analysis of natural disasters, and the current shortage of such analyses, and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and according to a bivariate definition of severe dust storms, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis.
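The abstract does not give the fitted copula family or parameters; as a hedged sketch, a Gumbel copula can express the "both variables exceed their thresholds" joint return period (all names, theta, and the event rate are illustrative assumptions):

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v); theta >= 1 controls upper-tail dependence.
    theta = 1 reduces to independence, C(u, v) = u * v."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period(u, v, theta, events_per_year):
    """'AND' return period: both variables (e.g. maximum wind speed and
    duration) exceed their marginal quantiles u and v simultaneously.
    P(X > x, Y > y) = 1 - u - v + C(u, v) by inclusion-exclusion."""
    p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
    return 1.0 / (events_per_year * p_and)
```

With positive dependence (theta > 1) the joint return period of a compound extreme is shorter than under independence, which is why a univariate analysis can understate the risk.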
NASA Astrophysics Data System (ADS)
Ong, Zhun-Yong; Zhang, Gang
2015-05-01
The Kapitza or interfacial thermal resistance at the boundary of two different insulating solids depends on the transmission of phonons across the interface and the phonon dispersion of either material. We extend the existing atomistic Green's function (AGF) method to compute the probability for individual phonon modes to be transmitted across the interface. The extended method is based on the concept of the Bloch matrix and allows us to determine the wavelength and polarization dependence of the phonon transmission as well as to analyze efficiently the contribution of individual acoustic and optical phonon modes to interfacial thermal transport. The relationship between the phonon transmission probability and dispersion is explicitly established. A detailed description of the method is given and key formulas are provided. To illustrate the role of the phonon dispersion in interfacial thermal conduction, we apply the method to study phonon transmission and thermal transport at the armchair interface between monolayer graphene and hexagonal boron nitride. We find that the phonon transmission probability is high for longitudinal (LA) and flexural (ZA) acoustic phonons at normal and oblique incidence to the interface. At room temperature, the dominant contribution to interfacial thermal transport comes from the transverse-polarized phonons in graphene (45.5%) and longitudinal-polarized phonons in boron nitride (47.4%).
Missing data approaches for probability regression models with missing outcomes with applications
Qi, Li; Sun, Yanqing
2016-01-01
In this paper, we investigate several well known approaches for missing data and their relationships for the parametric probability regression model Pβ(Y|X) when outcome of interest Y is subject to missingness. We explore the relationships between the mean score method, the inverse probability weighting (IPW) method and the augmented inverse probability weighted (AIPW) method with some interesting findings. The asymptotic distributions of the IPW and AIPW estimators are derived and their efficiencies are compared. Our analysis details how efficiency may be gained from the AIPW estimator over the IPW estimator through estimation of validation probability and augmentation. We show that the AIPW estimator that is based on augmentation using the full set of observed variables is more efficient than the AIPW estimator that is based on augmentation using a subset of observed variables. The developed approaches are applied to a Poisson regression model with missing outcomes based on auxiliary outcomes and a validated sample for true outcomes. We show that, by stratifying based on a set of discrete variables, the proposed statistical procedure can be formulated to analyze automated records that only contain summarized information at categorical levels. The proposed methods are applied to analyze influenza vaccine efficacy for an influenza vaccine study conducted in Temple-Belton, Texas during the 2000-2001 influenza season. PMID:26900543
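As a toy illustration of the IPW/AIPW comparison discussed above (with an assumed known validation probability and a correctly specified augmentation model; all parameters are invented for the example, and mean estimation stands in for the paper's regression setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)           # outcome; true mean is 2.0
pi = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # known validation (observation) probability
r = rng.random(n) < pi                     # r = 1: outcome observed

# Complete-case mean is biased because observation depends on x
cc_mean = y[r].mean()

# IPW: weight each observed outcome by 1 / pi
ipw_mean = np.mean(r * y / pi)

# AIPW: augment with an outcome model m(x) (here the true regression 2 + x)
m = 2.0 + x
aipw_mean = np.mean(r * y / pi + (1.0 - r / pi) * m)
```

Both weighted estimators recover the true mean while the complete-case mean is biased upward; the augmentation term typically also reduces variance, which is the efficiency gain the paper quantifies.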
Markov, Krassimir; Schinkel, Colleen; Stavreva, Nadia; Stavrev, Pavel; Weldon, Michael; Fallone, B Gino
2006-09-01
A very important issue in contemporary inverse treatment radiotherapy planning is the specification of proper dose-volume constraints limiting the treatment planning algorithm from delivering high doses to the normal tissue surrounding the tumor. Recently we have proposed a method called reverse mapping of normal tissue complication probabilities (NTCP) onto dose-volume histogram (DVH) space, which allows the calculation of appropriate biologically based dose-volume constraints to be used in the inverse treatment planning. The method of reverse mapping requires random sampling from the functional space of all monotonically decreasing functions in the unit square. We develop, in this paper, a random function generator for the purpose of the reverse mapping. Since the proposed generator is based on the theory of random walk, it is therefore designated in this work as a random walk DVH generator. It is theoretically determined that the distribution of the number of monotonically decreasing functions passing through a point in the dose volume histogram space follows the hypergeometric distribution. The proposed random walk DVH generator thus simulates, in a random fashion, trajectories of monotonically decreasing functions (finite series) situated in the unit square [0, 1] × [0, 1] using the hypergeometric distribution. The DVH generator is an important tool in the study of reverse NTCP mapping for the calculation of biologically based dose-volume constraints for inverse treatment planning.
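The generator itself is not reproduced in the abstract, but uniform sampling of monotonically decreasing staircase functions in the unit square can be sketched with a simple lattice random walk: shuffling a fixed multiset of right- and down-steps yields each monotone lattice path with equal probability, which is the combinatorial setting in which the hypergeometric path counts arise. Function name and step count are illustrative:

```python
import random

def random_monotone_dvh(n_steps=20, seed=None):
    """Sample one monotonically decreasing staircase function from (0, 1)
    to (1, 0) in the unit square. The path consists of n_steps right-moves
    and n_steps down-moves in uniformly random order, so every monotone
    lattice path is drawn with the same probability."""
    rng = random.Random(seed)
    steps = ["R"] * n_steps + ["D"] * n_steps
    rng.shuffle(steps)
    x, y = 0.0, 1.0
    path = [(x, y)]
    for s in steps:
        if s == "R":
            x += 1.0 / n_steps
        else:
            y -= 1.0 / n_steps
        path.append((x, y))
    return path

path = random_monotone_dvh(seed=1)
assert path[0] == (0.0, 1.0)
```

This is only a sketch of the sampling idea, not the authors' generator, which additionally conditions the walk using the hypergeometric distribution of path counts through each DVH point.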
Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors
Alamgir; Ali, Amjad; Khan, Dost Muhammad; Khan, Sajjad Ahmad; Khan, Zardad
2016-01-01
Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of the deviations from equilibrium conditions which may explain the economic behavior of many variables that appear non stationary from a linear viewpoint. Many researchers employ the Kapetanios test which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained the critical values for both the finite variance and infinite variance cases. However, Cook's test statistics are oversized. Researchers have found that using conventional tests is risky, although the best performance among them is obtained with a heteroscedasticity-consistent covariance matrix estimator (HCCME). The oversizing of LM tests can be reduced by employing fixed design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices has been derived, and the results are reported for various sample sizes in which size distortion is reduced. The properties of estimates of ESTAR models have been investigated when errors are assumed non-normal. We compare the results obtained through the fitting of nonlinear least squares with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes. PMID:27898702
Gray, David R
2010-12-01
As global trade increases so too does the probability of introduction of alien species to new locations. Estimating the probability of an alien species introduction and establishment following introduction is a necessary step in risk estimation (probability of an event times the consequences, in the currency of choice, of the event should it occur); risk estimation is a valuable tool for reducing the risk of biological invasion with limited resources. The Asian gypsy moth, Lymantria dispar (L.), is a pest species whose consequence of introduction and establishment in North America and New Zealand warrants over US$2 million per year in surveillance expenditure. This work describes the development of a two-dimensional phenology model (GLS-2d) that simulates insect development from source to destination and estimates: (1) the probability of introduction from the proportion of the source population that would achieve the next developmental stage at the destination and (2) the probability of establishment from the proportion of the introduced population that survives until a stable life cycle is reached at the destination. The effect of shipping schedule on the probabilities of introduction and establishment was examined by varying the departure date from 1 January to 25 December by weekly increments. The effect of port efficiency was examined by varying the length of time that invasion vectors (shipping containers and ship) were available for infection. The application of GLS-2d is demonstrated using three common marine trade routes (to Auckland, New Zealand, from Kobe, Japan, and to Vancouver, Canada, from Kobe and from Vladivostok, Russia).
A predictive model to estimate the pretest probability of metastasis in patients with osteosarcoma
Wang, Sisheng; Zheng, Shaoluan; Hu, Kongzu; Sun, Heyan; Zhang, Jinling; Rong, Genxiang; Gao, Jie; Ding, Nan; Gui, Binjie
2017-01-01
Abstract Osteosarcomas (OSs) represent a huge challenge to improve the overall survival, especially in metastatic patients. Increasing evidence indicates that both tumor-associated and host-associated elements, especially the systemic inflammatory response, have a remarkable effect on the prognosis of cancer patients. By analyzing a series of prognostic factors, including age, gender, primary tumor size, tumor location, tumor grade, histological classification, monocyte ratio, and NLR ratio, a clinical predictive model was established using stepwise logistic regression involving circulating leukocytes to compute the estimated probabilities of metastases for OS patients. The clinical predictive model was described by the following equations: probability of developing metastases = e^x/(1 + e^x), x = −2.150 + (1.680 × monocyte ratio) + (1.533 × NLR ratio), where e is the base of the natural logarithm; the assignment to each of the 2 variables is 1 if the ratio >1 (otherwise 0). The calculated AUC of the receiver-operating characteristic curve of 0.793 revealed good accuracy of this model (95% CI, 0.740–0.845). The predicted probabilities that we generated with the cross-validation procedure had a similar AUC (0.743; 95% CI, 0.684–0.803). The present model could be used to improve the outcomes of the metastases by developing a predictive model considering circulating leukocyte influence to estimate the pretest probability of developing metastases in patients with OS. PMID:28099353
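The published logistic equation can be evaluated directly. A minimal sketch (the function name is invented; the dichotomization of the two ratios follows the abstract's coding rule):

```python
import math

def metastasis_probability(monocyte_ratio, nlr_ratio):
    """Pretest probability of metastasis from the published logistic model.
    Each ratio is dichotomized as in the abstract: coded 1 if > 1, else 0."""
    m = 1 if monocyte_ratio > 1 else 0
    n = 1 if nlr_ratio > 1 else 0
    x = -2.150 + 1.680 * m + 1.533 * n
    return math.exp(x) / (1.0 + math.exp(x))

# Both ratios elevated: x = -2.150 + 1.680 + 1.533 = 1.063, p ≈ 0.743
print(round(metastasis_probability(1.2, 1.5), 3))
```

With both ratios at or below 1 the model gives the baseline probability exp(−2.150)/(1 + exp(−2.150)) ≈ 0.104.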
Salmon, Octavio R; Crokidakis, Nuno; Nobre, Fernando D
2009-02-04
A random-field Ising model that is capable of exhibiting a rich variety of multicritical phenomena, as well as a smearing of such behavior, is investigated. The model consists of an infinite-range-interaction Ising ferromagnet in the presence of a triple Gaussian random magnetic field, which is defined as a superposition of three Gaussian distributions with the same width σ, centered at H = 0 and H = ±H0, with probabilities p and (1-p)/2, respectively. Such a distribution is very general and recovers, as limiting cases, the trimodal, bimodal and Gaussian probability distributions. In particular, the special case of the random-field Ising model in the presence of a trimodal probability distribution (the limit σ → 0) is able to present a rather nontrivial multicritical behavior. It is argued that the triple Gaussian probability distribution is appropriate for a physical description of some diluted antiferromagnets in the presence of a uniform external field, for which the corresponding physical realization consists of an Ising ferromagnet under random fields whose distribution appears to be well represented in terms of a superposition of two parts, namely a trimodal and a continuous contribution. The model is investigated by means of the replica method, and phase diagrams are obtained within the replica-symmetric solution, which is known to be stable for the present system. A rich variety of phase diagrams is presented, with one or two distinct ferromagnetic phases, continuous and first-order transition lines, tricritical, fourth-order and critical end points, and many other interesting multicritical phenomena. Additionally, the present model carries the possibility of destroying such multicritical phenomena due to an increase in the randomness, i.e. increasing σ, which represents a very common feature in real systems.
NASA Astrophysics Data System (ADS)
Denzler, Stefan M.; Dacorogna, Michel M.; Muller, Ulrich A.; McNeil, Alexander J.
2005-05-01
Credit risk models like Moody's KMV are now well established in the market and give bond managers reliable default probabilities for individual firms. Until now it has been hard to relate those probabilities to the actual credit spreads observed on the market for corporate bonds. Inspired by the existence of scaling laws in financial markets that deviate from Gaussian behavior (Dacorogna et al. 2001; DiMatteo et al. 2005), we develop a model that quantitatively links those default probabilities to credit spreads (market prices). The main input quantities to this study are merely industry yield data of different times to maturity and expected default frequencies (EDFs) of Moody's KMV. The empirical results of this paper clearly indicate that the model can be used to calculate approximate credit spreads (market prices) from EDFs, independent of the time to maturity and the industry sector under consideration. Moreover, the model is effective in an out-of-sample setting: it produces consistent results on the European bond market where data are scarce, and can be adequately used to approximate credit spreads at the corporate level.
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
Modeling the presence probability of invasive plant species with nonlocal dispersal.
Strickland, Christopher; Dangelmayr, Gerhard; Shipman, Patrick D
2014-08-01
Mathematical models for the spread of invading plant organisms typically utilize population growth and dispersal dynamics to predict the time-evolution of a population distribution. In this paper, we revisit a particular class of deterministic contact models obtained from a stochastic birth process for invasive organisms. These models were introduced by Mollison (J R Stat Soc 39(3):283, 1977). We derive the deterministic integro-differential equation of a more general contact model and show that the quantity of interest may be interpreted not as population size, but rather as the probability of species occurrence. We proceed to show how landscape heterogeneity can be included in the model by utilizing the concept of statistical habitat suitability models which condense diverse ecological data into a single statistic. As ecologists often deal with species presence data rather than population size, we argue that a model for probability of occurrence allows for a realistic determination of initial conditions from data. Finally, we present numerical results of our deterministic model and compare them to simulations of the underlying stochastic process.
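A rough numerical sketch of a deterministic contact model of the kind discussed, in which the quantity evolved is the probability of occurrence p(x, t). The specific equation dp/dt = alpha·(1 − p)·(k ∗ p), the Gaussian kernel, and all grid parameters are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

# 1-D sketch: presence probability p(x, t) grows where unoccupied sites
# (factor 1 - p) receive propagules dispersed from occupied sites (k * p).
L, n, dt, alpha = 50.0, 512, 0.1, 2.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)       # Gaussian dispersal kernel
p = np.where(np.abs(x) < 1.0, 0.5, 0.0)          # initial occurrence probability

for _ in range(200):                              # integrate to t = 20
    contact = np.convolve(p, k, mode="same") * dx # (k * p)(x)
    p = np.clip(p + dt * alpha * (1 - p) * contact, 0.0, 1.0)
```

Because p is a probability rather than a population size, it saturates at 1 and initial conditions can be read directly off presence/absence data, which is the practical point the paper argues.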
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and therefore, recommend use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
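A minimal data-generating sketch of a logit-normal mixed model of the kind described (station count, coefficient, and random-effect variance are invented for illustration; the paper's four estimation algorithms are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations, n_days = 50, 90

# Logit-normal mixed model: a per-station random intercept on the logit scale
beta0 = -1.0                                   # fixed effect (overall log-odds)
u = rng.normal(0.0, 0.8, size=n_stations)      # normal random station effects
logit_p = beta0 + u[:, None]                   # one row per station
p = 1.0 / (1.0 + np.exp(-logit_p))             # rain probability per station
rain = rng.random((n_stations, n_days)) < p    # simulated daily rain occurrence
```

Fitting the reverse direction (recovering beta0 and the random-effect variance from `rain`) is where the GLMM algorithms compared in the paper, and their sensitivity to tuning parameters, come in.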
Neale, Michael C.; Clark, Shaunna L.; Dolan, Conor V.; Hunter, Michael D.
2015-01-01
A linear latent growth curve mixture model with regime switching is extended in 2 ways. Previously, the matrix of first-order Markov switching probabilities was specified to be time-invariant, regardless of the pair of occasions being considered. The first extension, time-varying transitions, specifies different Markov transition matrices between each pair of occasions. The second extension is second-order time-invariant Markov transition probabilities, such that the probability of switching depends on the states at the 2 previous occasions. The models are implemented using the R package OpenMx, which facilitates data handling, parallel computation, and further model development. It also enables the extraction and display of relative likelihoods for every individual in the sample. The models are illustrated with previously published data on alcohol use observed on 4 occasions as part of the National Longitudinal Survey of Youth, and demonstrate improved fit to the data. PMID:26924921
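The first extension can be illustrated with a toy two-state chain whose transition matrix differs between each pair of adjacent occasions (the matrices are invented; the paper itself implements the models in the R package OpenMx):

```python
import numpy as np

rng = np.random.default_rng(7)

# Time-varying first-order transitions: one matrix per pair of occasions.
# Row i of each matrix is P(next state | current state = i).
P12 = np.array([[0.9, 0.1],
                [0.3, 0.7]])
P23 = np.array([[0.6, 0.4],
                [0.2, 0.8]])

def simulate(initial, transitions):
    """Draw one state sequence across the occasions."""
    state = rng.choice(len(initial), p=initial)
    states = [state]
    for P in transitions:
        state = rng.choice(P.shape[1], p=P[state])
        states.append(state)
    return states

# Marginal state distribution at each occasion follows by matrix products
pi1 = np.array([1.0, 0.0])
pi3 = pi1 @ P12 @ P23
```

The second extension (second-order transitions) would instead index the switching probability by the states at the two previous occasions, i.e. a matrix per (state pair, next state).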
A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.
Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen
2014-01-01
Risk classification and survival probability prediction are two major goals in survival data analysis since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate finite sample performance of the proposed method under various settings. Applications to glioma tumor data and breast cancer gene expression survival data are shown to illustrate the new methodology in real data analysis.
Blocking probability in the hose-model optical VPN with different number of wavelengths
NASA Astrophysics Data System (ADS)
Roslyakov, Alexander V.
2017-04-01
Connection setup with guaranteed quality of service (QoS) in an optical virtual private network (OVPN) is a major goal for network providers. In order to support this, we propose a QoS-based OVPN connection setup mechanism over a WDM network to the end customer. The proposed WDM network model can be specified in terms of a QoS parameter such as blocking probability. We estimated this QoS parameter based on the hose-model OVPN. In this mechanism, OVPN connections can also be created or deleted according to the availability of wavelengths in the optical path. In this paper we have considered the impact of the number of wavelengths on the computation of blocking probability. The goal of the work is to dynamically provide the best OVPN connection during frequent arrivals of connection requests with QoS requirements.
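The abstract does not give the hose-model computation itself; as a common baseline assumption, treating each wavelength on a link as a server of a loss system gives the classical Erlang B blocking probability, which shows how blocking falls as the number of wavelengths grows:

```python
def erlang_b(offered_load, n_wavelengths):
    """Erlang B blocking probability for offered traffic (in Erlangs) on a
    link with n_wavelengths channels, computed with the stable recurrence
    B(0) = 1;  B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, n_wavelengths + 1):
        b = offered_load * b / (k + offered_load * b)
    return b
```

The recurrence avoids the overflow-prone factorials of the closed-form Erlang B expression; real WDM blocking analysis must additionally account for the wavelength-continuity constraint, which this baseline ignores.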
Modelling secondary microseismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, L.; Stutzmann, E.; Capdeville, Y.; Ardhuin, F.; Schimmel, M.; Mangeney, A.; Morelli, A.
2013-06-01
Secondary microseisms recorded by seismic stations are generated in the ocean by the interaction of ocean gravity waves. We present here the theory for modelling secondary microseismic noise by normal mode summation. We show that the noise sources can be modelled by vertical forces and how to derive them from a realistic ocean wave model. We then show how to compute the bathymetry excitation effect in a realistic earth model by using normal modes, with a comparison to the Longuet-Higgins approach. The strongest excitation areas in the oceans depend on the bathymetry and period and are different for each seismic mode. Seismic noise is then modelled by normal mode summation considering varying bathymetry. We derive an attenuation model that enables the vertical-component spectra to be fitted well whatever the station location. We show that the fundamental mode of Rayleigh waves is the dominant signal in seismic noise. There is a discrepancy between real and synthetic spectra on the horizontal components that enables us to estimate the amount of Love waves, for which a different source mechanism is needed. Finally, we investigate noise generated in all the oceans around Africa and show that most of the noise recorded in Algeria (TAM station) is generated in the Northern Atlantic, and that there is a seasonal variability in the contribution of each ocean and sea.
A Stochastic Tick-Borne Disease Model: Exploring the Probability of Pathogen Persistence.
Maliyoni, Milliward; Chirove, Faraimunashe; Gaff, Holly D; Govinder, Keshlan S
2017-07-13
We formulate and analyse a stochastic epidemic model for the transmission dynamics of a tick-borne disease in a single population using a continuous-time Markov chain approach. The stochastic model is based on an existing deterministic metapopulation tick-borne disease model. We compare the disease dynamics of the deterministic and stochastic models in order to determine the effect of randomness in tick-borne disease dynamics. The probability of disease extinction and that of a major outbreak are computed and approximated using the multitype Galton-Watson branching process and numerical simulations, respectively. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a disease outbreak is more likely if the disease is introduced by infected deer as opposed to infected ticks. These insights demonstrate the importance of host movement in the expansion of tick-borne diseases into new geographic areas.
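The extinction probability computed above via a multitype Galton-Watson branching process is, in the single-type case, the smallest fixed point of the offspring probability generating function. A minimal single-type sketch (the Poisson offspring law here is an illustrative assumption, not the tick-borne model's actual offspring distribution):

```python
import math

def extinction_probability(pgf, tol=1e-12, max_iter=100_000):
    """Smallest root of q = G(q), found by iterating q <- G(q) from q = 0."""
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Supercritical example: Poisson(1.5) offspring, pgf G(s) = exp(lam*(s-1)).
q_super = extinction_probability(lambda s: math.exp(1.5 * (s - 1.0)))
# Subcritical example: mean offspring 0.8 < 1, so extinction is certain.
q_sub = extinction_probability(lambda s: math.exp(0.8 * (s - 1.0)))
```

An outbreak probability is then approximated as 1 minus q raised to the number of initial infectives, which is why an introduction via a host type with smaller q (here, infected deer) makes a major outbreak more likely.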
A Normalization Model of Attentional Modulation of Single Unit Responses
Lee, Joonyeol; Maunsell, John H. R.
2009-01-01
Although many studies have shown that attention to a stimulus can enhance the responses of individual cortical sensory neurons, little is known about how attention accomplishes this change in response. Here, we propose that attention-based changes in neuronal responses depend on the same response normalization mechanism that adjusts sensory responses whenever multiple stimuli are present. We have implemented a model of attention that assumes that attention works only through this normalization mechanism, and show that it can replicate key effects of attention. The model successfully explains how attention changes the gain of responses to individual stimuli and also why modulation by attention is more robust and not a simple gain change when multiple stimuli are present inside a neuron's receptive field. Additionally, the model accounts well for physiological data that measure separately attentional modulation and sensory normalization of the responses of individual neurons in area MT in visual cortex. The proposal that attention works through a normalization mechanism sheds new light on a broad range of observations on how attention alters the representation of sensory information in cerebral cortex. PMID:19247494
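The core computation this class of model describes can be sketched as divisive normalization with an attentional gain on each stimulus drive (the specific gains, drives, and semisaturation constant below are illustrative placeholders, not fitted MT parameters):

```python
def normalized_responses(drives, attn_gains, sigma=1.0):
    """Divisive normalization with attentional gain:
    R_i = g_i * d_i / (sigma + sum_j g_j * d_j)."""
    pool = sigma + sum(g * d for g, d in zip(attn_gains, drives))
    return [g * d / pool for g, d in zip(attn_gains, drives)]

# One stimulus: attention acts much like a gain change.
r_single_attended = normalized_responses([10.0], [2.0])[0]
r_single_neutral = normalized_responses([10.0], [1.0])[0]
# Two stimuli in the receptive field: attending stimulus 0 also
# suppresses stimulus 1 through the shared normalization pool.
r_pair = normalized_responses([10.0, 10.0], [2.0, 1.0])
```

With two stimuli the attended response is not just a scaled copy of the neutral one, illustrating the abstract's point that attentional modulation is not a simple gain change when multiple stimuli share the receptive field.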
Growth mixture modeling with non-normal distributions.
Muthén, Bengt; Asparouhov, Tihomir
2015-03-15
A limiting feature of previous work on growth mixture modeling is the assumption of normally distributed variables within each latent class. With strongly non-normal outcomes, this means that several latent classes are required to capture the observed variable distributions. Being able to relax the assumption of within-class normality has the advantage that a non-normal observed distribution does not necessitate using more than one class to fit the distribution. It is valuable to add parameters representing the skewness and the thickness of the tails. A new growth mixture model of this kind is proposed drawing on recent work in a series of papers using the skew-t distribution. The new method is illustrated using the longitudinal development of body mass index in two data sets. The first data set is from the National Longitudinal Survey of Youth covering ages 12-23 years. Here, the development is related to an antecedent measuring socioeconomic background. The second data set is from the Framingham Heart Study covering ages 25-65 years. Here, the development is related to the concurrent event of treatment for hypertension using a joint growth mixture-survival model.
NASA Astrophysics Data System (ADS)
Zhong, H.; van Overloop, P.-J.; van Gelder, P. H. A. J. M.
2013-07-01
The Lower Rhine Delta, a transitional area between the River Rhine and Meuse and the North Sea, is at risk of flooding induced by infrequent events of a storm surge or upstream flooding, or by more infrequent events of a combination of both. A joint probability analysis of the astronomical tide, the wind induced storm surge, the Rhine flow and the Meuse flow at the boundaries is established in order to produce the joint probability distribution of potential flood events. Three individual joint probability distributions are established corresponding to three potential flooding causes: storm surges and normal Rhine discharges, normal sea levels and high Rhine discharges, and storm surges and high Rhine discharges. For each category, its corresponding joint probability distribution is applied, in order to stochastically simulate a large number of scenarios. These scenarios can be used as inputs to a deterministic 1-D hydrodynamic model in order to estimate the high water level frequency curves at the transitional locations. The results present the exceedance probability of the present design water level for the economically important cities of Rotterdam and Dordrecht. The calculated exceedance probability is evaluated and compared to the governmental norm. Moreover, the impact of climate change on the high water level frequency curves is quantified for the year 2050 in order to assist in decisions regarding the adaptation of the operational water management system and the flood defense system.
Logit-normal mixed model for Indian Monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-03-01
Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
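A logit-normal mixed model places a normally distributed random intercept (here, per weather station) inside a logistic link, so the marginal probability of an extreme-rainfall day requires integrating over that intercept. A Monte Carlo sketch (the coefficients and single-covariate setup are hypothetical, not the paper's fitted values):

```python
import math
import random

def marginal_extreme_prob(beta0, betas, x, sigma_u, n_draws=20_000, seed=0):
    """E_u[logistic(beta0 + x . betas + u)] with u ~ N(0, sigma_u^2),
    estimated by plain Monte Carlo over the random intercept."""
    rng = random.Random(seed)
    eta = beta0 + sum(b * xi for b, xi in zip(betas, x))
    total = 0.0
    for _ in range(n_draws):
        u = rng.gauss(0.0, sigma_u)
        total += 1.0 / (1.0 + math.exp(-(eta + u)))
    return total / n_draws
```

With sigma_u = 0 this collapses to an ordinary logistic regression probability; station-to-station heterogeneity (sigma_u > 0) pulls the marginal probability of a rare event upward.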
Probability of ventricular fibrillation: allometric model based on the ST deviation
2011-01-01
Background Allometry, in general biology, measures the relative growth of a part in relation to the whole living organism. Using reported clinical data, we apply this concept to evaluating the probability of ventricular fibrillation based on electrocardiographic ST-segment deviation values. Methods Data collected in previous reports were used to fit an allometric model in order to estimate ventricular fibrillation probability. Patients presenting with death, myocardial infarction or unstable angina were included to calculate this probability as VFp = δ + β(ST) for three different ST deviations. The coefficients δ and β were obtained as the best fit to the clinical data over observational periods of 1, 6, 12 and 48 months from occurrence of the first reported chest pain accompanied by ST deviation. Results Applying the above equation in log-log representation, the fitting procedure produced the following overall coefficients: average β = 0.46 (maximum 0.62, minimum 0.42); average δ = 1.28 (maximum 1.79, minimum 0.92). For a 2 mm ST deviation, the predicted ventricular fibrillation probability ranged from about 13% at 1 month up to 86% at 4 years after the original cardiac event. Conclusions These results, at least preliminarily, appear acceptable but still call for full clinical testing. The model seems promising, especially if other parameters were taken into account, such as blood cardiac enzyme concentrations, ischemic or infarcted epicardial areas, or ejection fraction. Considering these results and the few references found in the literature, we conclude that the allometric model shows good practical predictive value to aid medical decisions. PMID:21226961
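Since the fit is performed in log-log representation, the allometric relation is linear there and can be recovered by ordinary least squares. A minimal sketch, reading the model as log(VFp) = δ + β·log(ST) and using synthetic data built from coefficients in the reported range (an assumption on our part, not the paper's dataset):

```python
import math

def fit_loglog(st_mm, vfp_pct):
    """OLS fit of log(VFp) = delta + beta * log(ST); returns (delta, beta)."""
    xs = [math.log(s) for s in st_mm]
    ys = [math.log(v) for v in vfp_pct]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    beta = sxy / sxx
    return my - beta * mx, beta

# Hypothetical exact data generated from delta = 1.28, beta = 0.46.
st = [1.0, 2.0, 3.0]
vfp = [math.exp(1.28 + 0.46 * math.log(s)) for s in st]
delta_hat, beta_hat = fit_loglog(st, vfp)
```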
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross-validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV-positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results.
NASA Astrophysics Data System (ADS)
Karakostas, Vassilis; Papadimitriou, Eleftheria; Gospodinov, Dragomir
2014-04-01
The 2013 January 8 Mw 5.8 North Aegean earthquake sequence took place on one of the ENE-WSW trending parallel dextral strike-slip fault branches in this area, in the continuation of the 1968 large (M = 7.5) rupture. The source mechanism of the main event indicates predominantly strike-slip faulting, in agreement with what is expected from regional seismotectonics. It was the largest event to have occurred in the area since the establishment of the Hellenic Unified Seismological Network (HUSN), with an adequate number of stations at close distances and full azimuthal coverage, thus providing the chance of an exhaustive analysis of its aftershock sequence. The main shock was followed by a handful of aftershocks with M ≥ 4.0 and tens with M ≥ 3.0. Relocation was performed using the recordings from HUSN and a proper crustal model for the area, along with time corrections at each station relative to the model used. Investigation of the spatial and temporal behaviour of the seismicity revealed possible triggering of adjacent fault segments. Theoretical static stress changes from the main shock give a preliminary explanation for the aftershock distribution away from the main rupture. The off-fault seismicity is perfectly explained if μ > 0.5 and B = 0.0, evidencing high fault friction. In an attempt to forecast occurrence probabilities of strong events (Mw ≥ 5.0), estimations were performed following the Restricted Epidemic Type Aftershock Sequence (RETAS) model. The identified best-fitting MOF model was used to execute 1-day forecasts for such aftershocks and follow the probability evolution in time during the sequence. Forecasting was also implemented on the basis of a temporal model of aftershock occurrence different from the modified Omori formula (the ETAS model), which resulted in a probability gain (though small) in strong aftershock forecasting for the beginning of the sequence.
ERIC Educational Resources Information Center
Gugel, John F.
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
A cellular automata model of traffic flow with variable probability of randomization
NASA Astrophysics Data System (ADS)
Zheng, Wei-Fan; Zhang, Ji-Ye
2015-05-01
Research on the stochastic behavior of traffic flow is important to understand the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with a variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of the vehicles ahead, and his decision-making process is related to that potential. Compared with the traditional cellular automata model, the proposed model better captures the driver's random decision-making process based on the vehicle and traffic situation ahead in actual traffic. From the improved model, the fundamental diagram (flow-density relationship) is obtained, and detailed high-density traffic phenomena are reproduced through numerical simulation. Project supported by the National Natural Science Foundation of China (Grant Nos. 11172247, 61273021, 61373009, and 61100118).
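The abstract does not reproduce the interactional potential, so the sketch below uses the classic Nagel-Schreckenberg update rules with a hypothetical gap-dependent randomization probability p(gap) as a stand-in for the potential-dependent rule:

```python
import random

def ca_step(pos, vel, road_len, vmax=5,
            p_of_gap=lambda gap: 0.5 if gap <= 2 else 0.2):
    """One parallel update of a circular single-lane CA:
    accelerate, brake to the gap, randomize with p(gap), then move."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_vel = list(vel)
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % road_len  # empty cells ahead
        v = min(vel[i] + 1, vmax, gap)              # accelerate, then brake
        if v > 0 and random.random() < p_of_gap(gap):
            v -= 1                                  # state-dependent slowdown
        new_vel[i] = v
    return [(pos[i] + new_vel[i]) % road_len for i in range(n)], new_vel

# Relax 20 cars on a 100-cell ring for 200 steps.
random.seed(1)
pos = list(range(0, 100, 5))
vel = [0] * len(pos)
for _ in range(200):
    pos, vel = ca_step(pos, vel, 100)
```

Sweeping the density and averaging flow over many steps reproduces a fundamental diagram; making p depend on the local state is what distinguishes this family of models from the constant-p original.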
Probability of detection model for the non-destructive inspection of steam generator tubes of PWRs
NASA Astrophysics Data System (ADS)
Yusa, N.
2017-06-01
This study proposes a probability of detection (POD) model to discuss the capability of non-destructive testing methods for the detection of stress corrosion cracks appearing in the steam generator tubes of pressurized water reactors. Three-dimensional finite element simulations were conducted to evaluate eddy current signals due to stress corrosion cracks. The simulations consider an absolute type pancake probe and model a stress corrosion crack as a region with a certain electrical conductivity inside to account for eddy currents flowing across a flaw. The probabilistic nature of a non-destructive test is simulated by varying the electrical conductivity of the modelled stress corrosion cracking. A two-dimensional POD model, which provides the POD as a function of the depth and length of a flaw, is presented together with a conventional POD model characterizing a flaw using a single parameter. The effect of the number of the samples on the PODs is also discussed.
A spatial model of bird abundance as adjusted for detection probability
Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.
2009-01-01
Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
Stirk, Emily R; Lythe, Grant; van den Berg, Hugo A; Hurst, Gareth A D; Molina-París, Carmen
2010-04-01
The limiting conditional probability distribution (LCD) has been much studied in the field of mathematical biology, particularly in the context of epidemiology and the persistence of epidemics. However, it has not yet been applied to the immune system. One of the characteristic features of the T cell repertoire is its diversity. This diversity declines in old age, whence the concepts of extinction and persistence are also relevant to the immune system. In this paper we model T cell repertoire maintenance by means of a continuous-time birth and death process on the positive integers, where the origin is an absorbing state. We show that eventual extinction is guaranteed. The late-time behaviour of the process before extinction takes place is modelled by the LCD, which we prove always exists for the process studied here. In most cases, analytic expressions for the LCD cannot be computed but the probability distribution may be approximated by means of the stationary probability distributions of two related processes. We show how these approximations are related to the LCD of the original process and use them to study the LCD in two special cases. We also make use of the large N expansion to derive a further approximation to the LCD. The accuracy of the various approximations is then analysed. (c) 2009 Elsevier Inc. All rights reserved.
Heglund, P.J.; Nichols, J.D.; Hines, J.E.; Sauer, J.; Fallon, J.; Fallon, F.; Field, Rebecca; Warren, Robert J.; Okarma, Henryk; Sievert, Paul R.
2001-01-01
Point counts are a controversial sampling method for bird populations because the counts are not censuses, and the proportion of birds missed during counting generally is not estimated. We applied a double-observer approach to estimate detection rates of birds from point counts in Maryland, USA, and tested whether detection rates differed between point counts conducted in field habitats and those conducted in wooded habitats. We conducted two analyses. The first was based on 4 clusters of counts (routes) surveyed by a single pair of observers. A series of models with differing assumptions about sources of variation in detection probabilities was developed and fit using program SURVIV; the most appropriate model was selected using Akaike's Information Criterion. The second analysis was based on 13 routes (7 wooded and 6 field routes) surveyed by various observers, in which average detection rates were estimated by route and compared using a t-test. In both analyses, little evidence existed for variation in detection probabilities in relation to habitat. Double-observer methods provide a reasonable means of estimating detection probabilities and testing critical assumptions needed for the analysis of point counts.
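The study's models were fit in program SURVIV and are not reproduced here; the underlying idea can be illustrated with the simpler Lincoln-Petersen-style estimator for two independent observers (our simplification, not the paper's exact model):

```python
def double_observer(n1, n2, m):
    """Detection probabilities and abundance from two independent observers.
    n1, n2: totals detected by each observer; m: birds detected by both."""
    p1 = m / n2                    # observer 1's detection probability
    p2 = m / n1                    # observer 2's detection probability
    n_hat = n1 * n2 / m            # estimated birds actually present
    p_any = 1.0 - (1.0 - p1) * (1.0 - p2)  # P(detected by at least one)
    return p1, p2, n_hat, p_any
```

Comparing an estimate like p_any between field and wooded routes is the kind of habitat contrast the paper's second analysis tests with a t-test.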
NASA Astrophysics Data System (ADS)
Li, Zhanling; Li, Zhanjie; Li, Chengcheng
2014-05-01
Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Such studies have mostly concerned basins in the humid and semi-humid south and east of China, while for the inland river basins that occupy about 35% of the country's area there are few such studies, partly owing to limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peaks-over-threshold (POT) method and the Generalized Pareto Distribution (GPD), with the selection of the threshold and the inherent assumptions for the POT series elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma, are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, stability features of model parameters, the return level plot and the inherent independence assumption of the POT series, an optimum threshold of 340 m³/s is finally determined for high flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high flow extremes in the study area. The estimated high flows for long return periods demonstrate that the return level estimates become more uncertain as the return period increases. The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to 2008.
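Although the study uses maximum likelihood, a self-contained sketch of the POT/GPD workflow is simpler with method-of-moments estimates; the threshold and synthetic excesses below are illustrative, not the Yingluoxia data:

```python
import math

def gpd_fit_mom(excesses):
    """Method-of-moments GPD fit to threshold excesses -> (shape xi, scale sigma).
    Uses mean^2/var = 1 - 2*xi and mean = sigma/(1 - xi)."""
    n = len(excesses)
    m = sum(excesses) / n
    v = sum((x - m) ** 2 for x in excesses) / (n - 1)
    xi = 0.5 * (1.0 - m * m / v)
    return xi, m * (1.0 - xi)

def return_level(u, xi, sigma, exceed_per_year, years):
    """POT/GPD T-year return level above threshold u."""
    k = exceed_per_year * years        # expected exceedances in T years
    if abs(xi) < 1e-9:
        return u + sigma * math.log(k)
    return u + (sigma / xi) * (k ** xi - 1.0)

# Deterministic "sample": GPD quantiles at evenly spaced probabilities.
xi_true, sigma_true = 0.1, 1.0
qs = [(sigma_true / xi_true) * ((1.0 - (i + 0.5) / 5000) ** -xi_true - 1.0)
      for i in range(5000)]
xi_hat, sigma_hat = gpd_fit_mom(qs)
```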
Greis, Tillman; Helmholz, Kathrin; Schöniger, Hans Matthias; Haarstrick, Andreas
2012-06-01
In this study, a 3D urban groundwater model is presented which serves for the calculation of multispecies contaminant transport in the subsurface at the regional scale. The total model consists of two submodels, a groundwater flow model and a reactive transport model, and is validated against field data. The model equations are solved with the finite element method. A sensitivity analysis is carried out for parameter identification of flow, transport and reaction processes. Building on the latter, stochastic variation of flow, transport and reaction input parameters and Monte Carlo simulation are used to calculate probabilities of pollutant occurrence in the domain. These probabilities could help identify future spots of contamination and the associated extent of damage. Application and validation are exemplarily shown for a contaminated site in Braunschweig (Germany), where a vast plume of chlorinated ethenes pollutes the groundwater. With respect to field application, the methods used for modelling prove to be feasible and helpful tools for assessing monitored natural attenuation (MNA) and the risk that might be reduced by remediation actions.
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. This good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The Qprob web server, where the software is also freely available, is at http://calla.rnet.missouri.edu/qprob/.
Halliwell, J. J.
2009-12-15
In the quantization of simple cosmological models (minisuperspace models) described by the Wheeler-DeWitt equation, an important step is the construction, from the wave function, of a probability distribution answering various questions of physical interest, such as the probability of the system entering a given region of configuration space at any stage in its entire history. A standard but heuristic procedure is to use the flux of (components of) the wave function in a WKB approximation. This gives sensible semiclassical results but lacks an underlying operator formalism. In this paper, we address the issue of constructing probability distributions linked to the Wheeler-DeWitt equation using the decoherent histories approach to quantum theory. The key step is the construction of class operators characterizing questions of physical interest. Taking advantage of a recent decoherent histories analysis of the arrival time problem in nonrelativistic quantum mechanics, we show that the appropriate class operators in quantum cosmology are readily constructed using a complex potential. The class operator for not entering a region of configuration space is given by the S matrix for scattering off a complex potential localized in that region. We thus derive the class operators for entering one or more regions in configuration space. The class operators commute with the Hamiltonian, have a sensible classical limit, and are closely related to an intersection number operator. The definitions of class operators given here handle the key case in which the underlying classical system has multiple crossings of the boundaries of the regions of interest. We show that oscillatory WKB solutions to the Wheeler-DeWitt equation give approximate decoherence of histories, as do superpositions of WKB solutions, as long as the regions of configuration space are sufficiently large. The corresponding probabilities coincide, in a semiclassical approximation, with standard heuristic procedures.
P-wave complexity in normal subjects and computer models.
Potse, Mark; Lankveld, Theo A R; Zeemering, Stef; Dagnelie, Pieter C; Stehouwer, Coen D A; Henry, Ronald M; Linnenbank, André C; Kuijpers, Nico H L; Schotten, Ulrich
2016-01-01
P waves reported in electrocardiology literature uniformly appear smooth. Computer simulation and signal analysis studies have shown much more complex shapes. We systematically investigated P-wave complexity in normal volunteers using high-fidelity electrocardiographic techniques without filtering. We recorded 5-min multichannel ECGs in 16 healthy volunteers. Noise and interference were reduced by averaging over 300 beats per recording. In addition, normal P waves were simulated with a realistic model of the human atria. Measured P waves had an average of 4.1 peaks (range 1-10) that were reproducible between recordings. Simulated P waves demonstrated similar complexity, which was related to structural discontinuities in the computer model of the atria. The true shape of the P wave is very irregular and is best seen in ECGs averaged over many beats. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Finite element model updating of concrete structures based on imprecise probability
NASA Astrophysics Data System (ADS)
Biswal, S.; Ramaswamy, A.
2017-09-01
Imprecise probability based methods are developed in this study for parameter estimation in finite element model updating for concrete structures, when the measurements are imprecisely defined. Bayesian analysis using the Metropolis-Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered: (i) imprecision is present in the prior distribution and in the measurements only; (ii) imprecision is present in the parameters of the finite element model and in the measurements only; and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model into the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.
Lavé, Thierry; Caruso, Antonello; Parrott, Neil; Walz, Antje
In this review we present ways in which translational PK/PD modeling can address opportunities to enhance probability of success in drug discovery and early development. This is achieved by impacting efficacy and safety-driven attrition rates, through increased focus on the quantitative understanding and modeling of translational PK/PD. Application of the proposed principles early in the discovery and development phases is anticipated to bolster confidence of successfully evaluating proof of mechanism in humans and ultimately improve Phase II success. The present review is centered on the application of predictive modeling and simulation approaches during drug discovery and early development, and more specifically of mechanism-based PK/PD modeling. Case studies are presented, focused on the relevance of M&S contributions to real-world questions and the impact on decision making.
A formalism to generate probability distributions for performance-assessment modeling
Kaplan, P.G.
1990-12-31
A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs.
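As a concrete illustration of turning constraints into a distribution: if what is known (or estimated) reduces to a mean and variance on a bounded, rescaled parameter, a beta distribution can be matched to those constraints. This moment-matching sketch is our stand-in for the entropy-maximization machinery, whose four actual constraints the abstract does not detail:

```python
def beta_from_mean_var(mean, var):
    """Shape parameters (a, b) of a beta distribution on (0, 1) with the
    given mean and variance; feasible only if var < mean * (1 - mean)."""
    if not (0.0 < mean < 1.0) or not (0.0 < var < mean * (1.0 - mean)):
        raise ValueError("infeasible mean/variance for a beta distribution")
    nu = mean * (1.0 - mean) / var - 1.0   # total concentration a + b
    return mean * nu, (1.0 - mean) * nu
```

For example, a mean of 0.5 with variance 1/12 yields a = b = 1, the uniform distribution, which is also the maximum-entropy choice when only the bounds are binding.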
Modeling and estimation of stage-specific daily survival probabilities of nests
Stanley, T.R.
2000-01-01
In studies of avian nesting success, it is often of interest to estimate stage-specific daily survival probabilities of nests. When data can be partitioned by nesting stage (e.g., incubation stage, nestling stage), piecewise application of the Mayfield method or Johnson's method is appropriate. However, when the data contain nests where the transition from one stage to the next occurred during the interval between visits, piecewise approaches are inappropriate. In this paper, I present a model that allows joint estimation of stage-specific daily survival probabilities even when the time of transition between stages is unknown. The model allows interval lengths between visits to nests to vary, and the exact time of failure of nests does not need to be known. The performance of the model at various sample sizes and interval lengths between visits was investigated using Monte Carlo simulations, and it was found that the model performed quite well: bias was small and confidence-interval coverage was at the nominal 95% rate. A SAS program for obtaining maximum likelihood estimates of parameters, and their standard errors, is provided in the Appendix.
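For the single-stage case this paper generalizes, the classic Mayfield estimator is the usual reference point (the stage length and exposure counts below are hypothetical):

```python
def mayfield_dsr(exposure_days, failures):
    """Mayfield daily survival rate: probability of surviving one exposure day."""
    return 1.0 - failures / exposure_days

def stage_survival(dsr, stage_length_days):
    """Probability a nest survives an entire stage of the given length."""
    return dsr ** stage_length_days

# Hypothetical incubation-stage data: 250 nest-days observed, 10 failures.
dsr = mayfield_dsr(250.0, 10.0)
p_incubation = stage_survival(dsr, 14)
```

The joint model in the paper estimates a separate daily survival rate per stage and handles visits that straddle a stage transition, which this piecewise form cannot.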
Modeling the probability of arsenic in groundwater in New England as a tool for exposure assessment
Ayotte, J.D.; Nolan, B.T.; Nuckols, J.R.; Cantor, K.P.; Robinson, G.R.; Baris, D.; Hayes, L.; Karagas, M.; Bress, W.; Silverman, D.T.; Lubin, J.H.
2006-01-01
We developed a process-based model to predict the probability of arsenic exceeding 5 µg/L in drinking water wells in New England bedrock aquifers. The model is being used for exposure assessment in an epidemiologic study of bladder cancer. One important study hypothesis that may explain increased bladder cancer risk is elevated concentrations of inorganic arsenic in drinking water. In eastern New England, 20-30% of private wells exceed the arsenic drinking water standard of 10 micrograms per liter. Our predictive model significantly improves the understanding of factors associated with arsenic contamination in New England. Specific rock types, high arsenic concentrations in stream sediments, geochemical factors related to areas of Pleistocene marine inundation and proximity to intrusive granitic plutons, and hydrologic and landscape variables relating to groundwater residence time increase the probability of arsenic occurrence in groundwater. Previous studies suggest that arsenic in bedrock groundwater may be partly from past arsenical pesticide use. Variables representing historic agricultural inputs do not improve the model, indicating that this source does not significantly contribute to current arsenic concentrations. Due to the complexity of the fractured bedrock aquifers in the region, well depth and related variables also are not significant predictors. © 2006 American Chemical Society.
Empirical probability model of cold plasma environment in the Jovian magnetosphere
NASA Astrophysics Data System (ADS)
Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete
2015-04-01
We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although there exist many sophisticated radiation models treating energetic plasma (e.g., JOSE, GIRE, or Salammbo), only a limited number of simple models have been utilized for the cold plasma environment. By extending the existing cold plasma models toward the probability domain, we can predict the extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced by the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) and Bagenal and Delamere, 2011 (BD11). These models were scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model toward the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the specific percentile (e.g., 0.5 for the mean model). All the parameters in the model are described in mathematical forms; therefore the needed computational resources are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission, namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions toward the latitudinal direction are also included in the model to support the high-latitude orbit of the JUICE spacecraft.
Probability Distributions for U.S. Climate Change Using Multi-Model Ensembles
NASA Astrophysics Data System (ADS)
Preston, B. L.
2004-12-01
Projections of future climate change vary considerably among different atmosphere-ocean general circulation models (AOGCMs) and climate forcing scenarios, and thus understanding of future climate change and its consequences is highly dependent upon the range of models and scenarios taken into consideration. To compensate for this limitation, a number of authors have proposed using multi-model ensembles to develop mean or probabilistic projections of future climate conditions. Here, a simple climate model (MAGICC/SCENGEN) was used to project future seasonal and annual changes in coterminous U.S. temperature and precipitation in 2025, 2050, and 2100 using seven AOGCMs (CSIRO, CSM, ECHM4, GFDL, HADCM2, HADCM3, PCM) and the Intergovernmental Panel on Climate Change's six SRES marker scenarios. Model results were used to calculate cumulative probability distributions for temperature and precipitation changes. Different weighting schemes were applied to the AOGCM results, reflecting different assumptions about the relative likelihood of different models and forcing scenarios. EQUAL results were unweighted, while SENS and REA results were weighted by climate sensitivity and model performance, respectively. For each of these assumptions, additional results were also generated using weighted forcing scenarios (SCENARIO), for a total of six probability distributions for each season and time period. Average median temperature and precipitation changes in 2100 among the probability distributions were +3.4°C (1.6-6.6°C) and +2.4% (-1.3-10%), respectively. Greater warming was projected for June, July, and August (JJA) relative to other seasons, and modest decreases in precipitation were projected for JJA while modest increases were projected for other seasons. The EQUAL and SENS distributions were quite similar, while REA distributions were significantly constrained in comparison. Weighting of forcing scenarios reduced the upper 95% confidence limit for temperature and
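A weighted empirical CDF over ensemble projections, from which medians and confidence limits of this kind are read off, can be sketched as follows (the projection values and the EQUAL weighting here are illustrative placeholders, not the paper's data):

```python
# Hypothetical 2100 warming projections (deg C) from seven AOGCMs -- illustrative only
projections = [1.6, 2.4, 2.9, 3.4, 4.1, 5.2, 6.6]
weights = [1.0 / 7] * 7                 # EQUAL scheme: all models weighted equally

def weighted_quantile(values, weights, p):
    """Smallest value whose cumulative weight reaches probability p."""
    pairs = sorted(zip(values, weights))
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= p - 1e-12:
            return v
    return pairs[-1][0]

median = weighted_quantile(projections, weights, 0.5)   # -> 3.4
```

A SENS- or REA-style scheme would simply replace the uniform weights with normalized sensitivity- or performance-based weights before computing the same quantiles.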
Modelling the probability of ionospheric irregularity occurrence over African low latitude region
NASA Astrophysics Data System (ADS)
Mungufeni, Patrick; Jurua, Edward; Bosco Habarulema, John; Anguma Katrini, Simon
2015-06-01
This study presents models of geomagnetically quiet time probability of occurrence of ionospheric irregularities over the African low latitude region. GNSS-derived ionospheric total electron content data from Mbarara, Uganda (0.60°S, 30.74°E, geographic, 10.22°S, magnetic) and Libreville, Gabon (0.35°N, 9.68°E, geographic, 8.05°S, magnetic) during the period 2001-2012 were used. First, we established the rate of change of total electron content index (ROTI) value associated with background ionospheric irregularity over the region. This was done by analysing GNSS carrier phases at the L-band frequencies L1 and L2 with the aim of identifying cycle slip events associated with ionospheric irregularities. We identified a total of 699 cycle slip events at the two stations. The corresponding median ROTI value at the epochs of the cycle slip events was 0.54 TECU/min. The probability of occurrence of ionospheric irregularities associated with ROTI ≥ 0.5 TECU/min was then modelled by fitting cubic B-splines to the data. The aspects captured by the model include the diurnal, seasonal, and solar flux dependence patterns of the probability of occurrence of ionospheric irregularities. The model developed over Mbarara was validated with data over Mt. Baker, Uganda (0.35°N, 29.90°E, geographic, 9.25°S, magnetic), Kigali, Rwanda (1.94°S, 30.09°E, geographic, 11.62°S, magnetic), and Kampala, Uganda (0.34°N, 32.60°E, geographic, 9.29°S, magnetic). For the periods validated at Mt. Baker (approximately 137.64 km north west), Kigali (approximately 162.42 km south west), and Kampala (approximately 237.61 km north east), the percentages of errors (the difference between the observed and the modelled probability of occurrence of ionospheric irregularity) less than 0.05 are 97.3, 89.4, and 81.3, respectively.
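The ROTI statistic underlying the model is the standard deviation of the rate of change of TEC over a short sliding window; a minimal sketch (30 s sampling and a 10-sample, 5 min window are common conventions assumed here, not restated by the abstract):

```python
import math

def roti(tec, dt_minutes=0.5, window=10):
    """Rate-of-change-of-TEC index.

    tec        : sequence of TEC values in TECU (one per epoch)
    dt_minutes : sampling interval (0.5 min = 30 s sampling assumed)
    window     : number of ROT samples per ROTI value (10 -> 5 min)

    Returns the ROTI series in TECU/min, one value per full window.
    """
    # ROT: first difference of TEC per minute
    rot = [(tec[i + 1] - tec[i]) / dt_minutes for i in range(len(tec) - 1)]
    out = []
    for i in range(len(rot) - window + 1):
        w = rot[i:i + window]
        m = sum(w) / window
        out.append(math.sqrt(sum((x - m) ** 2 for x in w) / window))
    return out
```

Epochs where this series exceeds the 0.5 TECU/min threshold would then be flagged as irregularity occurrences before fitting the B-spline probability model.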
Bigaussian Wavefront Model for Normal and Keratoconic Eyes.
Rozema, Jos J; Rodríguez, Pablo; Navarro, Rafael; Koppen, Carina
2017-06-01
To report bigaussian multivariate wavefront models capable of stochastically generating an unlimited amount of plausible wavefront data for either normal or keratoconic eyes. The models use centroid wavefront data measured previously with an iTrace in 330 healthy right eyes and 122 keratoconic right eyes. These centroids were fitted to an 11th-order Zernike series, followed by principal component analysis to reduce dimensionality and remove correlations. The remaining parameters were then fitted to a sum of two multivariate Gaussian distributions. This fit forms the core of the stochastic model, which may be used to generate synthetic data. Finally, the agreement between the original and synthetic data was tested using two one-sided t tests. For normal eyes, the first eigenvectors mostly represent pure Zernike polynomials, with a decreasing degree of purity with increasing order. For keratoconic eyes, eigenvector purity was considerably lower than for normal eyes. Depending on the data set, series of 22 to 29 eigenvectors were found sufficient for accurate wavefront reconstruction (i.e., root-mean-square errors below 0.05 μm). These eigenvectors were then used as a basis for the stochastic models. In all models and all Zernike coefficients, the mean of the synthetic data was statistically equivalent to that of the original data (two one-sided t tests, P > .05/75), but the variability of the synthetic data is often significantly lower (F test, P < .05/75). This synthetic wavefront model may be safely used in calculations as an alternative to actual measurements should such data not be available.
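The generation step of such a model, drawing a synthetic coefficient vector from a two-component multivariate Gaussian mixture, can be sketched as follows (the mixture parameters and the use of pre-computed lower-triangular Cholesky factors of the covariances are assumptions of this sketch, not details from the paper):

```python
import random

def sample_bigaussian(means, covs_chol, weights, rng):
    """Draw one sample from a two-component multivariate Gaussian mixture.

    means     : two mean vectors (lists of length d)
    covs_chol : lower-triangular Cholesky factors of the two covariance
                matrices, as lists of lists
    weights   : mixing weights (w0, w1), summing to 1
    """
    # pick a component, then transform a standard-normal draw
    k = 0 if rng.random() < weights[0] else 1
    mu, L = means[k], covs_chol[k]
    d = len(mu)
    z = [rng.gauss(0.0, 1.0) for _ in range(d)]
    return [mu[i] + sum(L[i][j] * z[j] for j in range(i + 1)) for i in range(d)]
```

In the paper's pipeline the sampled vector would live in the reduced principal-component space and be mapped back through the eigenvectors to Zernike coefficients.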
Arnold, Nina R; Bayen, Ute J; Kuhlmann, Beatrice G; Vaterrodt, Bianca
2013-04-01
According to the probability-matching account of source guessing (Spaniol & Bayen, Journal of Experimental Psychology: Learning, Memory, and Cognition 28:631-651, 2002), when people do not remember the source of an item in a source-monitoring task, they match the source-guessing probabilities to the perceived contingencies between sources and item types. In a source-monitoring experiment, half of the items presented by each of two sources were consistent with schematic expectations about this source, whereas the other half of the items were consistent with schematic expectations about the other source. Participants' source schemas were activated either at the time of encoding or just before the source-monitoring test. After test, the participants judged the contingency of the item type and source. Individual parameter estimates of source guessing were obtained via beta-multinomial processing tree modeling (beta-MPT; Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010). We found a significant correlation between the perceived contingency and source guessing, as well as a correlation between the deviation of the guessing bias from the true contingency and source memory when participants did not receive the schema information until retrieval. These findings support the probability-matching account.
Fixation Probability in a Two-Locus Model by the Ancestral Recombination–Selection Graph
Lessard, Sabin; Kermany, Amir R.
2012-01-01
We use the ancestral influence graph (AIG) for a two-locus, two-allele selection model in the limit of a large population size to obtain an analytic approximation for the probability of ultimate fixation of a single mutant allele A. We assume that this new mutant is introduced at a given locus into a finite population in which a previous mutant allele B is already segregating with a wild type at another linked locus. We deduce that the fixation probability increases as the recombination rate increases if allele A is either in positive epistatic interaction with B and allele B is beneficial or in no epistatic interaction with B and then allele A itself is beneficial. This holds at least as long as the recombination fraction and the selection intensity are small enough and the population size is large enough. In particular this confirms the Hill–Robertson effect, which predicts that recombination renders more likely the ultimate fixation of beneficial mutants at different loci in a population in the presence of random genetic drift even in the absence of epistasis. More importantly, we show that this is true from weak negative epistasis to positive epistasis, at least under weak selection. In the case of deleterious mutants, the fixation probability decreases as the recombination rate increases. This supports Muller’s ratchet mechanism to explain the accumulation of deleterious mutants in a population lacking recombination. PMID:22095080
NASA Astrophysics Data System (ADS)
Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan
2016-05-01
Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, which include associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating procedure, a novel approach to obtain coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which can provide lower and upper bounds for the purpose of quantifying and transmitting the uncertainty of structural parameters. The structural health monitoring data of Guanhe Bridge, a composite cable-stayed bridge with large span, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy: the overlap ratio index of each modal frequency is over 89% with a small average absolute value of relative errors, and the CDF of the normal distribution coincides well with the measured frequencies of Guanhe Bridge. The validated multi-scale FE model may be further used in structural damage prognosis and safety prognosis.
Syntactic error modeling and scoring normalization in speech recognition
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex
1991-01-01
The objective was to develop the speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Research was performed in the following areas: (1) syntactic error modeling; (2) score normalization; and (3) phoneme error modeling. The study into the types of errors that a reader makes will provide the basis for creating tests which will approximate the use of the system in the real world. NASA-Johnson will develop this technology into a 'Literacy Tutor' in order to bring innovative concepts to the task of teaching adults to read.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Potish, R.A.; Boen, J.; Jones, T.K. Jr.; Levitt, S.H.
1981-07-01
In order to predict radiation-related enteric damage, 92 women were studied who had received identical radiation doses for cancer of the ovary from 1970 through 1977. A logistic model was used to predict the probability of complication as a function of number of laparotomies, hypertension, and thin physique. The utility and limitations of such probability models are presented.
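A logistic model of this form maps a linear combination of risk factors to a complication probability; a minimal sketch (the coefficients below are illustrative placeholders, not the fitted values from the study):

```python
import math

def complication_probability(n_laparotomies, hypertension, thin, beta):
    """Logistic NTCP-style model:
    P = 1 / (1 + exp(-(b0 + b1*laparotomies + b2*hypertension + b3*thin))).

    hypertension and thin are 0/1 indicators; beta = (b0, b1, b2, b3)
    holds placeholder coefficients, not the study's estimates.
    """
    x = (1.0, float(n_laparotomies), float(hypertension), float(thin))
    eta = sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-eta))
```

With positive coefficients, each additional laparotomy, hypertension, or thin physique raises the predicted probability monotonically, which is the behavior the abstract describes.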
Investigation of probability theory on Ising models with different four-spin interactions
NASA Astrophysics Data System (ADS)
Yang, Yuming; Teng, Baohua; Yang, Hongchun; Cui, Haijuan
2017-10-01
Based on probability theory, two types of three-dimensional Ising models with different four-spin interactions are studied. First the partition function of the system is calculated by considering the local correlation of spins in a given configuration, and then the properties of the phase transition are quantitatively discussed with a series expansion technique and a numerical method. Meanwhile, the rounding errors in this calculation are analyzed, so that the possible source of error in the calculation based on mean field theory is pointed out.
Duffy, Stephen
2013-09-09
This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software algorithm currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.
Exact results for the probability and stochastic dynamics of fixation in the Wright-Fisher model.
Shafiey, Hassan; Waxman, David
2017-10-07
In this work we consider fixation of an allele in a population. Fixation is key to understanding the way long-term evolutionary change occurs at the gene and molecular levels. Two basic aspects of fixation are: (i) the chance it occurs and (ii) the way the gene frequency progresses to fixation. We present exact results for both aspects of fixation for the Wright-Fisher model. We give the exact fixation probability for some different schemes of frequency-dependent selection. We also give the corresponding exact stochastic difference equation that generates frequency trajectories which ultimately fix. Exactness of the results means selection need not be weak. There are possible applications of this work to data analysis, modelling, and tests of approximations. The methodology employed illustrates that knowledge of the fixation probability, for all initial frequencies, fully characterises the dynamics of the Wright-Fisher model. The stochastic equations for fixing trajectories allow insight into the way fixation occurs. They provide the alternative picture that fixation is driven by the injection of one carrier of the fixing allele into the population each generation. The stochastic equations allow explicit calculation of some properties of fixing trajectories and their efficient simulation. The results are illustrated and tested with simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
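Exact results like these are naturally checked against direct simulation; a minimal Monte Carlo sketch of fixation probability in the Wright-Fisher model (genic selection is used here as one simple selection scheme for illustration, not the paper's frequency-dependent schemes):

```python
import random

def wright_fisher_fixation(N, p0, s=0.0, reps=2000, rng=None):
    """Monte Carlo estimate of the fixation probability of an allele.

    N    : number of gene copies per generation
    p0   : initial allele frequency
    s    : genic selection coefficient (s = 0 is the neutral case,
           where the fixation probability equals p0)
    """
    rng = rng or random.Random(0)
    fixed = 0
    for _ in range(reps):
        p = p0
        while 0.0 < p < 1.0:
            # deterministic selection step, then binomial resampling
            p_sel = p * (1 + s) / (1 + s * p)
            i = sum(1 for _ in range(N) if rng.random() < p_sel)
            p = i / N
        fixed += (p == 1.0)
    return fixed / reps
```

Running many replicates until absorption at frequency 0 or 1 also yields the fixing trajectories whose dynamics the paper characterises exactly.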
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell for the large scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
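For negative-binomial counts in cells, the void probability has a simple closed form; the parameterisation below is one common convention and is an assumption of this sketch, not necessarily the paper's extension:

```python
import math

def void_probability_nb(nbar, g):
    """Void probability P0 for negative-binomial counts in cells.

    nbar : mean count per cell
    g    : clustering parameter; g -> 0 recovers the Poisson limit
           P0 = exp(-nbar), while larger g gives emptier cells.
    """
    return (1.0 + g * nbar) ** (-1.0 / g)
```

Comparing measured P0 across cells of fixed volume but different elongation is exactly the kind of shape-dependence test the abstract describes.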
NASA Astrophysics Data System (ADS)
Li, Qi-Lang; Wong, S. C.; Min, Jie; Tian, Shuo; Wang, Bing-Hong
2016-08-01
This study examines the cellular automata traffic flow model, which considers the heterogeneity of vehicle acceleration and the delay probability of vehicles. Computer simulations are used to identify three typical phases in the model: free-flow, synchronized flow, and wide moving traffic jam. In the synchronized flow region of the fundamental diagram, the low and high velocity vehicles compete with each other and play an important role in the evolution of the system. The analysis shows that there are two types of bistable phases. However, in the original Nagel and Schreckenberg cellular automata traffic model, there are only two kinds of traffic conditions, namely, free-flow and traffic jams. The synchronized flow phase and bistable phase have not been found.
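The baseline Nagel-Schreckenberg update rule referred to above (without the paper's heterogeneous-acceleration and delay-probability extensions) can be sketched as:

```python
import random

def nasch_step(pos, vel, L, vmax, p, rng):
    """One parallel update of the Nagel-Schreckenberg model on a ring.

    pos, vel : parallel lists of car positions (cells on a ring of L
               cells) and velocities; p is the random-slowdown probability.
    Returns the new (positions, velocities).
    """
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for idx, i in enumerate(order):
        gap = (pos[order[(idx + 1) % n]] - pos[i]) % L
        if gap == 0:                      # only one car on the ring
            gap = L
        v = min(vel[i] + 1, vmax)         # 1. accelerate
        v = min(v, gap - 1)               # 2. brake: keep one empty cell ahead
        if v > 0 and rng.random() < p:    # 3. random slowdown
            v -= 1
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % L     # 4. move
    return new_pos, new_vel
```

In this baseline model only free flow and jams emerge; the paper's point is that adding acceleration heterogeneity and delay probability produces the additional synchronized-flow and bistable phases.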
NASA Astrophysics Data System (ADS)
Kondoh, Hiroshi; Matsushita, Mitsugu
1986-10-01
A diffusion-limited aggregation (DLA) model with anisotropic sticking probability Ps is computer-simulated on a two-dimensional square lattice. The cluster grows from a seed particle at the origin in the positive y area with an absorption-type boundary along the x-axis. The cluster is found to grow anisotropically as R∥ ~ N^(ν∥) and R⊥ ~ N^(ν⊥), where R⊥ and R∥ are the radii of gyration of the cluster along the x- and y-axes, respectively, and N is the number of particles constituting the cluster. The two exponents are shown to become asymptotically ν∥ = 2/3 and ν⊥ = 1/3 whenever the sticking anisotropy exists. It is also found that the present model is fairly consistent with Hack's law of river networks, suggesting that it is a good candidate for a prototype model for the evolution of the river network.
Transition probability estimates for non-Markov multi-state models.
Titman, Andrew C
2015-12-01
Non-parametric estimation of the transition probabilities in multi-state models is considered for non-Markov processes. Firstly, a generalization of the estimator of Pepe et al. (1991, Statistics in Medicine) is given for a class of progressive multi-state models, based on the difference between Kaplan-Meier estimators. Secondly, a general estimator for progressive or non-progressive models is proposed, based upon constructed univariate survival or competing risks processes which retain the Markov property. The properties of the estimators and their associated standard errors are investigated through simulation. The estimators are demonstrated on datasets relating to survival and recurrence in patients with colon cancer and prothrombin levels in liver cirrhosis patients.
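The building block of the first estimator is the Kaplan-Meier survival curve; in a progressive illness-death model, the probability of occupying the intermediate state is estimated as the difference of two such curves. A minimal sketch (ties between events and censorings at the same time are handled naively here):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from (time, event) pairs.

    events[i] = 1 if the transition was observed at times[i],
                0 if the observation was censored.
    Returns a list of (event_time, S(t)) pairs at each event time.
    """
    at_risk = len(times)
    s = 1.0
    curve = []
    for t, d in sorted(zip(times, events)):
        if d:
            s *= 1.0 - 1.0 / at_risk   # multiply by conditional survival
            curve.append((t, s))
        at_risk -= 1
    return curve
```

Subtracting the curve for leaving the initial state from the curve for reaching the final state gives the Pepe-style occupation-probability estimate for the intermediate state.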
Normal Versus Noncentral Chi-square Asymptotics of Misspecified Models.
Chun, So Yeon; Shapiro, Alexander
2009-11-30
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main goal of this article is to evaluate the validity of employing these distributions in practice. Monte Carlo simulation results indicate that the noncentral chi-square distribution describes behavior of the LR test statistic well under small, moderate, and even severe misspecifications regardless of the sample size (as long as it is sufficiently large), whereas the normal distribution, with a bias correction, gives a slightly better approximation for extremely severe misspecifications. However, neither the noncentral chi-square distribution nor the theoretical normal distributions give a reasonable approximation of the LR test statistics under extremely severe misspecifications. Of course, extremely misspecified models are not of much practical interest. We also use the Thurstone data (Thurstone & Thurstone, 1941) from a classic study of mental ability for our illustration.
Classical signal model reproducing quantum probabilities for single and coincidence detections
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei; Nilsson, Börje; Nordebo, Sven
2012-05-01
We present a simple classical (random) signal model reproducing Born's rule. The crucial point of our approach is that the presence of a detector threshold and a calibration procedure has to be treated not as mere experimental technicalities, but as basic counterparts of the theoretical model. We call this approach the threshold signal detection (TSD) model. The experiment on coincidence detection performed by Grangier in 1986 [22] played a crucial role in the rejection of (semi-)classical field models in favour of quantum mechanics (QM): the impossibility of resolving the wave-particle duality in favour of a purely wave model. QM predicts that the relative probability of coincidence detection, the coefficient g^(2)(0), is zero (for one-photon states), but in (semi-)classical models g^(2)(0) >= 1. In TSD the coefficient g^(2)(0) decreases as 1/ε_d^2, where ε_d > 0 is the detection threshold. Hence, by increasing this threshold an experimenter can make the coefficient g^(2)(0) essentially less than 1. The TSD prediction can be tested experimentally in new Grangier-type experiments presenting a detailed monitoring of the dependence of the coefficient g^(2)(0) on the detection threshold. Structurally our model has some similarity with the prequantum model of Grossing et al. Subquantum stochasticity is composed of two counterparts: a stationary process in the space of internal degrees of freedom and a random-walk type motion describing the temporal dynamics.
A new probability distribution model of turbulent irradiance based on Born perturbation theory
NASA Astrophysics Data System (ADS)
Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo
2010-10-01
The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical description in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distribution) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which denies the viewpoint that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. So, it is considered that the new model can exactly reflect the Born perturbation theory. Simulated results prove the accuracy of this new model.
NASA Astrophysics Data System (ADS)
James, P.
2011-12-01
With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well-known methods rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance
Simulation of reactive nanolaminates using reduced models: II. Normal propagation
Salloum, Maher; Knio, Omar M.
2010-03-15
Transient normal flame propagation in reactive Ni/Al multilayers is analyzed computationally. Two approaches are implemented, based on generalization of earlier methodology developed for axial propagation, and on extension of the model reduction formalism introduced in Part I. In both cases, the formulation accommodates non-uniform layering as well as the presence of inert layers. The equations of motion for the reactive system are integrated using a specially-tailored integration scheme that combines extended-stability Runge-Kutta-Chebychev (RKC) integration of diffusion terms with exact treatment of the chemical source term. The detailed and reduced models are first applied to the analysis of self-propagating fronts in uniformly-layered materials. Results indicate that both the front velocities and the ignition threshold are comparable for normal and axial propagation. Attention is then focused on analyzing the effect of a gap composed of inert material on reaction propagation. In particular, the impacts of gap width and thermal conductivity are briefly addressed. Finally, an example is considered illustrating reaction propagation in reactive composites combining regions corresponding to two bilayer widths. This setup is used to analyze the effect of the layering frequency on the velocity of the corresponding reaction fronts. In all cases considered, good agreement is observed between the predictions of the detailed model and the reduced model, which provides further support for adoption of the latter.
The coupon collector urn model with unequal probabilities in ecology and evolution.
Zoroa, N; Lesigne, E; Fernández-Sáez, M J; Zoroa, P; Casas, J
2017-02-01
The sequential sampling of populations with unequal probabilities and with replacement in a closed population is a recurrent problem in ecology and evolution. Examples range from biodiversity sampling and epidemiology to the estimation of signal repertoire in animal communication. Many of these questions can be reformulated as urn problems, often as special cases of the coupon collector problem, most simply expressed as the number of coupons that must be collected to have a complete set. We aimed to apply the coupon collector model in a comprehensive manner to one example, hosts (balls) being searched (draws) and parasitized (ball colour change) by parasitic wasps, to evaluate the influence of differences in sampling probabilities between items on collection speed. Based on the model of a complete multinomial process over time, we define the distribution, distribution function, expectation and variance of the number of hosts parasitized after a given time, as well as the inverse problem, estimating the sampling effort. We develop the relationship between the risk distribution on the set of hosts and the speed of parasitization and propose a more elegant proof of the weak stochastic dominance among speeds of parasitization, using the concept of Schur convexity and the 'Robin Hood transfer' numerical operation. Numerical examples are provided and a conjecture about strong dominance, an ordering characteristic of random variables, is proposed. The speed at which new items are discovered is a function of the entire shape of the sampling probability distribution. The sole comparison of values of variances is not sufficient to compare speeds associated with different distributions, as generally assumed in ecological studies. © 2017 The Author(s).
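For the classical unequal-probability coupon collector, the expected number of draws to collect every item has an exact inclusion-exclusion form; a sketch (exponential in the number of item types, so practical only for small collections):

```python
from itertools import combinations

def expected_draws(p):
    """Expected number of draws (with replacement) until every coupon
    type has been seen at least once, when type i appears with
    probability p[i]; exact inclusion-exclusion formula."""
    n = len(p)
    total = 0.0
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            total += (-1) ** (k + 1) / sum(p[i] for i in subset)
    return total
```

The equal-probability case recovers the textbook n·H_n result (for three coupons, 5.5 draws), and any unequal distribution takes longer, which is the shape-dependence of collection speed the abstract emphasises.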
Predicting Mortality in Low-Income Country ICUs: The Rwanda Mortality Probability Model (R-MPM)
Kiviri, Willy; Fowler, Robert A.; Mueller, Ariel; Novack, Victor; Banner-Goodspeed, Valerie M.; Weinkauf, Julia L.; Talmor, Daniel S.; Twagirumugabe, Theogene
2016-01-01
Introduction: Intensive Care Unit (ICU) risk prediction models are used to compare outcomes for quality improvement initiatives, benchmarking, and research. While such models provide robust tools in high-income countries, an ICU risk prediction model has not been validated in a low-income country where ICU population characteristics are different from those in high-income countries, and where laboratory-based patient data are often unavailable. We sought to validate the Mortality Probability Admission Model, version III (MPM0-III) in two public ICUs in Rwanda and to develop a new Rwanda Mortality Probability Model (R-MPM) for use in low-income countries. Methods: We prospectively collected data on all adult patients admitted to Rwanda’s two public ICUs between August 19, 2013 and October 6, 2014. We described demographic and presenting characteristics and outcomes. We assessed the discrimination and calibration of the MPM0-III model. Using stepwise selection, we developed a new logistic model for risk prediction, the R-MPM, and used bootstrapping techniques to test for optimism in the model. Results: Among 427 consecutive adults, the median age was 34 (IQR 25–47) years and mortality was 48.7%. Mechanical ventilation was initiated for 85.3%, and 41.9% received vasopressors. The MPM0-III predicted mortality with area under the receiver operating characteristic curve of 0.72 and Hosmer-Lemeshow chi-square statistic p = 0.024. We developed a new model using five variables: age, suspected or confirmed infection within 24 hours of ICU admission, hypotension or shock as a reason for ICU admission, Glasgow Coma Scale score at ICU admission, and heart rate at ICU admission. Using these five variables, the R-MPM predicted outcomes with area under the ROC curve of 0.81 with 95% confidence interval of (0.77, 0.86), and Hosmer-Lemeshow chi-square statistic p = 0.154. Conclusions: The MPM0-III has modest ability to predict mortality in a population of Rwandan ICU patients. The R
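The R-MPM is a logistic model in the five listed admission variables; the abstract does not report the fitted coefficients, so the weights in this sketch are invented placeholders, shown only to illustrate how such a risk score would be evaluated:

```python
import math

# Hypothetical coefficients: the abstract names the five R-MPM predictors
# but not the fitted weights, so these values are placeholders only.
INTERCEPT = -4.0
COEF = {
    "age": 0.02,               # per year
    "infection": 0.9,          # suspected/confirmed infection within 24 h (0/1)
    "hypotension_shock": 1.1,  # hypotension or shock as admission reason (0/1)
    "gcs": -0.25,              # Glasgow Coma Scale score at admission (3-15)
    "heart_rate": 0.015,       # beats per minute at admission
}

def predicted_mortality(patient):
    """Logistic risk score: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = INTERCEPT + sum(COEF[k] * patient[k] for k in COEF)
    return 1.0 / (1.0 + math.exp(-z))

p = predicted_mortality({"age": 34, "infection": 1, "hypotension_shock": 0,
                         "gcs": 7, "heart_rate": 110})
print(p)  # ≈ 0.07 with these placeholder weights
```

With the actual fitted coefficients, the same sigmoid evaluation is what yields the published discrimination (AUC 0.81).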
SAR amplitude probability density function estimation based on a generalized Gaussian model.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B
2006-06-01
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena.
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient R package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities.
Rooney, Katherine E; Wallace, Lane J
2015-11-01
Dopamine in the striatum signals the saliency of current environmental input and is involved in learned formation of appropriate responses. The regular baseline-firing rate of dopaminergic neurons suggests that baseline dopamine is essential for proper brain function. The first goal of the study was to estimate the likelihood of full exocytotic dopamine release associated with each firing event under baseline conditions. A computer model of extracellular space associated with a single varicosity was developed using the program MCell to estimate kinetics of extracellular dopamine. Because the literature provides multiple kinetic values for dopamine uptake depending on the system tested, simulations were run using different kinetic parameters. With all sets of kinetic parameters evaluated, at most, 25% of a single vesicle per varicosity would need to be released per firing event to maintain a 5-10 nM extracellular dopamine concentration, the level reported by multiple microdialysis experiments. The second goal was to estimate the fraction of total amount of stored dopamine released during a highly stimulated condition. This was done using the same model system to simulate published measurements of extracellular dopamine following electrical stimulation of striatal slices in vitro. The results suggest the amount of dopamine release induced by a single electrical stimulation may be as large as the contents of two vesicles per varicosity. We conclude that dopamine release probability at any particular varicosity is low. This suggests that factors capable of increasing release probability could have a powerful effect on sculpting dopamine signals.
Zitnick-Anderson, Kimberly K; Norland, Jack E; Del Río Mendoza, Luis E; Fortuna, Ann-Marie; Nelson, Berlin D
2017-04-06
Associations between soil properties and Pythium groups on soybean roots were investigated in 83 commercial soybean fields in North Dakota. A data set containing 2877 isolates of Pythium which included 26 known spp. and 1 unknown spp. and 13 soil properties from each field were analyzed. A Pearson correlation analysis was performed with all soil properties to observe any significant correlation between properties. Hierarchical clustering, indicator spp., and multi-response permutation procedures were used to identify groups of Pythium. Logistic regression analysis using stepwise selection was employed to calculate probability models for presence of groups based on soil properties. Three major Pythium groups were identified and three soil properties were associated with these groups. Group 1, characterized by P. ultimum, was associated with zinc levels; as zinc increased, the probability of group 1 being present increased (α = 0.05). Pythium group 2, characterized by Pythium kashmirense and an unknown Pythium sp., was associated with cation exchange capacity (CEC) (α < 0.05); as CEC increased, these spp. increased. Group 3, characterized by Pythium heterothallicum and Pythium irregulare, were associated with CEC and calcium carbonate exchange (CCE); as CCE increased and CEC decreased, these spp. increased (α = 0.05). The regression models may have value in predicting pathogenic Pythium spp. in soybean fields in North Dakota and adjacent states.
Huang, Yangxin; Chen, Jiaqing; Yin, Ping
2017-02-01
It is common practice to analyze longitudinal data, which arise frequently in medical studies, using various mixed-effects models in the literature. However, the following issues may stand out in longitudinal data analysis: (i) in clinical practice, the profile of each subject's response from a longitudinal study may follow a "broken stick"-like trajectory, indicating multiple phases of increase, decline and/or stability in response. Such multiple phases (with changepoints) may be an important indicator to help quantify treatment effect and improve management of patient care. Estimating changepoints within the various mixed-effects models is a challenge due to the complicated structures of the model formulations; (ii) an assumption of a homogeneous population may be unrealistic, obscuring important features of between-subject and within-subject variations; (iii) a normality assumption for model errors may not always give robust and reliable results, in particular if the data exhibit non-normality; and (iv) the response may be missing and the missingness may be non-ignorable. In the literature, there has been considerable interest in accommodating heterogeneity, non-normality or missingness in such models, but relatively little work concerning all of these features simultaneously. There is a need to fill this gap, as longitudinal data often have these characteristics. In this article, our objectives are to study the simultaneous impact of these data features by developing a Bayesian mixture modeling approach based on Finite Mixture of Changepoint (piecewise) Mixed-Effects (FMCME) models with skew distributions, allowing estimates of both model parameters and class membership probabilities at the population and individual levels. Simulation studies are conducted to assess the performance of the proposed method, and an AIDS clinical data example is analyzed to demonstrate the proposed methodologies and to compare the modeling results of potential mixture models.
Physical models for the normal YORP and diurnal Yarkovsky effects
NASA Astrophysics Data System (ADS)
Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.
2016-06-01
We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.
Neurophysiological model of the normal and abnormal human pupil
NASA Technical Reports Server (NTRS)
Krenz, W.; Robin, M.; Barez, S.; Stark, L.
1985-01-01
Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select pupil condition (normal or an abnormality), specific site along the neurological pathway (retina, hypothalamus, etc.), drug class input (barbiturate, narcotic, etc.), stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.
Fakir, Hatim; Hlatky, Lynn; Li, Huamin; Sachs, Rainer
2013-12-15
Purpose: Optimal treatment planning for fractionated external beam radiation therapy requires inputs from radiobiology based on recent thinking about the “five Rs” (repopulation, radiosensitivity, reoxygenation, redistribution, and repair). The need is especially acute for the newer, often individualized, protocols made feasible by progress in image guided radiation therapy and dose conformity. Current stochastic tumor control probability (TCP) models incorporating tumor repopulation effects consider “stem-like cancer cells” (SLCC) to be independent, but the authors here propose that SLCC-SLCC interactions may be significant. The authors present a new stochastic TCP model for repopulating SLCC interacting within microenvironmental niches. Our approach is meant mainly for comparing similar protocols. It aims at practical generalizations of previous mathematical models. Methods: The authors consider protocols with complete sublethal damage repair between fractions. The authors use customized open-source software and recent mathematical approaches from stochastic process theory for calculating the time-dependent SLCC number and thereby estimating SLCC eradication probabilities. As specific numerical examples, the authors consider predicted TCP results for a 2 Gy per fraction, 60 Gy protocol compared to 64 Gy protocols involving early or late boosts in a limited volume to some fractions. Results: In sample calculations with linear quadratic parameters α = 0.3 per Gy, α/β = 10 Gy, boosting is predicted to raise TCP from a dismal 14.5% observed in some older protocols for advanced NSCLC to above 70%. This prediction is robust as regards: (a) the assumed values of parameters other than α and (b) the choice of models for intraniche SLCC-SLCC interactions. However, α = 0.03 per Gy leads to a prediction of almost no improvement when boosting. Conclusions: The predicted efficacy of moderate boosts depends sensitively on α. Presumably, the larger values of α are
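For orientation, the baseline that stochastic TCP models of this kind generalize is the Poisson TCP with linear-quadratic cell survival and complete inter-fraction repair, TCP = exp(-N·S) with S = exp(-n(αd + βd²)). The sketch below uses the abstract's α = 0.3 per Gy and α/β = 10 Gy but an assumed clonogen number N, and deliberately omits repopulation and the SLCC-SLCC niche interactions that are the paper's actual subject:

```python
import math

def lq_survival(dose_per_fraction, n_fractions, alpha=0.3, alpha_beta=10.0):
    """Linear-quadratic surviving fraction after n fractions with full
    inter-fraction repair: S = exp(-n * (alpha*d + beta*d^2))."""
    beta = alpha / alpha_beta
    d = dose_per_fraction
    return math.exp(-n_fractions * (alpha * d + beta * d * d))

def poisson_tcp(n_clonogens, dose_per_fraction, n_fractions, **kw):
    """Poisson tumour control probability: TCP = exp(-N * S)."""
    return math.exp(-n_clonogens * lq_survival(dose_per_fraction, n_fractions, **kw))

# 30 x 2 Gy (60 Gy) vs 32 x 2 Gy (64 Gy); N = 1e9 is an assumed clonogen
# number chosen purely for illustration.
tcp_60 = poisson_tcp(1e9, 2.0, 30)  # ≈ 0.66
tcp_64 = poisson_tcp(1e9, 2.0, 32)  # ≈ 0.91
```

Even in this simplified setting, a modest dose escalation produces a large TCP gain when α is large, mirroring the sensitivity to α reported in the abstract.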
Mu, Chun-sun; Zhang, Ping; Kong, Chun-yan; Li, Yang-ning
2015-09-01
To study the application of a Bayes probability model in differentiating yin and yang jaundice syndromes in neonates. A total of 107 neonates with jaundice admitted to hospital within 10 days of birth were assigned to two groups according to syndrome differentiation: 68 to the yang jaundice syndrome group and 39 to the yin jaundice syndrome group. Data collected for the neonates were factors related to jaundice before, during and after birth. Blood routines, liver and renal functions, and myocardial enzymes were tested on the admission day or the next day. A logistic regression model and Bayes discriminant analysis were used to screen factors important for yin and yang jaundice syndrome differentiation. Finally, a Bayes probability model for yin and yang jaundice syndromes was established and assessed. Factors important for differentiation screened by the logistic regression model and Bayes discriminant analysis included mother's age, gestational diabetes mellitus (GDM), gestational age, asphyxia, ABO hemolytic disease, red blood cell distribution width (RDW-SD), platelet-large cell ratio (P-LCR), serum direct bilirubin (DBIL), alkaline phosphatase (ALP), and cholinesterase (CHE). Bayes discriminant analysis was performed in SPSS to obtain the Bayes discriminant function coefficients, from which the discriminant functions were established. Yang jaundice syndrome: y1 = -21.701 + 2.589 × mother's age + 1.037 × GDM - 17.175 × asphyxia + 13.876 × gestational age + 6.303 × ABO hemolytic disease + 2.116 × RDW-SD + 0.831 × DBIL + 0.012 × ALP + 1.697 × LCR + 0.001 × CHE. Yin jaundice syndrome: y2 = -33.511 + 2.991 × mother's age + 3.960 × GDM - 12.877 × asphyxia + 11.848 × gestational age + 1.820 × ABO hemolytic disease + 2.231 × RDW-SD + 0.999 × DBIL + 0.023 × ALP + 1.916 × LCR + 0.002 × CHE. The Bayes discriminant function was hypothesis tested, giving Wilks' λ = 0.393 (P = 0.000). So Bayes
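The reported discriminant functions can be applied directly: a case is assigned to the syndrome with the larger score. The coefficients below are transcribed from the abstract, while the 0/1 coding assumed here for GDM, asphyxia and ABO hemolytic disease is a guess, since the abstract does not state the variable coding:

```python
# Coefficients as reported in the abstract; 0/1 coding for the categorical
# predictors (GDM, asphyxia, ABO hemolytic disease) is assumed.
YANG = dict(const=-21.701, age=2.589, gdm=1.037, asphyxia=-17.175,
            gest_age=13.876, abo=6.303, rdw_sd=2.116, dbil=0.831,
            alp=0.012, p_lcr=1.697, che=0.001)
YIN = dict(const=-33.511, age=2.991, gdm=3.960, asphyxia=-12.877,
           gest_age=11.848, abo=1.820, rdw_sd=2.231, dbil=0.999,
           alp=0.023, p_lcr=1.916, che=0.002)

def score(coef, x):
    """Evaluate a linear discriminant: constant plus weighted predictors."""
    return coef["const"] + sum(coef[k] * v for k, v in x.items())

def classify(x):
    """Assign the syndrome whose discriminant score is larger."""
    return "yang" if score(YANG, x) >= score(YIN, x) else "yin"

# With all predictors at zero, y1 = -21.701 exceeds y2 = -33.511:
print(classify({}))  # → yang
```

This is the standard decision rule for Fisher/Bayes discriminant functions as produced by SPSS: compute each group's function and pick the maximum.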
Modeling secondary microseismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, Lucia; Stutzmann, Eleonore; Capdeville, Yann; Ardhuin, Fabrice; Schimmel, Martin; Mangenay, Anne; Morelli, Andrea
2013-04-01
Seismic noise is the continuous oscillation of the ground recorded by seismic stations in the period band 5-20 s. In particular, secondary microseisms occur in the period band 5-12 s and are generated in the ocean by the interaction of ocean gravity waves. We present the theory for modeling secondary microseismic noise by normal mode summation. We show that the noise sources can be modeled by vertical forces and how to derive them from a realistic ocean wave model. During the computation we take the bathymetry into account. We show how to compute the bathymetry excitation effect in a realistic Earth model using normal modes, with a comparison to the Longuet-Higgins (1950) approach. The strongest excitation areas in the oceans depend on the bathymetry and period and are different for each seismic mode. We derive an attenuation model that enables a good fit to the vertical-component spectra whatever the station location. We show that the fundamental mode of the Rayleigh wave is the dominant signal in seismic noise and is sufficient to reproduce the main features of the noise spectral amplitude. We also model the horizontal components. There is a discrepancy between real and synthetic spectra on the horizontal components that enables us to estimate the amount of Love waves, for which a different source mechanism is needed. Finally, we investigate noise generated in all the oceans around Africa and show that most of the noise recorded in Algeria (TAM station) is generated in the Northern Atlantic, and that there is a seasonal variability in the contribution of each ocean and sea. Moreover, we also show that the Mediterranean Sea contributes significantly to the short-period noise in winter.
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models
Stein, Richard R.; Marks, Debora S.; Sander, Chris
2015-01-01
Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene–gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866
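A minimal concrete instance of the undirected pairwise maximum-entropy family reviewed here, for binary variables, is P(x) ∝ exp(Σᵢ hᵢxᵢ + Σᵢ<ⱼ Jᵢⱼxᵢxⱼ). The fields and couplings below are illustrative, and the partition function is computed by brute-force enumeration, which is feasible only for tiny systems; real applications rely on the approximate inference methods the review discusses:

```python
import itertools
import math

def pairwise_maxent_distribution(h, J):
    """P(x) ∝ exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j) over x in {0,1}^n,
    normalised by brute-force enumeration (tiny n only)."""
    n = len(h)
    weights = {}
    for x in itertools.product((0, 1), repeat=n):
        e = sum(h[i] * x[i] for i in range(n))
        e += sum(J[i][j] * x[i] * x[j]
                 for i in range(n) for j in range(i + 1, n))
        weights[x] = math.exp(e)
    z = sum(weights.values())  # partition function
    return {x: w / z for x, w in weights.items()}

# Illustrative parameters: a positive coupling between variables 0 and 1.
h = [0.1, -0.2, 0.0]
J = [[0.0, 1.5, 0.0],
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
p = pairwise_maxent_distribution(h, J)
```

The coupling parameters J reflect the direct interactions between observables that such models are fitted to recover, as opposed to raw correlations.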
Normality Index of Ventricular Contraction Based on a Statistical Model from FADS
Jiménez-Ángeles, Luis; Valdés-Cristerna, Raquel; Vallejo, Enrique; Bialostozky, David; Medina-Bañuelos, Verónica
2013-01-01
Radionuclide-based imaging is an alternative to evaluate ventricular function and synchrony and may be used as a tool for the identification of patients that could benefit from cardiac resynchronization therapy (CRT). In a previous work, we used Factor Analysis of Dynamic Structures (FADS) to analyze the contribution and spatial distribution of the 3 most significant factors (3-MSF) present in a dynamic series of equilibrium radionuclide angiography images. In this work, a probability density function model of the 3-MSF extracted from FADS for a control group is presented; also an index, based on the likelihood between the control group's contraction model and a sample of normal subjects is proposed. This normality index was compared with those computed for two cardiopathic populations, satisfying the clinical criteria to be considered as candidates for a CRT. The proposed normality index provides a measure, consistent with the phase analysis currently used in clinical environment, sensitive enough to show contraction differences between normal and abnormal groups, which suggests that it can be related to the degree of severity in the ventricular contraction dyssynchrony, and therefore shows promise as a follow-up procedure for patients under CRT. PMID:23634177
NASA Astrophysics Data System (ADS)
Bakosi, J.; Franzese, P.; Boybeyi, Z.
2007-11-01
Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
NASA Technical Reports Server (NTRS)
Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.
1996-01-01
Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean submodel in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing submodels were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
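A minimal sketch of the two micromixing ideas compared above, on a homogeneous particle ensemble; the pairing rate and constants are illustrative, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def iem_step(phi, omega, dt, c_phi=2.0):
    """Relaxation-to-mean (IEM): every particle relaxes toward the
    ensemble mean at rate c_phi * omega / 2."""
    return phi - 0.5 * c_phi * omega * dt * (phi - phi.mean())

def modified_curl_step(phi, omega, dt, c_phi=2.0):
    """Modified Curl: pick random particle pairs and move each partner a
    uniform random fraction toward the pair mean; this conserves the
    ensemble mean while decaying the variance."""
    phi = phi.copy()
    n = phi.size
    n_pairs = int(1.5 * c_phi * omega * dt * n)  # illustrative pairing rate
    for _ in range(n_pairs):
        p, q = rng.choice(n, size=2, replace=False)
        a = rng.uniform()
        pair_mean = 0.5 * (phi[p] + phi[q])
        phi[p] += a * (pair_mean - phi[p])
        phi[q] += a * (pair_mean - phi[q])
    return phi

phi = rng.normal(1.0, 0.5, size=2000)   # particle scalar values
for _ in range(100):
    phi = modified_curl_step(phi, omega=10.0, dt=1e-3)
# the scalar mean is conserved while the variance decays
```

The key contrast: IEM moves every particle deterministically toward one target, while modified Curl mixes random pairs by random extents, which produces a qualitatively different (non-collapsing) shape of the scalar PDF.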
Time Series Modeling of Pathogen-Specific Disease Probabilities with Subsampled Data
Fisher, Leigh; Wakefield, Jon; Bauer, Cici; Self, Steve
2016-01-01
Many diseases arise due to exposure to one of multiple possible pathogens. We consider the situation in which disease counts are available over time from a study region, along with a measure of clinical disease severity, for example, mild or severe. In addition, we suppose a subset of the cases are lab tested in order to determine the pathogen responsible for disease. In such a context, we focus interest on modeling the probabilities of disease incidence given pathogen type. The time course of these probabilities is of great interest as is the association with time-varying covariates such as meteorological variables. In this setup, a natural Bayesian approach would be based on imputation of the unsampled pathogen information using Markov chain Monte Carlo, but this is computationally challenging. We describe a practically implementable approach to inference in which we use an empirical Bayes procedure in a first step to estimate summary statistics. We then treat these summary statistics as the observed data and develop a Bayesian generalized additive model. We analyze data on hand, foot and mouth disease (HFMD) in China in which there are two pathogens of primary interest, enterovirus 71 (EV71) and Coxsackie A16 (CA16). We find that both EV71 and CA16 are associated with temperature, relative humidity and wind speed, with reasonably similar functional forms for both pathogens. The important issue of confounding by time is modeled using a penalized B-spline model with a random effects representation. The level of smoothing is addressed by a careful choice of the prior on the tuning variance. PMID:27378138
Time series modeling of pathogen-specific disease probabilities with subsampled data.
Fisher, Leigh; Wakefield, Jon; Bauer, Cici; Self, Steve
2017-03-01
Many diseases arise due to exposure to one of multiple possible pathogens. We consider the situation in which disease counts are available over time from a study region, along with a measure of clinical disease severity, for example, mild or severe. In addition, we suppose a subset of the cases are lab tested in order to determine the pathogen responsible for disease. In such a context, we focus interest on modeling the probabilities of disease incidence given pathogen type. The time course of these probabilities is of great interest as is the association with time-varying covariates such as meteorological variables. In this setup, a natural Bayesian approach would be based on imputation of the unsampled pathogen information using Markov chain Monte Carlo, but this is computationally challenging. We describe a practical approach to inference that is easy to implement. We use an empirical Bayes procedure in a first step to estimate summary statistics. We then treat these summary statistics as the observed data and develop a Bayesian generalized additive model. We analyze data on hand, foot, and mouth disease (HFMD) in China in which there are two pathogens of primary interest, enterovirus 71 (EV71) and Coxsackie A16 (CA16). We find that both EV71 and CA16 are associated with temperature, relative humidity, and wind speed, with reasonably similar functional forms for both pathogens. The important issue of confounding by time is modeled using a penalized B-spline model with a random effects representation. The level of smoothing is addressed by a careful choice of the prior on the tuning variance. © 2016, The International Biometric Society.
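The first-step empirical Bayes idea (shrinking each period's lab-typed pathogen fraction toward a pooled estimate before scaling the total counts) can be sketched as follows; the beta-binomial prior strength and all counts are hypothetical:

```python
import numpy as np

def eb_pathogen_incidence(total_cases, n_tested, n_positive, prior_strength=10.0):
    """First-step empirical Bayes sketch: shrink each period's lab-tested
    pathogen fraction toward the overall pooled fraction (beta-binomial
    prior), then scale the total case count to a pathogen-specific count."""
    total_cases = np.asarray(total_cases, dtype=float)
    n_tested = np.asarray(n_tested, dtype=float)
    n_positive = np.asarray(n_positive, dtype=float)
    p_overall = n_positive.sum() / n_tested.sum()   # pooled EV71 fraction
    alpha = prior_strength * p_overall
    beta = prior_strength * (1.0 - p_overall)
    p_shrunk = (n_positive + alpha) / (n_tested + alpha + beta)
    return total_cases * p_shrunk

# hypothetical weekly HFMD counts, with only a small subsample lab-typed
cases  = [120, 150, 90, 200]
tested = [10, 12, 8, 15]
ev71   = [4, 7, 2, 9]
ev71_cases = eb_pathogen_incidence(cases, tested, ev71)
```

Shrinkage matters here because the weekly lab subsamples are tiny; the resulting pathogen-specific summaries would then be treated as observed data by the second-stage generalized additive model.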
Schindler, Dirk; Grebhan, Karin; Albrecht, Axel; Schönborn, Jochen
2009-11-01
The wind damage probability (P(DAM)) in the forests of the federal state of Baden-Wuerttemberg (Southwestern Germany) was calculated using weights-of-evidence (WofE) methodology and a logistic regression model (LRM) after the winter storm 'Lothar' in December 1999. A geographic information system (GIS) was used for the area-wide spatial prediction and mapping of P(DAM). The combination of the six evidential themes forest type, soil type, geology, soil moisture, soil acidification, and the 'Lothar' maximum gust field predicted wind damage best and was used to map P(DAM) on a 50 x 50 m resolution grid. GIS software was utilised to produce probability maps, which allowed the identification of areas of low, moderate, and high P(DAM) across the study area. The highest P(DAM) values were calculated for coniferous forest growing on acidic, fresh to moist soils on bunter sandstone formations, provided that the 'Lothar' maximum gust speed exceeded 35 m s⁻¹ in the areas in question. One of the most significant benefits of this study is that, for the first time, there is a GIS-based area-wide quantification of P(DAM) in the forests of Southwestern Germany. In combination with the experience and expert knowledge of local foresters, the probability maps produced can be used as an important decision-support tool for future silvicultural activities aimed at reducing wind damage. One limitation of the P(DAM) predictions is that they are based on only one major storm event. At the moment it is not possible to relate storm event intensity to the amount of wind damage in forests, due to the lack of comprehensive long-term tree and stand damage data across the study area.
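A toy version of the logistic-regression half of this approach, with synthetic covariates (gust speed and a soil-moisture index) standing in for the six evidential themes, and a coarse grid standing in for the 50 x 50 m raster:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, n_iter=3000):
    """Plain gradient-ascent fit of a logistic regression model on
    standardized covariates; an intercept column is added internally."""
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X1 @ w)
        w += lr * X1.T @ (y - p) / len(y)
    return w

# synthetic stand data: gust speed (m/s) and a soil-moisture index
rng = np.random.default_rng(1)
gust = rng.uniform(20, 45, size=500)
moist = rng.uniform(0.0, 1.0, size=500)
true_logit = -10.0 + 0.25 * gust + 1.5 * moist    # assumed true model
y = (rng.uniform(size=500) < sigmoid(true_logit)).astype(float)

X = np.column_stack([gust, moist])
mu, sd = X.mean(axis=0), X.std(axis=0)
w = fit_logistic((X - mu) / sd, y)

# map the damage probability over a coarse gust x moisture grid
gg, mm = np.meshgrid(np.linspace(20, 45, 6), np.linspace(0, 1, 5))
grid = (np.column_stack([gg.ravel(), mm.ravel()]) - mu) / sd
p_dam = sigmoid(np.column_stack([np.ones(len(grid)), grid]) @ w).reshape(gg.shape)
```

In the study itself each grid cell carries its own evidential-theme values; here the grid simply shows how the fitted model turns covariate fields into a probability map.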
Estimating the probability for a protein to have a new fold: A statistical computational model
Portugaly, Elon; Linial, Michal
2000-01-01
Structural genomics aims to solve a large number of protein structures that represent the protein space. Currently an exhaustive solution for all structures seems prohibitively expensive, so the challenge is to define a relatively small set of proteins with new, currently unknown folds. This paper presents a method that assigns each protein with a probability of having an unsolved fold. The method makes extensive use of protomap, a sequence-based classification, and scop, a structure-based classification. According to protomap, the protein space encodes the relationship among proteins as a graph whose vertices correspond to 13,354 clusters of proteins. A representative fold for a cluster with at least one solved protein is determined after superposition of all scop (release 1.37) folds onto protomap clusters. Distances within the protomap graph are computed from each representative fold to the neighboring folds. The distribution of these distances is used to create a statistical model for distances among those folds that are already known and those that have yet to be discovered. The distribution of distances for solved/unsolved proteins is significantly different. This difference makes it possible to use Bayes' rule to derive a statistical estimate that any protein has a yet undetermined fold. Proteins that score the highest probability to represent a new fold constitute the target list for structural determination. Our predicted probabilities for unsolved proteins correlate very well with the proportion of new folds among recently solved structures (new scop 1.39 records) that are disjoint from our original training set. PMID:10792051
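The Bayes'-rule step can be sketched with assumed, purely illustrative distance distributions for solved and unsolved folds (in the paper these come from the protomap graph distances to representative scop folds):

```python
import numpy as np

def p_new_fold(d, prior_new, lik_new, lik_solved):
    """Posterior probability that a cluster at graph distance d from the
    nearest representative fold has a new (unsolved) fold, by Bayes' rule."""
    num = prior_new * lik_new[d]
    den = num + (1.0 - prior_new) * lik_solved[d]
    return num / den

# assumed discretized distance likelihoods: clusters with new folds tend
# to lie farther from known representative folds than solved ones do
distances = np.arange(5)
lik_solved = np.array([0.45, 0.30, 0.15, 0.07, 0.03])
lik_new    = np.array([0.05, 0.10, 0.20, 0.30, 0.35])
prior_new = 0.3

posterior = [p_new_fold(d, prior_new, lik_new, lik_solved) for d in distances]
# the posterior rises with distance from the nearest solved fold
```

Ranking clusters by this posterior and taking the top scorers is exactly how a target list for structural determination would be assembled.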
Analytical expression for the exit probability of the q-voter model in one dimension
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Galam, Serge
2015-07-01
We present in this paper an approximation that is able to give an analytical expression for the exit probability of the q-voter model in one dimension. This expression gives a better fit to the more recent data from simulations in large networks [A. M. Timpanaro and C. P. C. do Prado, Phys. Rev. E 89, 052808 (2014), 10.1103/PhysRevE.89.052808] and as such departs from the expression ρ^q/[ρ^q+(1-ρ)^q] found in papers that investigated small networks only [R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007; P. Przybyła et al., Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117; F. Slanina et al., Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006]. The approximation consists in assuming a large separation between the time scale at which active groups of agents convince inactive ones and the time taken by the competition between active groups. Some interesting findings are that for q=2 we still have ρ²/[ρ²+(1-ρ)²] as the exit probability, and for q>2 we can obtain a lower-order approximation of the form ρ^s/[ρ^s+(1-ρ)^s] with s varying from q for low values of q to q-1/2 for large values of q. As such, this work can also be seen as a deduction of why the exit probability ρ^q/[ρ^q+(1-ρ)^q] gives a good fit, without relying on mean-field arguments or on the assumption that only the first step is nondeterministic, as s=q and s=q-1/2 give very similar results when q→∞.
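The functional form, and the claimed closeness of the s = q and s = q - 1/2 curves at large q, can be checked numerically; a minimal sketch:

```python
import numpy as np

def exit_probability(rho, s):
    """Exit-probability ansatz rho^s / [rho^s + (1 - rho)^s]."""
    rho = np.asarray(rho, dtype=float)
    return rho**s / (rho**s + (1.0 - rho)**s)

# maximum gap between the s = q and s = q - 1/2 curves, for several q
rho = np.linspace(0.01, 0.99, 99)
gaps = {q: np.max(np.abs(exit_probability(rho, q)
                         - exit_probability(rho, q - 0.5)))
        for q in (2, 5, 20)}
# the gap shrinks as q grows, so the two exponents become interchangeable
```

The gap decays roughly like 1/q because the curve's transition near ρ = 1/2 sharpens at the same rate that the half-unit change in the exponent matters.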
Modeling normal shock velocity curvature relations for heterogeneous explosives
NASA Astrophysics Data System (ADS)
Yoo, Sunhee; Crochet, Michael; Pemberton, Steven
2017-01-01
The theory of Detonation Shock Dynamics (DSD) is, in part, an asymptotic method to model a functional form of the relation between the shock normal, its time rate, and the shock curvature κ. In addition, shock polar analysis provides a relation between the shock angle θ and the detonation velocity Dn that depends on the equations of state (EOS) of the two adjacent materials. For the axial detonation of an explosive material confined by a cylinder, the shock angle is defined as the angle between the shock normal and the normal to the cylinder liner, located at the intersection of the shock front and the cylinder inner wall. Therefore, given an ideal explosive such as PBX-9501 with the two functional models determined, a unique, smooth detonation front shape ψ can be determined that approximates the steady-state detonation shock front of the explosive. However, experimental measurements of the Dn(κ) relation for heterogeneous explosives such as PBXN-111 [D. K. Kennedy, 2000] are challenging due to the non-smoothness and asymmetry usually observed in the experimental streak records of explosion fronts. Among many possibilities, the asymmetric character may be attributed to the heterogeneity of the explosives; here, material heterogeneity refers to compositions with multiple components and a grain morphology that can be modeled statistically. Therefore, in extending the formulation of DSD to modern novel explosives, we pose two questions: (1) is there any simple hydrodynamic model that can simulate such an asymmetric shock evolution, and (2) what statistics can be derived for the asymmetry using simulations with defined structural heterogeneity in the unreacted explosive? Saenz, Taylor and Stewart [1] studied constitutive models for derivation of the Dn(κ) relation for porous homogeneous explosives and carried out simulations in a spherical coordinate frame. In this paper we extend their model to account for heterogeneity and present shock evolutions in heterogeneous explosives.
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km3 that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition-probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since lithium-ion batteries are complex electrochemical devices and are typically assembled into large packs, their monitoring and safety are key issues for the application of battery technology. An accurate estimation of the remaining battery capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is combined with an electrochemical model to obtain more accurate voltage predictions. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operating current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
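The RC equivalent-circuit idea behind the voltage prediction can be sketched at first order, with coulomb-counting SOC; all cell parameters and the open-circuit-voltage curve below are assumed values, not the paper's identified ones:

```python
import numpy as np

def simulate_terminal_voltage(i_load, dt, capacity_ah, soc0=1.0,
                              r0=0.01, r1=0.015, c1=2000.0):
    """First-order RC equivalent-circuit sketch: coulomb counting for SOC
    plus a single RC branch for the polarization voltage u1."""
    def ocv(soc):
        # assumed open-circuit-voltage curve, monotone in SOC
        return 3.2 + 1.0 * soc

    soc, u1 = soc0, 0.0
    socs, v_term = [], []
    for i in i_load:                              # discharge current positive
        soc -= i * dt / (capacity_ah * 3600.0)    # coulomb counting
        u1 += dt * (-u1 / (r1 * c1) + i / c1)     # RC branch ODE, forward Euler
        socs.append(soc)
        v_term.append(ocv(soc) - r0 * i - u1)     # terminal voltage
    return np.array(socs), np.array(v_term)

# constant 2 A discharge of an assumed 2.6 Ah cell for one hour
i_load = np.full(3600, 2.0)
soc, v = simulate_terminal_voltage(i_load, dt=1.0, capacity_ah=2.6)
```

An estimator like the paper's would run this model forward and correct the SOC state against measured terminal voltage; the sketch shows only the open-loop model half.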
An investigation of a quantum probability model for the constructive effect of affective evaluation.
White, Lee C; Barqué-Duran, Albert; Pothos, Emmanuel M
2016-01-13
The idea that choices can have a constructive effect has received a great deal of empirical support. The act of choosing appears to influence subsequent preferences for the options available. Recent research has proposed a cognitive model based on quantum probability (QP), which suggests that whether or not a participant provides an affective evaluation for a positively or negatively valenced stimulus can also be constructive and so, for example, influence the affective evaluation of a second oppositely valenced stimulus. However, there are some outstanding methodological questions in relation to this previous research. This paper reports the results of three experiments designed to resolve these questions. Experiment 1, using a binary response format, provides partial support for the interaction predicted by the QP model; and Experiment 2, which controls for the length of time participants have to respond, fully supports the QP model. Finally, Experiment 3 sought to determine whether the key effect can generalize beyond affective judgements about visual stimuli. Using judgements about the trustworthiness of well-known people, the predictions of the QP model were confirmed. Together, these three experiments provide further support for the QP model of the constructive effect of simple evaluations.
PHOTOMETRIC REDSHIFTS AND QUASAR PROBABILITIES FROM A SINGLE, DATA-DRIVEN GENERATIVE MODEL
Bovy, Jo; Hogg, David W.; Weaver, Benjamin A.; Myers, Adam D.; Hennawi, Joseph F.; McMahon, Richard G.; Schiminovich, David; Sheldon, Erin S.; Brinkmann, Jon; Schneider, Donald P.
2012-04-10
We describe a technique for simultaneously classifying and estimating the redshift of quasars. It can separate quasars from stars in arbitrary redshift ranges, estimate full posterior distribution functions for the redshift, and naturally incorporate flux uncertainties, missing data, and multi-wavelength photometry. We build models of quasars in flux-redshift space by applying the extreme deconvolution technique to estimate the underlying density. By integrating this density over redshift, one can obtain quasar flux densities in different redshift ranges. This approach allows for efficient, consistent, and fast classification and photometric redshift estimation. This is achieved by combining the speed obtained by choosing simple analytical forms as the basis of our density model with the flexibility of non-parametric models through the use of many simple components with many parameters. We show that this technique is competitive with the best photometric quasar classification techniques (which are limited to fixed, broad redshift ranges and high signal-to-noise ratio data) and with the best photometric redshift techniques when applied to broadband optical data. We demonstrate that the inclusion of UV and NIR data significantly improves photometric quasar-star separation and essentially resolves all of the redshift degeneracies for quasars inherent to the ugriz filter system, even when included data have a low signal-to-noise ratio. For quasars spectroscopically confirmed by the SDSS, 84% and 97% of the objects with Galaxy Evolution Explorer UV and UKIDSS NIR data have photometric redshifts within 0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about a factor of three improvement over ugriz-only photometric redshifts. Our code to calculate quasar probabilities and redshift probability distributions is publicly available.
Photometric redshifts and quasar probabilities from a single, data-driven generative model
Bovy, Jo; Myers, Adam D.; Hennawi, Joseph F.; Hogg, David W.; McMahon, Richard G.; Schiminovich, David; Sheldon, Erin S.; Brinkmann, Jon; Schneider, Donald P.; Weaver, Benjamin A.
2012-03-20
We describe a technique for simultaneously classifying and estimating the redshift of quasars. It can separate quasars from stars in arbitrary redshift ranges, estimate full posterior distribution functions for the redshift, and naturally incorporate flux uncertainties, missing data, and multi-wavelength photometry. We build models of quasars in flux-redshift space by applying the extreme deconvolution technique to estimate the underlying density. By integrating this density over redshift, one can obtain quasar flux densities in different redshift ranges. This approach allows for efficient, consistent, and fast classification and photometric redshift estimation. This is achieved by combining the speed obtained by choosing simple analytical forms as the basis of our density model with the flexibility of non-parametric models through the use of many simple components with many parameters. We show that this technique is competitive with the best photometric quasar classification techniques (which are limited to fixed, broad redshift ranges and high signal-to-noise ratio data) and with the best photometric redshift techniques when applied to broadband optical data. We demonstrate that the inclusion of UV and NIR data significantly improves photometric quasar-star separation and essentially resolves all of the redshift degeneracies for quasars inherent to the ugriz filter system, even when included data have a low signal-to-noise ratio. For quasars spectroscopically confirmed by the SDSS, 84% and 97% of the objects with Galaxy Evolution Explorer UV and UKIDSS NIR data have photometric redshifts within 0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about a factor of three improvement over ugriz-only photometric redshifts. Our code to calculate quasar probabilities and redshift probability distributions is publicly available.
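The conditioning step (slicing a density model of flux-redshift space at the observed flux to obtain a redshift posterior) can be illustrated with a two-component Gaussian mixture standing in for the extreme-deconvolution density; all component parameters here are invented:

```python
import numpy as np

def redshift_posterior(flux_obs, weights, means, covs, z_grid):
    """Toy version of classification by density: evaluate a Gaussian-
    mixture model of the joint (flux, redshift) density at the observed
    flux along a redshift grid, then normalize to a posterior."""
    pts = np.column_stack([np.full_like(z_grid, flux_obs), z_grid])
    dens = np.zeros_like(z_grid)
    for w, mu, cov in zip(weights, means, covs):
        diff = pts - mu
        inv = np.linalg.inv(cov)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        dens += w * norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff))
    return dens / (dens.sum() * (z_grid[1] - z_grid[0]))

# two assumed components standing in for low-z and high-z quasar loci
weights = [0.6, 0.4]
means = [np.array([1.0, 0.5]), np.array([2.0, 2.5])]
covs = [np.diag([0.2, 0.1]), np.diag([0.3, 0.2])]
z_grid = np.linspace(0.0, 4.0, 401)
post = redshift_posterior(1.9, weights, means, covs, z_grid)
# for this flux the posterior peaks near the high-z component
```

The real method works in a multi-band flux space and handles per-object flux uncertainties; the one-flux slice above only shows why the same fitted density yields both classification and redshift posteriors.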
NASA Astrophysics Data System (ADS)
Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng
2017-08-01
Because the maximum velocities and safe headway distances of different vehicles are not exactly the same, an extended macroscopic model of traffic flow that considers multiple optimal velocity functions with probabilities is proposed in this paper. By means of linear stability theory, the new model's linear stability condition under multiple-probability optimal velocities is obtained. The KdV-Burgers equation is derived through nonlinear analysis to describe the propagation of traffic density waves near the neutral stability line. Numerical simulations of the influence of multiple maximum velocities and multiple safety distances on the model's stability and traffic capacity are carried out for three cases: two maximum speeds with the same safe headway distance, two safe headway distances with the same maximum speed, and two maximum velocities combined with two time gaps. The first case demonstrates that as the proportion of vehicles with a larger vmax increases, the traffic tends to become unstable, meaning that sudden acceleration and braking are not conducive to traffic stability and more easily result in stop-and-go phenomena. The second case shows that as the proportion of vehicles with greater safety spacing increases, the traffic also tends to become unstable, meaning that overly cautious assumptions or weak driving skill are not conducive to traffic stability. The last case indicates that an increase in maximum speed is not conducive to traffic stability, while a reduction of the safe headway distance is conducive to it. The simulations further show that mixed driving and traffic diversion have no effect on traffic capacity when traffic density is low or heavy; mixed driving should be chosen to increase the traffic capacity when the traffic density is lower, while traffic diversion should be chosen to increase the traffic capacity when the traffic density is higher.
McClure, Meredith L; Burdett, Christopher L; Farnsworth, Matthew L; Lutman, Mark W; Theobald, David M; Riggs, Philip D; Grear, Daniel A; Miller, Ryan S
2015-01-01
Wild pigs (Sus scrofa), also known as wild swine, feral pigs, or feral hogs, are one of the most widespread and successful invasive species around the world. Wild pigs have been linked to extensive and costly agricultural damage and present a serious threat to plant and animal communities due to their rooting behavior and omnivorous diet. We modeled the current distribution of wild pigs in the United States to better understand the physiological and ecological factors that may determine their invasive potential and to guide future study and eradication efforts. Using national-scale wild pig occurrence data reported between 1982 and 2012 by wildlife management professionals, we estimated the probability of wild pig occurrence across the United States using a logistic discrimination function and environmental covariates hypothesized to influence the distribution of the species. Our results suggest the distribution of wild pigs in the U.S. was most strongly limited by cold temperatures and availability of water, and that they were most likely to occur where potential home ranges had higher habitat heterogeneity, providing access to multiple key resources including water, forage, and cover. High probability of occurrence was also associated with frequent high temperatures, up to a high threshold. However, this pattern is driven by pigs' historic distribution in warm climates of the southern U.S. Further study of pigs' ability to persist in cold northern climates is needed to better understand whether low temperatures actually limit their distribution. Our model highlights areas at risk of invasion as those with habitat conditions similar to those found in pigs' current range that are also near current populations. This study provides a macro-scale approach to generalist species distribution modeling that is applicable to other generalist and invasive species.
McClure, Meredith L.; Burdett, Christopher L.; Farnsworth, Matthew L.; Lutman, Mark W.; Theobald, David M.; Riggs, Philip D.; Grear, Daniel A.; Miller, Ryan S.
2015-01-01
Wild pigs (Sus scrofa), also known as wild swine, feral pigs, or feral hogs, are one of the most widespread and successful invasive species around the world. Wild pigs have been linked to extensive and costly agricultural damage and present a serious threat to plant and animal communities due to their rooting behavior and omnivorous diet. We modeled the current distribution of wild pigs in the United States to better understand the physiological and ecological factors that may determine their invasive potential and to guide future study and eradication efforts. Using national-scale wild pig occurrence data reported between 1982 and 2012 by wildlife management professionals, we estimated the probability of wild pig occurrence across the United States using a logistic discrimination function and environmental covariates hypothesized to influence the distribution of the species. Our results suggest the distribution of wild pigs in the U.S. was most strongly limited by cold temperatures and availability of water, and that they were most likely to occur where potential home ranges had higher habitat heterogeneity, providing access to multiple key resources including water, forage, and cover. High probability of occurrence was also associated with frequent high temperatures, up to a high threshold. However, this pattern is driven by pigs’ historic distribution in warm climates of the southern U.S. Further study of pigs’ ability to persist in cold northern climates is needed to better understand whether low temperatures actually limit their distribution. Our model highlights areas at risk of invasion as those with habitat conditions similar to those found in pigs’ current range that are also near current populations. This study provides a macro-scale approach to generalist species distribution modeling that is applicable to other generalist and invasive species. PMID:26267266
Fixation probability and the crossing time in the Wright-Fisher multiple alleles model
NASA Astrophysics Data System (ADS)
Gill, Wonpyong
2009-08-01
The fixation probability and crossing time in the Wright-Fisher multiple alleles model, which describes a finite haploid population, were calculated by switching on an asymmetric sharply-peaked landscape with a positive asymmetric parameter, r, such that the reversal allele of the optimal allele has higher fitness than the optimal allele. The fixation probability, which was evaluated as the ratio of the first arrival time at the reversal allele to the origination time, was double the selective advantage of the reversal allele compared with the optimal allele in the strong selection region, where the fitness parameter, k, is much larger than the critical fitness parameter, kc. The crossing time in a finite population for r>0 and k
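A Monte Carlo sketch of fixation probability in a haploid Wright-Fisher model with a selective advantage (a flat two-allele reduction, not the paper's sharply-peaked multi-allele landscape; population size and s are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def fixation_probability(n_pop, s, n_trials=20000):
    """Monte Carlo estimate of the fixation probability of a single
    beneficial mutant (relative fitness 1 + s) under Wright-Fisher
    binomial resampling in a haploid population of size n_pop."""
    fixed = 0
    for _ in range(n_trials):
        k = 1                                   # copies of the mutant allele
        while 0 < k < n_pop:
            # selection-weighted sampling probability for the mutant
            p = k * (1.0 + s) / (k * (1.0 + s) + (n_pop - k))
            k = rng.binomial(n_pop, p)          # next generation
        fixed += (k == n_pop)
    return fixed / n_trials

est = fixation_probability(n_pop=200, s=0.05)
# diffusion approximation: (1 - exp(-2s)) / (1 - exp(-2*N*s)), roughly 2s
```

For N = 200 and s = 0.05 the diffusion approximation gives about 0.095, close to double the selective advantage, consistent with the factor-of-two relation quoted in the abstract for the strong-selection regime.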
LaBudde, Robert A.; Harnly, James M.
2013-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive non-target (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given. PMID:22468371
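POI and an interval estimate for it can be computed directly from replicate results; a sketch using the Wilson score interval (the choice of interval and the replicate counts are ours, not necessarily the report's):

```python
import math

def poi_with_wilson_ci(n_identified, n_replicates, z=1.96):
    """Probability of identification (POI) for a qualitative binary
    method, with a Wilson score confidence interval for the proportion."""
    p = n_identified / n_replicates
    denom = 1.0 + z**2 / n_replicates
    center = (p + z**2 / (2.0 * n_replicates)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n_replicates
                                   + z**2 / (4.0 * n_replicates**2))
    return p, (center - half, center + half)

# e.g. 11 of 12 replicates of the target material identified
poi, (lo, hi) = poi_with_wilson_ci(11, 12)
```

Plotting POI (with its interval) against analyte concentration or non-target fraction gives exactly the kind of response curve the statistical model above is designed to report.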
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Lin, Ranhao; O'Shea, Heather
2007-11-01
If contamination is observed in an aquifer, a backward probability model can be used to obtain information about the former position of the observed contamination. A backward location probability density function (PDF) describes the possible former positions of the observed contaminant particle at a specified time in the past. If the source release time is known or can be estimated, the backward location PDF can be used to identify possible source locations. For sorbing solutes, the location PDF depends on the phase (aqueous or sorbed) of the observed contamination and on the phase of the contamination at the source. These PDFs are related to adjoint states of aqueous and sorbed phase concentrations. The adjoint states, however, do not take into account the measured concentrations. Neupauer and Lin (2006) presented an approach for conditioning backward location PDFs on measured concentrations of non-reactive solutes. In this paper, we present a related conditioning method to identify the location of an instantaneous point source of a solute that exhibits first-order decay and linear equilibrium or non-equilibrium sorption. We derive the conditioning equations and present an illustrative example to demonstrate important features of the technique. Finally, we illustrate the use of the conditioned location PDF to identify possible sources of contamination by using data from a trichloroethylene plume at the Massachusetts Military Reservation.
Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv
2012-01-01
This paper presents the probability-density-based gradient projection (GP) of the null space of the Jacobian for a 25 degree-of-freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and to maximize the similarity between predicted inverse kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace. The number of increments in each direction (x, y, and z) was varied from 1 to 20. Performance of the method was evaluated by finding the root mean squared (RMS) difference between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets of subjects included in and excluded from the density function. The performance of the GP method for subjects included and excluded from the density function was evaluated to test the robustness of the method. Accuracy of the GP method varied with the incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error for included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.
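The gradient-projection scheme itself (primary task through the Jacobian pseudoinverse, secondary objective confined to the Jacobian null space) can be sketched independently of the RHBM; the Jacobian and gradient below are random stand-ins:

```python
import numpy as np

def gp_ik_step(jacobian, dx, grad_h, alpha=1.0):
    """Gradient-projection step: the primary task is met through the
    pseudoinverse, while the secondary objective grad_h acts only in the
    null space of the Jacobian and so cannot disturb the task motion."""
    j_pinv = np.linalg.pinv(jacobian)
    null_proj = np.eye(jacobian.shape[1]) - j_pinv @ jacobian
    return j_pinv @ dx + alpha * null_proj @ grad_h

rng = np.random.default_rng(3)
J = rng.normal(size=(3, 7))          # 3-D task, 7-DOF redundant chain
dx = np.array([0.01, 0.0, -0.02])    # desired end-effector displacement
grad_h = rng.normal(size=7)          # e.g. gradient of a pose-probability term

dq = gp_ik_step(J, dx, grad_h)
# J @ dq equals dx regardless of grad_h
```

In the paper the secondary gradient points toward high-probability joint configurations learned from the workspace density; any differentiable preference term can be slotted into `grad_h` the same way.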
LaBudde, Robert A; Harnly, James M
2012-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.
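As a sketch, the POI statistic is just the observed proportion of identified replicates, and an interval estimate conveys its uncertainty. The Wilson score interval used here is an assumed choice; the report's exact interval method may differ:

```python
import math

def poi_with_wilson_ci(identified, replicates, z=1.96):
    """Probability of identification (POI): the proportion of replicates
    identified, with an approximate 95% Wilson score confidence interval."""
    p = identified / replicates
    denom = 1.0 + z**2 / replicates
    centre = (p + z**2 / (2 * replicates)) / denom
    half = z * math.sqrt(p * (1 - p) / replicates
                         + z**2 / (4 * replicates**2)) / denom
    return p, max(0.0, centre - half), min(1.0, centre + half)
```

For example, 9 identifications in 10 replicates gives POI = 0.9 with an interval of roughly (0.60, 0.98), illustrating how few replicates constrain a binary statistic.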
Pharmacokinetic modeling of ascorbate diffusion through normal and tumor tissue.
Kuiper, Caroline; Vissers, Margreet C M; Hicks, Kevin O
2014-12-01
Ascorbate is delivered to cells via the vasculature, but its ability to penetrate into tissues remote from blood vessels is unknown. This is particularly relevant to solid tumors, which often contain regions with dysfunctional vasculature, with impaired oxygen and nutrient delivery, resulting in upregulation of the hypoxic response and also the likely depletion of essential plasma-derived biomolecules, such as ascorbate. In this study, we have utilized a well-established multicell-layered, three-dimensional pharmacokinetic model to measure ascorbate diffusion and transport parameters through dense tissue in vitro. Ascorbate was found to penetrate the tissue at a slightly lower rate than mannitol and to travel via the paracellular route. Uptake parameters into the cells were also determined. These data were fitted to the diffusion model, and simulations of ascorbate pharmacokinetics in normal tissue and in hypoxic tumor tissue were performed with varying input concentrations, ranging from normal dietary plasma levels (10-100 μM) to pharmacological levels (>1 mM) as seen with intravenous infusion. The data and simulations demonstrate heterogeneous distribution of ascorbate in tumor tissue at physiological blood levels and provide insight into the range of plasma ascorbate concentrations and exposure times needed to saturate all regions of a tumor. The predictions suggest that supraphysiological plasma ascorbate concentrations (>100 μM) are required to achieve effective delivery of ascorbate to poorly vascularized tumor tissue.
A normalized statistical metric space for hidden Markov models.
Lu, Chen; Schwier, Jason M; Craven, Ryan M; Yu, Lu; Brooks, Richard R; Griffin, Christopher
2013-06-01
In this paper, we present a normalized statistical metric space for hidden Markov models (HMMs). HMMs are widely used to model real-world systems. Like graph matching, some previous approaches compare HMMs by evaluating the correspondence, or goodness of match, between every pair of states, concentrating on the structure of the models instead of the statistics of the process being observed. To remedy this, we present a new metric space that compares the statistics of HMMs within a given level of statistical significance. Compared with the Kullback-Leibler divergence, which is another widely used approach for measuring model similarity, our approach is a true metric, can always return an appropriate distance value, and provides a confidence measure on the metric value. Experimental results are given for a sample application, which quantify the similarity of HMMs of network traffic in the Tor anonymization system. This application is interesting since it considers models extracted from a system that is intentionally trying to obfuscate its internal workings. In the conclusion, we discuss applications in less-challenging domains, such as data mining.
Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
NASA Astrophysics Data System (ADS)
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open-source software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (a computational problem in the maximum likelihood estimation), but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, this may be impossible in the reverse direction. We also demonstrate that foreign exchange closing prices can be forecast with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
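A minimal sketch of the fit-and-test step using SciPy's norminvgauss (the abstract used R, so this is an illustration, not the authors' workflow); the symmetric starting guesses for the optimizer are assumptions:

```python
import numpy as np
from scipy import stats

def fit_nig_and_test(prices):
    """Fit a normal inverse Gaussian (NIG) distribution to a series by
    maximum likelihood and return the fitted parameters (a, b, loc, scale)
    together with a Kolmogorov-Smirnov goodness-of-fit p-value."""
    prices = np.asarray(prices, dtype=float)
    # Start the optimizer at a symmetric NIG (b = 0) to help convergence;
    # MLE for the NIG can fail numerically, as the abstract notes for JPY/CHF.
    params = stats.norminvgauss.fit(prices, 1.0, 0.0)
    pvalue = stats.kstest(prices, "norminvgauss", args=params).pvalue
    return params, pvalue
```

Note the KS p-value computed with fitted parameters is optimistic; a parametric bootstrap would be the stricter check.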
Flint, Alexander C; Rao, Vivek A; Chan, Sheila L; Cullen, Sean P; Faigeles, Bonnie S; Smith, Wade S; Bath, Philip M; Wahlgren, Nils; Ahmed, Niaz; Donnan, Geoff A; Johnston, S Claiborne
2015-08-01
The Totaled Health Risks in Vascular Events (THRIVE) score is a previously validated ischemic stroke outcome prediction tool. Although simplified scoring systems like the THRIVE score facilitate ease-of-use, when computers or devices are available at the point of care, a more accurate and patient-specific estimation of outcome probability should be possible by computing the logistic equation with patient-specific continuous variables. We used data from 12 207 subjects from the Virtual International Stroke Trials Archive and the Safe Implementation of Thrombolysis in Stroke - Monitoring Study to develop and validate the performance of a model-derived estimation of outcome probability, the THRIVE-c calculation. Models were built with logistic regression using the underlying predictors from the THRIVE score: age, National Institutes of Health Stroke Scale score, and the Chronic Disease Scale (presence of hypertension, diabetes mellitus, or atrial fibrillation). Receiver operator characteristics analysis was used to assess model performance and compare the THRIVE-c model to the traditional THRIVE score, using a two-tailed Chi-squared test. The THRIVE-c model performed similarly in the randomly chosen development cohort (n = 6194, area under the curve = 0·786, 95% confidence interval 0·774-0·798) and validation cohort (n = 6013, area under the curve = 0·784, 95% confidence interval 0·772-0·796) (P = 0·79). Similar performance was also seen in two separate external validation cohorts. The THRIVE-c model (area under the curve = 0·785, 95% confidence interval 0·777-0·793) had superior performance when compared with the traditional THRIVE score (area under the curve = 0·746, 95% confidence interval 0·737-0·755) (P < 0·001). By computing the logistic equation with patient-specific continuous variables in the THRIVE-c calculation, outcomes at the individual patient level are more accurately estimated. Given the widespread
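The THRIVE-c idea, computing the logistic equation with continuous patient-specific covariates rather than a banded integer score, can be sketched as follows. The coefficient values below are placeholders for illustration, not the published THRIVE-c fit:

```python
import math

# Placeholder coefficients (NOT the published THRIVE-c model): intercept,
# per-year age effect, per-point NIHSS effect, per-point Chronic Disease
# Scale effect on the log-odds of poor outcome.
PLACEHOLDER_COEF = {"intercept": -2.0, "age": 0.03, "nihss": 0.12, "cds": 0.4}

def outcome_probability(age, nihss, cds, coef=PLACEHOLDER_COEF):
    """Patient-specific outcome probability from a logistic model computed
    with continuous covariates (age in years, NIHSS score, CDS 0-3)."""
    z = (coef["intercept"] + coef["age"] * age
         + coef["nihss"] * nihss + coef["cds"] * cds)
    return 1.0 / (1.0 + math.exp(-z))
```

Unlike a banded score, two patients with different ages or NIHSS values inside the same band receive different probabilities, which is the source of the reported AUC gain.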
Kukla, G.; Gavin, J.
1994-05-01
This report was prepared at the Lamont-Doherty Geological Observatory of Columbia University at Palisades, New York, under subcontract to Pacific Northwest Laboratory (PNL); it is part of a larger project of global climate studies which supports site characterization work required for the selection of a potential high-level nuclear waste repository and forms part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work under the PASS Program is currently focusing on the proposed site at Yucca Mountain, Nevada, and is under the overall direction of the Yucca Mountain Project Office, US Department of Energy, Las Vegas, Nevada. The final results of the PNL project will provide input to global atmospheric models designed to test specific climate scenarios which will be used in the site-specific modeling work of others. The primary purpose of the data bases compiled and of the astronomic predictive models is to aid in the estimation of the probabilities of future climate states. The results will be used by two other teams working on the global climate study under contract to PNL, located at the University of Maine in Orono, Maine, and the Applied Research Corporation in College Station, Texas. This report presents the results of the third year's work on the global climate change models and the data bases describing past climates.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
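The BPT distribution with mean μ and aperiodicity α coincides with the inverse Gaussian distribution with mean μ and shape parameter μ/α², so its hazard function can be sketched with SciPy's invgauss. The parameter mapping below is a standard identity, not the authors' code:

```python
from scipy import stats

def bpt_hazard(t, mu, alpha):
    """Hazard (instantaneous failure rate of survivors) of the Brownian
    passage time distribution with mean mu and aperiodicity alpha, via the
    identity BPT(mu, alpha) = inverse Gaussian(mean mu, shape mu/alpha**2);
    in SciPy's parameterization this is invgauss(alpha**2, scale=mu/alpha**2)."""
    dist = stats.invgauss(alpha**2, scale=mu / alpha**2)
    return dist.pdf(t) / dist.sf(t)
```

With α = 0.5 the hazard crosses the mean rate 1/μ near t = μ/2 and levels off near 2/μ, matching the behavior described in the abstract (the asymptotic hazard of the BPT is 1/(2μα²)).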
3D model retrieval using probability density-based shape descriptors.
Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis
2009-06-01
We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories.
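A minimal sketch of a density-based descriptor: estimate the density of local surface features with a KDE and sample it on a fixed grid, so every shape yields a fixed-length vector that can be compared across models. This uses SciPy's gaussian_kde rather than the paper's KDE-plus-fast-Gauss-transform implementation, and the grid is an assumed input:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_descriptor(features, grid):
    """Build a fixed-length shape descriptor by evaluating a kernel density
    estimate of local surface features (n_samples x n_dims) at the points of
    a fixed evaluation grid (n_grid x n_dims)."""
    kde = gaussian_kde(np.asarray(features).T)  # KDE over feature space
    return kde(np.asarray(grid).T)              # sampled density values
```

Because the descriptor is a smooth density sampled at fixed locations, it is insensitive to the ordering of surface samples and to small perturbations, which is the invariance property the abstract exploits.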
Fisicaro, E; Braibanti, A; Sambasiva Rao, R; Compari, C; Ghiozzi, A; Nageswara Rao, G
1998-04-01
An algorithm is proposed for the estimation of binding parameters for the interaction of biologically important macromolecules with smaller ones from electrometric titration data. The mathematical model is based on the representation of equilibria in terms of probability concepts of statistical molecular thermodynamics. The refinement of equilibrium concentrations of the components and estimation of binding parameters (log site constant and cooperativity factor) is performed using singular value decomposition, a chemometric technique which overcomes the general obstacles due to near singularity. The present software is validated with a number of biochemical systems of varying number of sites and cooperativity factors. The effect of random errors of realistic magnitude in experimental data is studied using the simulated primary data for some typical systems. The safe area within which approximate binding parameters ensure convergence has been reported for the non-self starting optimization algorithms.
A generative probability model of joint label fusion for multi-atlas based brain segmentation.
Wu, Guorong; Wang, Qian; Zhang, Daoqiang; Nie, Feiping; Huang, Heng; Shen, Dinggang
2014-08-01
Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling
Sildenafil normalizes bowel transit in preclinical models of constipation
Sharman, Sarah K.; Islam, Bianca N.; Hou, Yali; Usry, Margaux; Bridges, Allison; Singh, Nagendra; Sridhar, Subbaramiah; Rao, Satish
2017-01-01
Guanylyl cyclase-C (GC-C) agonists increase cGMP levels in the intestinal epithelium to promote secretion. This process underlies the utility of exogenous GC-C agonists such as linaclotide for the treatment of chronic idiopathic constipation (CIC) and irritable bowel syndrome with constipation (IBS-C). Because GC-C agonists have limited use in pediatric patients, there is a need for alternative cGMP-elevating agents that are effective in the intestine. The present study aimed to determine whether the PDE-5 inhibitor sildenafil has similar effects as linaclotide on preclinical models of constipation. Oral administration of sildenafil caused increased cGMP levels in mouse intestinal epithelium demonstrating that blocking cGMP-breakdown is an alternative approach to increase cGMP in the gut. Both linaclotide and sildenafil reduced proliferation and increased differentiation in colon mucosa, indicating common target pathways. The homeostatic effects of cGMP required gut turnover since maximal effects were observed after 3 days of treatment. Neither linaclotide nor sildenafil treatment affected intestinal transit or water content of fecal pellets in healthy mice. To test the effectiveness of cGMP elevation in a functional motility disorder model, mice were treated with dextran sulfate sodium (DSS) to induce colitis and were allowed to recover for several weeks. The recovered animals exhibited slower transit, but increased fecal water content. An acute dose of sildenafil was able to normalize transit and fecal water content in the DSS-recovery animal model, and also in loperamide-induced constipation. The higher fecal water content in the recovered animals was due to a compromised epithelial barrier, which was normalized by sildenafil treatment. Taken together our results show that sildenafil can have similar effects as linaclotide on the intestine, and may have therapeutic benefit to patients with CIC, IBS-C, and post-infectious IBS. PMID:28448580
Target normal sheath acceleration analytical modeling, comparative study and developments
Perego, C.; Batani, D.; Zani, A.; Passoni, M.
2012-02-15
Ultra-intense laser interaction with solid targets appears to be an extremely promising technique to accelerate ions up to several MeV, producing beams that exhibit interesting properties for many foreseen applications. Nowadays, most of all the published experimental results can be theoretically explained in the framework of the target normal sheath acceleration (TNSA) mechanism proposed by Wilks et al. [Phys. Plasmas 8(2), 542 (2001)]. As an alternative to numerical simulation various analytical or semi-analytical TNSA models have been published in the latest years, each of them trying to provide predictions for some of the ion beam features, given the initial laser and target parameters. However, the problem of developing a reliable model for the TNSA process is still open, which is why the purpose of this work is to enlighten the present situation of TNSA modeling and experimental results, by means of a quantitative comparison between measurements and theoretical predictions of the maximum ion energy. Moreover, in the light of such an analysis, some indications for the future development of the model proposed by Passoni and Lontano [Phys. Plasmas 13(4), 042102 (2006)] are then presented.
The international normalized ratio and uncertainty. Validation of a probabilistic model.
Critchfield, G C; Bennett, S T
1994-07-01
The motivation behind the creation of the International Normalized Ratio (INR) was to improve interlaboratory comparison for patients on anticoagulation therapy. In principle, a laboratory that reports the prothrombin time (PT) as an INR can standardize its PT measurements to an international reference thromboplastin. Using probability theory, the authors derived the equation for the probability distribution of the INR based on the PT, the International Sensitivity Index (ISI), and the geometric mean PT of the reference population. With Monte Carlo and numeric integration techniques, the model is validated on data from three different laboratories. The model allows computation of confidence intervals for the INR as a function of PT, ISI, and reference mean. The probabilistic model illustrates that confidence in INR measurements degrades for higher INR values. This occurs primarily as a result of amplification of between-run measurement errors in the PT, which is inherent in the mathematical transformation from the PT to the INR. The probabilistic model can be used by any laboratory to study the reliability of its own INR for any measured PT. This framework provides better insight into the problems of monitoring oral anticoagulation.
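The INR is computed as (PT / geometric-mean PT)^ISI, and the paper's point, that between-run PT error is amplified at higher INR, can be sketched with a small Monte Carlo. The between-run coefficient of variation on the PT is an assumed input, not a value from the paper:

```python
import numpy as np

def inr_confidence_interval(pt, isi, pt_geomean, cv_pt=0.03,
                            n=100_000, seed=0):
    """Monte Carlo 95% interval for the INR = (PT / pt_geomean)**ISI, given
    an assumed between-run coefficient of variation cv_pt on the measured PT."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(pt, cv_pt * pt, n)          # simulated PT measurements
    inrs = (pts / pt_geomean) ** isi             # transform each to an INR
    return np.percentile(inrs, [2.5, 97.5])
```

Because the power transformation stretches the upper range, the same relative PT error produces a wider INR interval at higher PT values, which is the degradation of confidence the model quantifies.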
NASA Astrophysics Data System (ADS)
Kim, Shaun Sang Ho; Hughes, Justin Douglas; Chen, Jie; Dutta, Dushmanta; Vaze, Jai
2015-11-01
A calibration method is presented that uses a sub-period resampling method to estimate probability distributions of performance for different parameter sets. Where conventional calibration methods implicitly identify the best performing parameterisations on average, the new method looks at the consistency of performance during sub-periods. The method is implemented with the conceptual river reach algorithms within the Australian Water Resources Assessments River (AWRA-R) model in the Murray-Darling Basin, Australia. The new method is tested for 192 reaches in a cross-validation scheme and results are compared to a traditional split-sample calibration-validation implementation. This is done to evaluate the new technique's ability to predict daily streamflow outside the calibration period. The new calibration method produced parameterisations that performed better in validation periods than optimum calibration parameter sets for 103 reaches and produced the same parameterisations for 35 reaches. The method showed a statistically significant improvement to predictive performance and potentially provides more rational flux terms over traditional split-sample calibration methods. Particular strengths of the proposed calibration method is that it avoids extra weighting towards rare periods of good agreement and also prevents compensating biases through time. The method can be used as a diagnostic tool to evaluate stochasticity of modelled systems and used to determine suitable model structures of different time-series models. Although the method is demonstrated using a hydrological model, the method is not limited to the field of hydrology and could be adopted for many different time-series modelling applications.
NASA Astrophysics Data System (ADS)
Zhao, Tongtiegang; Schepen, Andrew; Wang, Q. J.
2016-10-01
The Bayesian joint probability (BJP) modelling approach is used operationally to produce seasonal (three-month-total) ensemble streamflow forecasts in Australia. However, water resource managers are calling for more informative sub-seasonal forecasts. Taking advantage of BJP's capability of handling multiple predictands, ensemble forecasting of sub-seasonal to seasonal streamflows is investigated for 23 catchments around Australia. Using antecedent streamflow and climate indices as predictors, monthly forecasts are developed for the three-month period ahead. Forecast reliability and skill are evaluated for the period 1982-2011 using a rigorous leave-five-years-out cross validation strategy. BJP ensemble forecasts of monthly streamflow volumes are generally reliable in ensemble spread. Forecast skill, relative to climatology, is positive in 74% of cases in the first month, decreasing to 57% and 46% respectively for streamflow forecasts for the final two months of the season. As forecast skill diminishes with increasing lead time, the monthly forecasts approach climatology. Seasonal forecasts accumulated from monthly forecasts are found to be similarly skilful to forecasts from BJP models based on seasonal totals directly. The BJP modelling approach is demonstrated to be a viable option for producing ensemble time-series sub-seasonal to seasonal streamflow forecasts.
Towards smart prosthetic hand: Adaptive probability-based skeletal muscle fatigue model.
Kumar, Parmod; Sebastian, Anish; Potluri, Chandrasekhar; Urfer, Alex; Naidu, D; Schoen, Marco P
2010-01-01
Skeletal muscle force can be estimated using surface electromyographic (sEMG) signals. Usually, the surface location for the sensors is near the respective muscle motor unit points. Skeletal muscles generate a spatial EMG signal, which causes cross talk between different sEMG signal sensors. In this study, an array of three sEMG sensors is used to capture the information of muscle dynamics in terms of sEMG signals. The recorded sEMG signals are filtered using optimized nonlinear half-Gaussian Bayesian filter parameters, and the muscle force signal is filtered using a Chebyshev type-II filter. The filter optimization is accomplished using Genetic Algorithms. Three discrete-time state-space muscle fatigue models are obtained using system identification and modal transformation for the three sets of sensors for a single motor unit. The outputs of these three muscle fatigue models are fused with a probabilistic Kullback Information Criterion (KIC) for model selection. The final fused output is estimated with an adaptive probability of KIC, which provides improved force estimates.
Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2
MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.
1999-11-01
This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model (exponential or Weibull) is fit.
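For the shifted-exponential option, the tail fit above the threshold reduces to a one-line maximum likelihood estimate; this is a sketch of the idea, not the FITS routine itself:

```python
import numpy as np

def fit_shifted_exponential(data, x_low):
    """Fit a shifted exponential to the exceedances above a lower-bound
    threshold x_low; the MLE of the rate is 1 / (mean exceedance)."""
    data = np.asarray(data, dtype=float)
    exceedances = data[data > x_low] - x_low
    return 1.0 / exceedances.mean()
```

Restricting the fit to exceedances is what lets a simple two-parameter form track the upper tail without being distorted by the bulk of the data.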
Adaptive Phase I Clinical Trial Design Using Markov Models for Conditional Probability of Toxicity
Fernandes, Laura L.; Taylor, Jeremy M.G.; Murray, Susan
2016-01-01
Many phase I trials in oncology involve multiple dose administrations on the same patient over multiple cycles, with a typical cycle lasting three weeks and about six cycles per patient, with the goal of finding the maximum tolerated dose (MTD) and studying the dose-toxicity relationship. A patient's dose is unchanged over the cycles, and the data are reduced to a binary endpoint, the occurrence of a toxicity, analyzed either by considering the toxicity from the first dose or from any cycle on the study. In this paper an alternative approach is presented that allows an assessment of toxicity from each cycle and dose variations for a patient over cycles. A Markov model for the conditional probability of toxicity on any cycle given no toxicity in previous cycles is formulated as a function of the current and previous doses. The extra information from each cycle provides more precise estimation of the dose-toxicity relationship. Simulation results demonstrating gains in using the Markov model as compared to analyses of a single binary outcome are presented. Methods for utilizing the Markov model to conduct a phase I study, including choices for selecting doses for the next cycle for each patient, are developed and presented via simulation. PMID:26098782
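The key bookkeeping step, turning per-cycle conditional toxicity probabilities into an overall toxicity probability, can be sketched as follows (the dose-dependent Markov model in the paper supplies the conditional probabilities; here they are simply given as inputs):

```python
def cumulative_toxicity(conditional_probs):
    """Overall probability of at least one toxicity by the end of the trial,
    given per-cycle conditional probabilities of toxicity (each conditional
    on no toxicity in earlier cycles): 1 - prod(1 - p_k)."""
    survival = 1.0
    for p in conditional_probs:
        survival *= (1.0 - p)  # probability of remaining toxicity-free
    return 1.0 - survival
```

For example, six cycles each with a 10% conditional toxicity probability give an overall toxicity probability of 1 − 0.9⁶ ≈ 0.47, which is why per-cycle modeling extracts more information than a single binary outcome.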
Probability-changing cluster algorithm for two-dimensional XY and clock models
NASA Astrophysics Data System (ADS)
Tomita, Yusuke; Okabe, Yutaka
2002-05-01
We extend the newly proposed probability-changing cluster (PCC) Monte Carlo algorithm to the study of systems with the vector order parameter. Wolff's idea of the embedded cluster formalism is used for assigning clusters. The Kosterlitz-Thouless (KT) transitions for the two-dimensional (2D) XY and q-state clock models are studied by using the PCC algorithm. Combined with the finite-size scaling analysis based on the KT form of the correlation length, ξ ∼ exp(c/√(T/T_KT − 1)), we determine the KT transition temperature and the decay exponent η as T_KT = 0.8933(6) and η = 0.243(4) for the 2D XY model. We investigate two transitions of the KT type for the 2D q-state clock models with q = 6, 8, 12 and systematically confirm the prediction of η = 4/q² at T_1, the low-temperature critical point between the ordered and XY-like phases.
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-11-01
The six dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order parameter probability distribution for six dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to probability function obtained numerically at the finite size critical point.
Stacey, W.M.
1992-12-01
A new computational model for neutral particle transport in the outer regions of a diverted tokamak plasma chamber is presented. The model is based on the calculation of transmission and escape probabilities using first-flight integral transport theory and the balancing of fluxes across the surfaces bounding the various regions. The geometrical complexity of the problem is included in precomputed probabilities which depend only on the mean free path of the region.
Kausar, A S M Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan
2014-01-01
Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving the goal of optimum coverage, which is a key part in designing wireless networks. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included for making a more accurate ray tracing technique. Here, the rough surface scattering is represented by microfacets, for which it becomes possible to compute the scattering field in all possible directions. New optimization techniques, like dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and making the ray tracing technique more efficient. In addition, a probability-based coverage optimization algorithm is combined with the ray tracing technique to make a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting the unnecessary objects for ray tracing using the DQS technique and by decreasing the ray-object intersection time using the COF technique. On the other hand, the coverage optimization algorithm is based on probability theory, which finds out the minimum number of transmitters and their corresponding positions in order to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve on those of existing algorithms. For the verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by the experimental results.
Modeling Normal Shock Velocity Curvature Relation for Heterogeneous Explosives
NASA Astrophysics Data System (ADS)
Yoo, Sunhee; Crochet, Michael; Pemberton, Steve
2015-06-01
The normal shock velocity and curvature, Dn(κ), relation on a detonation shock surface has been an important functional quantity to measure to understand the shock strength exerted against the material interface between a main explosive charge and the case of an explosive munition. The Dn(κ) relation is considered an intrinsic property of an explosive, and can be experimentally deduced by rate stick tests at various charge diameters. However, experimental measurements of the Dn(κ) relation for heterogeneous explosives such as PBXN-111 are challenging due to the non-smoothness and asymmetry usually observed in the experimental streak records of explosion fronts. Out of the many possibilities, the asymmetric character may be attributed to the heterogeneity of the explosives, a hypothesis which begs two questions: (1) is there any simple hydrodynamic model that can explain such an asymmetric shock evolution, and (2) what statistics can be derived for the asymmetry using simulations with defined structural heterogeneity in the unreacted explosive? Saenz, Taylor and Stewart studied constitutive models for derivation of the Dn(κ) relation on porous 'homogeneous' explosives and carried out simulations in a spherical coordinate frame. In this paper, we extend their model to account for 'heterogeneity' and present shock evolutions in heterogeneous explosives using 2-D hydrodynamic simulations with some statistical examination. (96TW-2015-0004)
A radiation damage repair model for normal tissues
NASA Astrophysics Data System (ADS)
Partridge, Mike
2008-07-01
A cellular Monte Carlo model describing radiation damage and repair in normal epithelial tissues is presented. The deliberately simplified model includes cell cycling, cell motility and radiation damage response (cell cycle arrest and cell death) only. Results demonstrate that the model produces a stable equilibrium system for mean cell cycle times in the range 24-96 h. Simulated irradiation of these stable equilibrium systems produced a range of responses that are shown to be consistent with experimental and clinical observation, including (i) re-epithelialization of radiation-induced lesions by a mixture of cell migration into the wound and repopulation at the periphery; (ii) observed radiosensitivity that is quantitatively consistent with both the rate of induction of irreparable DNA lesions and, independently, with the observed acute oral and pharyngeal mucosal reactions to radiotherapy; (iii) an observed time between irradiation and maximum toxicity that is consistent with experimental data for skin; (iv) quantitatively accurate predictions of low-dose hyper-radiosensitivity; (v) Gompertzian repopulation for very small lesions (~2000 cells) and (vi) a linear rate of re-epithelialization of 5-10 µm h⁻¹ for large lesions (>15 000 cells).
Bansal, Sonal; Miao, Xijiang; Adams, Michael W. W.; Prestegard, James H.; Valafar, Homayoun
2009-01-01
A method of identifying the best structural model for a protein of unknown structure from a list of structural candidates using unassigned 15N-1H residual dipolar coupling (RDC) data and probability density profile analysis (PDPA) is described. Ten candidate structures have been obtained for the structural genomics target protein PF2048.1 using ROBETTA. 15N-1H residual dipolar couplings have been measured from NMR spectra of the protein in two alignment media and these data have been analyzed using PDPA to rank the models in terms of their ability to represent the actual structure. A number of advantages in using this method to characterize a protein structure become apparent. RDCs can easily and rapidly be acquired, and without the need for assignment, the cost and duration of data acquisition is greatly reduced. The approach is quite robust with respect to imprecise and missing data. In the case of PF2048.1, a 79 residue protein, only 58 and 55 of the total RDC data were observed. The method can accelerate structure determination at higher resolution using traditional NMR spectroscopy by providing a starting point for the addition of NOEs and other NMR structural data. PMID:18321742
Carruthers, Robert L; Chitnis, Tanuja; Healy, Brian C
2014-05-01
JCV serologic status is used to determine PML risk in natalizumab-treated patients. Given two cases of natalizumab-associated PML in JCV sero-negative patients and two publications that question the false negative rate of the JCV serologic test, clinicians may question whether our understanding of PML risk is adequate. Given that there is no gold standard for diagnosing previous JCV exposure, the test characteristics of the JCV serologic test are unknowable. We propose a model of PML risk in JCV sero-negative natalizumab patients. Using the numbers of JCV sero-positive and -negative patients from a study of PML risk by JCV serologic status (sero-positive: 13,950; sero-negative: 11,414), we apply a range of sensitivities and specificities in order to calculate the number of JCV-exposed but JCV sero-negative patients (false negatives). We then apply a range of rates of developing PML in sero-negative patients to calculate the expected number of PML cases. Using the binomial function, we calculate the probability of a given number of JCV sero-negative PML cases. With this model, one has a means to establish a threshold number of JCV sero-negative natalizumab-associated PML cases at which it becomes improbable that our understanding of PML risk in JCV sero-negative patients is adequate.
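The risk calculation described can be sketched with the binomial function directly. Only the sero-positive and sero-negative counts come from the abstract; the sensitivity and per-patient PML risk below are illustrative assumptions, not figures from the study.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

sero_pos, sero_neg = 13950, 11414
sensitivity = 0.97       # assumed test sensitivity (illustrative)
pml_risk = 1 / 1000      # assumed PML risk per exposed-but-sero-negative patient

# exposed patients who test negative (false negatives), assuming all
# observed positives are true positives (i.e., perfect specificity)
false_neg = round(sero_pos * (1 - sensitivity) / sensitivity)

# probability of seeing at least k PML cases among the false negatives
for k in (1, 2, 3):
    print(k, binom_tail(false_neg, pml_risk, k))
```

Sweeping `sensitivity` and `pml_risk` over ranges reproduces the kind of threshold analysis the authors describe.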
Arterberry, Martha E; Bornstein, Marc H; Haynes, O Maurice
2011-04-01
Two analytical procedures for identifying young children as categorizers, the Monte Carlo Simulation and the Probability Estimate Model, were compared. Using a sequential touching method, children aged 12, 18, 24, and 30 months were given seven object sets representing different levels of categorical classification. From their touching performance, the probability that children were categorizing was then determined independently using Monte Carlo Simulation and the Probability Estimate Model. The two analytical procedures resulted in different percentages of children being classified as categorizers. Results using the Monte Carlo Simulation were more consistent with group-level analyses than results using the Probability Estimate Model. These findings recommend using the Monte Carlo Simulation for determining individual categorizer classification. Copyright © 2011 Elsevier Inc. All rights reserved.
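A Monte Carlo categorizer test of the kind compared above can be sketched by simulating random touch sequences and asking how often chance alone produces a mean run length at least as long as the one observed. The sequence length, number of categories, and observed value below are illustrative assumptions, not the study's parameters.

```python
import random

def mean_run_length(seq):
    """Average length of runs of consecutive same-category touches."""
    runs, length = [], 1
    for a, b in zip(seq, seq[1:]):
        if a == b:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return sum(runs) / len(runs)

def p_value(observed_mrl, n_touches=20, n_categories=2, n_sims=10_000, seed=1):
    """Monte Carlo p-value: fraction of random touch sequences whose mean
    run length is at least the observed value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        seq = [rng.randrange(n_categories) for _ in range(n_touches)]
        if mean_run_length(seq) >= observed_mrl:
            hits += 1
    return hits / n_sims

print(p_value(3.0))
```

A small p-value suggests the child's touching is more clustered by category than random touching would produce.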
A Mechanistic Beta-Binomial Probability Model for mRNA Sequencing Data.
Smith, Gregory R; Birtwistle, Marc R
2016-01-01
A main application of mRNA sequencing (mRNAseq) is determining lists of differentially-expressed genes (DEGs) between two or more conditions. Several software packages exist to produce DEGs from mRNAseq data, but they typically yield different DEGs, sometimes markedly so. The underlying probability model used to describe mRNAseq data is central to deriving DEGs, and not surprisingly most packages use different models and assumptions to analyze mRNAseq data. Here, we propose a mechanistic justification to model mRNAseq as a binomial process, with data from technical replicates given by a binomial distribution and data from biological replicates well-described by a beta-binomial distribution. We demonstrate good agreement of this model with two large datasets. We show that an emergent feature of the beta-binomial distribution, given parameter regimes typical for mRNAseq experiments, is the well-known quadratic polynomial scaling of variance with the mean. The so-called dispersion parameter controls this scaling, and our analysis suggests that the dispersion parameter is a continually decreasing function of the mean, as opposed to current approaches that impose an asymptotic value on the dispersion parameter at moderate mean read counts. We show how this leads current approaches to overestimate variance for moderately to highly expressed genes, which inflates false negative rates. Describing mRNAseq data with a beta-binomial distribution may thus be preferred, since its parameters are relatable to the mechanistic underpinnings of the technique and may improve the consistency of DEG analysis across software packages, particularly for moderately to highly expressed genes.
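The quadratic mean-variance scaling described above follows directly from the beta-binomial moments. A sketch with illustrative shape parameters (the values of a, b, and n are assumptions, not fitted values from the paper):

```python
def beta_binomial_moments(n, a, b):
    """Exact mean and variance of a Beta-Binomial(n, a, b) read count:
    var = n*p*(1-p)*(1 + (n-1)*rho), with p = a/(a+b), rho = 1/(a+b+1)."""
    p = a / (a + b)
    mean = n * p
    rho = 1.0 / (a + b + 1.0)      # over-dispersion (intra-class correlation)
    var = n * p * (1 - p) * (1 + (n - 1) * rho)
    return mean, var

# for a fixed expression fraction a/(a+b) and growing sequencing depth n,
# the variance-to-mean ratio grows roughly linearly in the mean,
# i.e. variance scales quadratically with the mean
for n in (10**3, 10**4, 10**5):
    mean, var = beta_binomial_moments(n, a=2.0, b=200.0)
    print(f"mean={mean:10.1f}  var/mean={var / mean:10.1f}")
```

In the binomial limit (rho → 0) the ratio collapses to 1 - p, recovering the technical-replicate behaviour the abstract describes.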
Model assisted probability of detection for a guided waves based SHM technique
NASA Astrophysics Data System (ADS)
Memmolo, V.; Ricci, F.; Maio, L.; Boffa, N. D.; Monaco, E.
2016-04-01
Guided wave (GW) Structural Health Monitoring (SHM) makes it possible to assess the health of aerostructures thanks to its great sensitivity to the appearance of delaminations and/or debondings. Because of the many complexities affecting wave propagation in composites, an efficient GW SHM system requires effective quantification combined with a rigorous statistical evaluation procedure. The Probability of Detection (POD) approach is a commonly accepted measurement method to quantify NDI results, and it can be effectively extended to an SHM context. However, it requires a very complex setup and many coupons. When a rigorous correlation with measurements is adopted, Model Assisted POD (MAPOD) is an efficient alternative to classic methods. This paper is concerned with the identification of small emerging delaminations in composite structural components. An ultrasonic GW tomography method for impact damage detection in composite plate-like structures, recently developed by the authors, is investigated, providing the basis for a more complex MAPOD analysis. Experimental tests carried out on a typical wing composite structure demonstrated the effectiveness of the modeling approach in detecting damage with the tomographic algorithm. Environmental disturbances, which affect signal waveforms and consequently damage detection, are considered by simulating mathematical noise in the modeling stage. A statistical method is used for an effective decision-making procedure. A Damage Index approach is implemented as the metric to interpret the signals collected from a distributed sensor network, and a subsequent graphic interpolation is carried out to reconstruct the damage appearance. A model validation and first reliability assessment results are provided, in view of system performance quantification and optimization.
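A minimal POD sketch, assuming a linear signal-response model with Gaussian noise for the Damage Index; this is not necessarily the model the authors use, and every parameter value below is invented for illustration:

```python
from math import erf, sqrt

def pod(size, slope=2.0, intercept=0.1, sigma=0.5, threshold=1.0):
    """POD(a) under a linear signal-response model: the damage index is
    DI = slope*a + intercept + N(0, sigma^2), and a flaw of size a is
    'detected' when DI exceeds the decision threshold."""
    mu = slope * size + intercept
    return 0.5 * (1.0 + erf((mu - threshold) / (sigma * sqrt(2.0))))

# a90: the smallest flaw size detected with 90% probability, by bisection
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if pod(mid) < 0.9:
        lo = mid
    else:
        hi = mid
a90 = hi
print(f"a90 = {a90:.3f}")
```

In a MAPOD workflow the `pod` curve would be calibrated against simulated (and a few measured) damage-index responses rather than assumed parameters.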
de Mul, Frits F M; Blaauw, Judith; Aarnoudse, Jan G; Smit, Andries J; Rakhorst, Gerhard
2007-01-01
We present a physical model to describe iontophoresis time recordings. The model is a combination of monodimensional material diffusion and decay, probably due to transport by blood flow. It has four adjustable parameters, the diffusion coefficient, the decay constant, the height of the response, and the shot saturation constant, a parameter representing the relative importance of subsequent shots (in case of saturation). We test the model with measurements of blood perfusion in the capillary bed of the fingers of women who recently had preeclampsia and in women with a history of normal pregnancy. From the fits to the measurements, we conclude that the model provides a useful physical description of the iontophoresis process.
A generic probability based model to derive regional patterns of crops in time and space
NASA Astrophysics Data System (ADS)
Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan
2015-04-01
Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influence soil erosion, and contribute substantially to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to site conditions and economic boundary settings as well as the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and year depends on (a) the suitability of the land for cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and (b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g. a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated against the statistics reported by the joint EU/CAPRI database. The assessment is given at NUTS2 level using per cent bias as a measure, with a threshold of 15% as minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error prone as crops that occupy only minor parts of the unit. However, roughly 40% still show an absolute per cent bias above the 15% threshold.
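The per-cell disaggregation rule described above (selection probability proportional to suitability, masked by rotation feasibility against the previous year's crop) can be sketched as follows. The crops, suitabilities, and the single rotation rule are invented for illustration; the actual model is implemented in R and PostGIS.

```python
import random

def allocate_crop(suitability, rotation_ok, previous_crop, rng):
    """Pick a crop for one grid cell and year: selection probability is
    proportional to suitability, masked by rotation feasibility with
    respect to the previous year's crop."""
    weights = {c: s for c, s in suitability.items()
               if rotation_ok.get((previous_crop, c), True)}
    r = rng.random() * sum(weights.values())
    for crop, w in weights.items():
        r -= w
        if r <= 0.0:
            return crop
    return crop  # guard against floating-point leftovers

# illustrative suitabilities and one rotation rule (no wheat after wheat)
suit = {"wheat": 0.6, "maize": 0.3, "rapeseed": 0.1}
rot = {("wheat", "wheat"): False}
rng = random.Random(3)
sequence, prev = [], None
for year in range(5):
    prev = allocate_crop(suit, rot, prev, rng)
    sequence.append(prev)
print(sequence)
```

Repeating this per cell, with suitabilities varying in space, yields synthetic crop sequences that can then be aggregated and checked against the regional statistics.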
Influencing the Probability for Graduation at Four-Year Institutions: A Multi-Model Analysis
ERIC Educational Resources Information Center
Cragg, Kristina M.
2009-01-01
The purpose of this study is to identify student and institutional characteristics that influence the probability for graduation. The study delves further into the probability for graduation by examining how far the student deviates from the institutional mean with respect to academics and affordability; this concept is referred to as the "match."…
An EEG-Based Fuzzy Probability Model for Early Diagnosis of Alzheimer's Disease.
Chiang, Hsiu-Sen; Pao, Shun-Chi
2016-05-01
Alzheimer's disease is a degenerative brain disease that results in cardinal memory deterioration and significant cognitive impairments. The early treatment of Alzheimer's disease can significantly reduce deterioration. Early diagnosis is difficult, and early symptoms are frequently overlooked. While much of the literature focuses on disease detection, the use of electroencephalography (EEG) in Alzheimer's diagnosis has received relatively little attention. This study combines the fuzzy and associative Petri net methodologies to develop a model for the effective and objective detection of Alzheimer's disease. Differences in EEG patterns between normal subjects and Alzheimer patients are used to establish prediction criteria for Alzheimer's disease, potentially providing physicians with a reference for early diagnosis, allowing for early action to delay the disease progression.
Ezawa, Kiyoshi
2016-08-11
Insertions and deletions (indels) account for more nucleotide differences between two related DNA sequences than substitutions do, and thus it is imperative to develop a stochastic evolutionary model that enables us to reliably calculate the probability of the sequence evolution through indel processes. Recently, indel probabilistic models are mostly based on either hidden Markov models (HMMs) or transducer theories, both of which give the indel component of the probability of a given sequence alignment as a product of either probabilities of column-to-column transitions or block-wise contributions along the alignment. However, it is not a priori clear how these models are related with any genuine stochastic evolutionary model, which describes the stochastic evolution of an entire sequence along the time-axis. Moreover, currently none of these models can fully accommodate biologically realistic features, such as overlapping indels, power-law indel-length distributions, and indel rate variation across regions. Here, we theoretically dissect the ab initio calculation of the probability of a given sequence alignment under a genuine stochastic evolutionary model, more specifically, a general continuous-time Markov model of the evolution of an entire sequence via insertions and deletions. Our model is a simple extension of the general "substitution/insertion/deletion (SID) model". Using the operator representation of indels and the technique of time-dependent perturbation theory, we express the ab initio probability as a summation over all alignment-consistent indel histories. Exploiting the equivalence relations between different indel histories, we find a "sufficient and nearly necessary" set of conditions under which the probability can be factorized into the product of an overall factor and the contributions from regions separated by gapless columns of the alignment, thus providing a sort of generalized HMM. The conditions distinguish evolutionary models with
NASA Astrophysics Data System (ADS)
Peng, Guanghan; Liu, Changqing; Tuo, Manxian
2015-10-01
In this paper, a new lattice model incorporating a traffic interruption probability term is proposed for a two-lane traffic system. The linear stability condition and the mKdV equation are derived from linear stability analysis and nonlinear analysis, respectively, by introducing the traffic interruption probability into the optimal current for a two-lane freeway. Numerical simulation shows that a traffic interruption probability with a high reaction coefficient can efficiently improve the stability of two-lane traffic flow when traffic interruption occurs with lane changing.
A Tool for Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.; Guzzetti, Fausto
2014-05-01
Triggers such as earthquakes or heavy rainfall can result in hundreds to thousands of landslides occurring across a region within a short space of time. These landslides can in turn result in blockages across the road network, impacting how people move about a region. Here, we show the development and application of a semi-stochastic model to simulate how landslides intersect with road networks during a triggered landslide event. This was performed by creating 'synthetic' triggered landslide inventory maps and overlaying these with a road network map to identify where road blockages occur. Our landslide-road model has been applied to two regions: (i) the Collazzone basin (79 km2) in Central Italy where 422 landslides were triggered by rapid snowmelt in January 1997, (ii) the Oat Mountain quadrangle (155 km2) in California, USA, where 1,350 landslides were triggered by the Northridge Earthquake (M = 6.7) in January 1994. For both regions, detailed landslide inventory maps for the triggered events were available, in addition to maps of landslide susceptibility and road networks of primary, secondary and tertiary roads. To create 'synthetic' landslide inventory maps, landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL. The number of landslide areas selected was based on the observed density of landslides (number of landslides km-2) in the triggered event inventories. Landslide shapes were approximated as ellipses, where the ratio of the major and minor axes varies with AL. Landslides were then dropped over the region semi-stochastically, conditioned by a landslide susceptibility map, resulting in a synthetic landslide inventory map. The originally available landslide susceptibility maps did not take into account susceptibility changes in the immediate vicinity of roads, therefore
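The area-sampling step for the synthetic inventories can be sketched with the stdlib gamma sampler, using the fact that a / Gamma(rho) is inverse-gamma distributed. The shape rho = 1.4 matches the stated power-law decay of about -2.4 (since the pdf tail goes as A^-(rho+1)); the scale and rollover values are illustrative numbers of the order reported in the landslide-statistics literature, not necessarily those used in this model.

```python
import random

def sample_landslide_areas(n, rho=1.4, a=1.28e-3, s=-1.32e-4, seed=42):
    """Draw n landslide areas (km^2) from a three-parameter inverse-gamma
    distribution: (A - s) ~ InvGamma(shape rho, scale a), giving a power-law
    decay ~A^-(rho+1) for large areas and a rollover for small ones."""
    rng = random.Random(seed)
    # if G ~ Gamma(shape rho, scale 1), then a / G ~ InvGamma(rho, scale a)
    return [s + a / rng.gammavariate(rho, 1.0) for _ in range(n)]

areas = sample_landslide_areas(10_000)
median = sorted(areas)[len(areas) // 2]
print(f"median area = {median:.2e} km^2, max = {max(areas):.2e} km^2")
```

The number of draws per synthetic event would then be set from the observed landslide density, and each area turned into an ellipse dropped according to the susceptibility map.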
Gomberg, J.; Felzer, K.
2008-01-01
We have used observations from Felzer and Brodsky (2006) of the variation of linear aftershock densities (i.e., aftershocks per unit length) with the magnitude of and distance from the main shock fault to derive constraints on how the probability of a main shock triggering a single aftershock at a point, P(r, D), varies as a function of distance, r, and main shock rupture dimension, D. We find that P(r, D) becomes independent of D as the triggering fault is approached. When r ≫ D, P(r, D) scales as D^m with m ≈ 2 and decays with distance approximately as r^(-n) with n = 2, with a possible change to r^(-(n-1)) at r > h, where h is the closest distance between the fault and the boundaries of the seismogenic zone. These constraints may be used to test hypotheses about the types of deformations and mechanisms that trigger aftershocks. We illustrate this using dynamic deformations (i.e., radiated seismic waves) and a posited proportionality with P(r, D). Deformation characteristics examined include peak displacements, peak accelerations and velocities (proportional to strain rates and strains, respectively), and two measures that account for cumulative deformations. Our model indicates that either peak strains alone or strain rates averaged over the duration of rupture may be responsible for aftershock triggering.
Volkov, M. V.; Ostrovsky, V. N.
2007-02-15
Multistate generalizations of the Landau-Zener model are studied by summing the entire perturbation theory series. A technique for analysis of the series is developed. Analytical expressions for the probabilities of survival on the diabatic potential curves with extreme slope are proved. Degenerate situations, in which there are several potential curves with extreme slope, are also considered, and expressions for some state-to-state transition probabilities are derived in these degenerate cases.
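In the two-state limit, the survival probability on the initial diabatic curve reduces to the classic Landau-Zener formula, which the multistate expressions above generalize. A sketch in units with hbar = 1; the coupling and sweep-rate values are arbitrary illustrations.

```python
from math import exp, pi

def lz_survival(coupling, sweep_rate, hbar=1.0):
    """Two-state Landau-Zener survival probability on the initial diabatic
    curve: P = exp(-2*pi*|V12|^2 / (hbar * |d(E1 - E2)/dt|))."""
    return exp(-2.0 * pi * coupling**2 / (hbar * sweep_rate))

# fast sweep -> near-certain survival; slow sweep -> adiabatic transfer
for rate in (100.0, 1.0, 0.01):
    print(rate, lz_survival(0.5, rate))
```

The exponent is the perturbation-series resummation for two states; the multistate degenerate cases in the paper modify this exponent rather than the overall exponential structure.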
Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection
NASA Astrophysics Data System (ADS)
Denuit, Michel; Dhaene, Jan
2007-06-01
In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
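The comonotonic upper bound can be illustrated by simulation: driving every marginal with the same uniform gives a sum that dominates the independent sum in the stop-loss (increasing convex) sense while keeping the same marginals. The lognormal marginals and all parameter values below are illustrative stand-ins, not the Lee-Carter survival probabilities.

```python
import random
from math import exp
from statistics import NormalDist

def stop_loss(samples, d):
    """Empirical stop-loss premium E[(S - d)_+]."""
    return sum(max(s - d, 0.0) for s in samples) / len(samples)

nd = NormalDist()
rng = random.Random(1)
mus, sigmas = (0.0, 0.2), (0.5, 0.7)   # illustrative lognormal marginals
indep, comon = [], []
for _ in range(100_000):
    u1, u2 = rng.random(), rng.random()
    # independent sum vs comonotonic sum (all terms driven by the same uniform)
    indep.append(sum(exp(m + s * nd.inv_cdf(u))
                     for m, s, u in zip(mus, sigmas, (u1, u2))))
    comon.append(sum(exp(m + s * nd.inv_cdf(u1))
                     for m, s in zip(mus, sigmas)))

d = 3.0
print(stop_loss(indep, d), stop_loss(comon, d))
```

Both sums have the same mean, but the comonotonic sum carries more weight in the tail, which is exactly what makes it a conservative bound for value-at-risk and conditional tail expectation calculations.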
Modeling Longitudinal Data Containing Non-Normal Within Subject Errors
NASA Technical Reports Server (NTRS)
Feiveson, Alan; Glenn, Nancy L.
2013-01-01
The mission of the National Aeronautics and Space Administration's (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and result from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes, and possibly treatments, are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) of Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can use directly for human spaceflight and other longitudinal data applications, and presents statistical evidence of which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including the sciences, technology, engineering, and mathematics fields.
Lühr, Armin; Löck, Steffen; Jakobi, Annika; Stützer, Kristin; Bandurska-Luque, Anna; Vogelius, Ivan Richter; Enghardt, Wolfgang; Baumann, Michael; Krause, Mechthild
2017-07-01
Objectives of this work are (1) to derive a general clinically relevant approach to model tumor control probability (TCP) for spatially variable risk of failure and (2) to demonstrate its applicability by estimating TCP for patients planned for photon and proton irradiation. The approach divides the target volume into sub-volumes according to retrospectively observed spatial failure patterns. The product of all sub-volume TCPi values reproduces the observed TCP for the total tumor. The derived formalism provides for each target sub-volume i the tumor control dose (D50,i) and slope (γ50,i) parameters at 50% TCPi. For a simultaneous integrated boost (SIB) prescription for 45 advanced head and neck cancer patients, TCP values for photon and proton irradiation were calculated and compared. The target volume was divided into gross tumor volume (GTV), surrounding clinical target volume (CTV), and elective CTV (CTVE). The risk of a local failure in each of these sub-volumes was taken from the literature. Convenient expressions for D50,i and γ50,i were provided for the Poisson and the logistic model. Comparable TCP estimates were obtained for photon and proton plans of the 45 patients using the sub-volume model, despite notably higher dose levels (on average +4.9%) in the low-risk CTVE for photon irradiation. In contrast, assuming a homogeneous dose response in the entire target volume resulted in TCP estimates contradicting clinical experience (the highest failure rate in the low-risk CTVE) and differing substantially between photon and proton irradiation. The presented method is of practical value for three reasons: It (a) is based on empirical clinical outcome data; (b) can be applied to non-uniform dose prescriptions as well as different tumor entities and dose-response models; and (c) is provided in a convenient compact form. The approach may be utilized to target spatial patterns of local failures observed in patient cohorts by prescribing different doses to
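The sub-volume product can be sketched with the logistic dose-response model mentioned in the abstract. The doses and (D50, gamma50) parameters below are assumed purely for illustration, not the values derived by the authors.

```python
def tcp_logistic(d, d50, gamma50):
    """Logistic (logit) dose-response: TCP = 1 / (1 + (D50/D)^(4*gamma50)),
    which equals 0.5 at D = D50 with normalized slope gamma50 there."""
    return 1.0 / (1.0 + (d50 / d) ** (4.0 * gamma50))

# illustrative SIB prescription: per-sub-volume dose, D50, gamma50 (assumed)
subvolumes = {
    "GTV":  (70.0, 65.0, 2.0),
    "CTV":  (60.0, 50.0, 1.5),
    "CTVE": (54.0, 40.0, 1.0),
}

tcp_total = 1.0
for name, (d, d50, g) in subvolumes.items():
    tcp_i = tcp_logistic(d, d50, g)
    print(f"{name}: TCP_i = {tcp_i:.3f}")
    tcp_total *= tcp_i
print(f"total TCP = {tcp_total:.3f}")
```

The product structure is what lets the low-risk CTVE contribute a near-unity factor even at a lower dose, avoiding the contradiction that a single homogeneous dose-response curve produces.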
Multivariate Models for Normal and Binary Responses in Intervention Studies
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen
2016-01-01
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
van Lieverloo, J Hein M; Mesman, George A M; Bakker, Geo L; Baggelaar, Paul K; Hamed, Anas; Medema, Gertjan
2007-11-01
Drinking water supply companies monitor the presence of Escherichia coli in drinking water to verify the effectiveness of measures that prevent faecal contamination of drinking water. Data are lacking, however, on the sensitivity of the monitoring programmes, as designed under the EU Drinking Water Directive. In this study, the sensitivity of such a monitoring programme was evaluated by hydraulic model simulations of contamination events and calculations of the detection probability of the actual sampling programme of 2002. In the hydraulic model simulations of 16-h periods of ingress of untreated domestic sewage at 1 l h⁻¹, the spread of the contamination through the network and the E. coli concentration dynamics were calculated. The results show that when large parts of the sewage reach reservoirs, e.g. when they originate from the treatment plant or a trunk main, mean detection probabilities are 55-65%. When the contamination does not reach any of the reservoirs, however, the detection probability varies from 0% (when no sampling site is reached) to 13% (when multiple sites are reached). Mean detection probabilities of nine simulated ingress incidents in mains are 5.5% with an SD of 6.5%. In reality, these detection probabilities are probably lower as the study assumed no inactivation or clustering of E. coli, 100% recovery efficiency of the E. coli detection methods and immediate mixing of contaminations in mains and reservoirs. The described method provides a starting point for automated evaluations and optimisations of sampling programmes.
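Once the hydraulic simulation gives the mean E. coli count per routine sample, the programme-level detection probability has a simple closed form under the study's assumptions (Poisson-distributed counts, 100% recovery). The concentrations below are invented for illustration.

```python
from math import exp

def detect_prob(mean_counts_per_sample):
    """Probability that at least one routine 100 ml sample tests positive,
    assuming Poisson counts and 100% recovery:
    P(detect) = 1 - prod_j exp(-c_j)."""
    miss = 1.0
    for c in mean_counts_per_sample:
        miss *= exp(-c)      # chance this sample contains zero cells
    return 1.0 - miss

# three samples drawn while the contamination passes, at declining mean counts
print(detect_prob([2.0, 0.5, 0.05]))
```

Summing such probabilities over simulated ingress events reproduces the kind of mean detection probabilities the study reports.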
1994-03-01
The probability that i of the L hops in a coded transmission are jammed simplifies to a binomially weighted sum over the jammed-hop patterns. [The remaining equations are garbled in extraction; only the symbol definitions are recoverable: A = rms signal amplitude, σ² = variance of the narrowband process, and I₀(·) = modified Bessel function of the first kind, for q > 0.] The passage concludes with the conditional probability density function of Z, given that i₁, i₂, ..., hops have interference.
TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis
Krafft, S; Briere, T; Court, L; Martel, M
2015-06-15
Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP
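The LASSO step can be sketched with scikit-learn on synthetic data shaped like the cohort (about 200 patients, many candidate predictors, few truly informative). Everything below is a stand-in, not the study's data or exact pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic stand-in: 198 "patients", 60 candidate predictors,
# only a handful actually related to the outcome, outcome imbalanced
X, y = make_classification(n_samples=198, n_features=60, n_informative=5,
                           weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4,
                                          random_state=0, stratify=y)

# L1-penalized (LASSO) logistic regression, penalty strength chosen by CV
model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
kept = int(np.sum(model.coef_ != 0))
print(f"test AUC = {auc:.3f}, predictors retained = {kept}/{X.shape[1]}")
```

The L1 penalty zeroes out most coefficients, so the fitted model doubles as a feature selector, which is what lets image features compete directly against clinical and dosimetric ones.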
Benndorf, Klaus; Kusch, Jana; Schulz, Eckhard
2012-01-01
Hyperpolarization-activated cyclic nucleotide-modulated (HCN) channels are voltage-gated tetrameric cation channels that generate electrical rhythmicity in neurons and cardiomyocytes. Activation can be enhanced by the binding of adenosine-3′,5′-cyclic monophosphate (cAMP) to an intracellular cyclic nucleotide binding domain. Based on previously determined rate constants for a complex Markovian model describing the gating of homotetrameric HCN2 channels, we analyzed probability fluxes within this model, including unidirectional probability fluxes and the probability flux along transition paths. The time-dependent probability fluxes quantify the contributions of all 13 transitions of the model to channel activation. The binding of the first, third and fourth ligand evoked robust channel opening whereas the binding of the second ligand obstructed channel opening similar to the empty channel. Analysis of the net probability fluxes in terms of the transition path theory revealed pronounced hysteresis for channel activation and deactivation. These results provide quantitative insight into the complex interaction of the four structurally equal subunits, leading to non-equality in their function. PMID:23093920
A Collision Probability Model of Portal Vein Tumor Thrombus Formation in Hepatocellular Carcinoma
Xiong, Fei
2015-01-01
Hepatocellular carcinoma is one of the most common malignancies worldwide, with a high risk of portal vein tumor thrombus (PVTT). Some promising results have been achieved for venous metastases of hepatocellular carcinoma; however, the etiology of PVTT is largely unknown, and it is unclear why the incidence of PVTT is not proportional to its distance from the carcinoma. We attempted to address this issue using physical concepts and mathematical tools. Finally, we discuss the relationship between the probability of a collision event and the microenvironment of the PVTT. Our formulae suggest that the collision probability can alter the tumor microenvironment by increasing the number of tumor cells. PMID:26131562
John, Mathew; Gopinath, Deepa
2013-06-01
Gestational diabetes mellitus diagnosed by the classical oral glucose tolerance test can result in fetal complications such as macrosomia and polyhydramnios. Guidelines exist on the management of patients diagnosed by an abnormal oral glucose tolerance test, with diet modification followed by insulin. Even patients with an abnormal oral glucose tolerance test who maintain apparently normal blood sugars on diet are advised insulin if there is accelerated fetal growth. But patients with a normal oral glucose tolerance test can also present with macrosomia and polyhydramnios. These patients are labelled as not having gestational diabetes mellitus and are followed up with repeat oral glucose tolerance tests. We hypothesise that these patients may have an altered placental threshold to glucose or an abnormal sensitivity of fetal tissues to glucose. Meal-related glucose monitoring in these patients can identify minor abnormalities in glucose handling, which should be treated to targets similar to physiological glucose levels in non-pregnant adults.
ERIC Educational Resources Information Center
Edwards, William F.; Shiflett, Ray C.; Shultz, Harris
2008-01-01
The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…
A new model for bed load sampler calibration to replace the probability-matching method
Robert B. Thomas; Jack Lewis
1993-01-01
In 1977 extensive data were collected to calibrate six Helley-Smith bed load samplers with four sediment particle sizes in a flume at the St. Anthony Falls Hydraulic Laboratory at the University of Minnesota. Because sampler data cannot be collected at the same time and place as "true" trap measurements, the "probability-matching...
ERIC Educational Resources Information Center
Rasanen, Okko
2011-01-01
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this…
Random forest models for the probable biological condition of streams and rivers in the USA
The National Rivers and Streams Assessment (NRSA) is a probability based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...
Spatial prediction models for the probable biological condition of streams and rivers in the USA
The National Rivers and Streams Assessment (NRSA) is a probability-based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...
Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul Sf; Zhu, Tingshao
2015-01-01
Traditional offline assessment of suicide probability is time-consuming, and it is difficult to convince at-risk individuals to participate. Identifying individuals with high suicide probability through online social media is more efficient and can potentially reach hidden individuals, yet little research has focused on this specific field. The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying microblog users in China with high suicide probability, using profile and linguistic features extracted from Internet-based data. Nine hundred and nine Chinese microblog users completed an Internet survey; those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as one SD above the mean on each of the four subscale scores, were labeled as high-risk individuals, respectively. Profile and linguistic features were fed into the two machine learning algorithms (SLR and RF) to train models that identify high-risk individuals in overall suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross-validation, in which both training and test sets were generated by stratified random sampling from the whole sample. Three classic performance metrics (Precision, Recall, F1 measure) and a specifically defined metric, "Screening Efficiency", were adopted to evaluate model effectiveness. Classification performance was generally comparable between SLR and RF. With the best-performing classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening Efficiency of most models varied from 1/4 to 1/2, and precision was generally below 30%. Individuals in China with high suicide
Ding, Tian; Wang, Jun; Park, Myoung-Su; Hwang, Cheng-An; Oh, Deog-Hwan
2013-02-01
Bacillus cereus is frequently isolated from a variety of foods, including vegetables, dairy products, meats, and other raw and processed foods. The bacterium is capable of producing an enterotoxin and emetic toxin that can cause severe nausea, vomiting, and diarrhea. The objectives of this study were to assess and model the probability of enterotoxin production of B. cereus in a broth model as affected by the broth pH and storage temperature. A three-strain mixture of B. cereus was inoculated in tryptic soy broth adjusted to pH 5.0, 6.0, 7.2, 8.0, and 8.5, and the samples were stored at 15, 20, 25, 30, and 35°C for 24 h. A total of 25 combinations of pH and temperature, each with 10 samples, were tested. The presence of enterotoxin in broth was assayed using a commercial test kit. The probabilities of positive enterotoxin production in 25 treatments were fitted with a logistic regression to develop a probability model to describe the probability of toxin production as a function of pH and temperature. The resulting model showed that the probabilities of enterotoxin production of B. cereus in broth increased as the temperature increased and/or as the broth pH approached 7.0. The model described the experimental data satisfactorily and identified the boundary of pH and temperature for the production of enterotoxin. The model could provide information for assessing the food poisoning risk associated with enterotoxins of B. cereus and for the selection of product pH and storage temperature for foods to reduce the hazards associated with B. cereus.
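A logistic probability model of this shape, toxin-production probability rising with temperature and peaking as pH approaches neutrality, can be written directly. The coefficients below are hypothetical placeholders for illustration, not the study's fitted values.

```python
import math

def toxin_probability(temp_c, ph, b0=-7.0, b_temp=0.25, b_ph=-0.8):
    """Illustrative logistic model: the log-odds of enterotoxin production
    rise linearly with storage temperature (degrees C) and fall with the
    squared distance of the broth pH from neutrality (pH 7).
    All coefficients are invented, not the paper's fitted values."""
    logit = b0 + b_temp * temp_c + b_ph * (ph - 7.0) ** 2
    return 1.0 / (1.0 + math.exp(-logit))
```

With such a surface, the boundary of pH and temperature for toxin production corresponds to a chosen probability contour (e.g., the 0.5 level set of the logit).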
USDA-ARS?s Scientific Manuscript database
Staphylococcus aureus is a foodborne pathogen widespread in the environment and found in various food products. This pathogen can produce enterotoxins that cause illnesses in humans. The objectives of this study were to develop a probability model of S. aureus enterotoxin production as affected by w...
We show that a conditional probability analysis that utilizes a stressor-response model based on a logistic regression provides a useful approach for developing candidate water quality criteria from empirical data. The critical step in this approach is transforming the response ...
Gene name identification and normalization using a model organism database.
Morgan, Alexander A; Hirschman, Lynette; Colosimo, Marc; Yeh, Alexander S; Colombe, Jeff B
2004-12-01
Biology has now become an information science, and researchers are increasingly dependent on expert-curated biological databases to organize the findings from the published literature. We report here on a series of experiments related to the application of natural language processing to aid in the curation process for FlyBase. We focused on listing the normalized form of genes and gene products discussed in an article. We broke this into two steps: gene mention tagging in text, followed by normalization of gene names. For gene mention tagging, we adopted a statistical approach. To provide training data, we were able to reverse engineer the gene lists from the associated articles and abstracts, to generate text labeled (imperfectly) with gene mentions. We then evaluated the quality of the noisy training data (precision of 78%, recall 88%) and the quality of the HMM tagger output trained on this noisy data (precision 78%, recall 71%). In order to generate normalized gene lists, we explored two approaches. First, we explored simple pattern matching based on synonym lists to obtain a high recall/low precision system (recall 95%, precision 2%). Using a series of filters, we were able to improve precision to 50% with a recall of 72% (balanced F-measure of 0.59). Our second approach combined the HMM gene mention tagger with various filters to remove ambiguous mentions; this approach achieved an F-measure of 0.72 (precision 88%, recall 61%). These experiments indicate that the lexical resources provided by FlyBase are complete enough to achieve high recall on the gene list task, and that normalization requires accurate disambiguation; different strategies for tagging and normalization trade off recall for precision.
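The balanced F-measure quoted in this abstract is the harmonic mean of precision and recall, and the study's own numbers check out under that definition:

```python
def f_measure(precision, recall):
    """Balanced F-measure: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The two normalization strategies reported above:
print(round(f_measure(0.50, 0.72), 2))  # pattern matching + filters -> 0.59
print(round(f_measure(0.88, 0.61), 2))  # HMM tagger + filters -> 0.72
```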
ERIC Educational Resources Information Center
Haberman, Shelby J.
2006-01-01
A simple score test of the normal two-parameter logistic (2PL) model is presented that examines the potential attraction of the normal three-parameter logistic (3PL) model for use with a particular item. Application is made to data from a test from the Praxis™ series. Results from this example raise the question whether the normal 3PL model should…
Bivariate Normal Wind Statistics model: User’s Manual.
1980-09-01
[Extraction residue: fragments of the manual's FORTRAN listing. The recoverable content is that the program reads the five basic parameters of the bivariate normal wind model (mean and standard deviation of the X and Y components, plus the correlation coefficient), can rotate the X-Y axes through a given angle, and that subroutine RSPGDR gives the conditional probability of a specified range of wind speeds when the wind direction is specified.]
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap-permittivity conditions. The exponential normalization is proposed on the basis of the inherently nonlinear relationship between the mixture permittivity and the measured capacitance caused by the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fitted to simulation results, and a scaling function is added to adjust for the conditions of the experimental system. The exponential normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in both simulation and experimental studies, and compared with other normalization models, i.e., the Parallel, Series, Maxwell, and Böttcher models. Based on the comparison of image-reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
Regional Permafrost Probability Modelling in the northwestern Cordillera, 59°N - 61°N, Canada
NASA Astrophysics Data System (ADS)
Bonnaventure, P. P.; Lewkowicz, A. G.
2010-12-01
High resolution (30 x 30 m) permafrost probability models were created for eight mountainous areas in the Yukon and northernmost British Columbia. Empirical-statistical modelling based on the Basal Temperature of Snow (BTS) method was used to develop spatial relationships. Model inputs include equivalent elevation (a variable that incorporates non-uniform temperature change with elevation), potential incoming solar radiation and slope. Probability relationships between predicted BTS and permafrost presence were developed for each area using late-summer physical observations in pits, or by using year-round ground temperature measurements. A high-resolution spatial model for the region has now been generated based on seven of the area models. Each was applied to the entire region, and their predictions were then blended based on a distance decay function from the model source area. The regional model is challenging to validate independently because there are few boreholes in the region. However, a comparison of results to a recently established inventory of rock glaciers for the Yukon suggests its validity because predicted permafrost probabilities were 0.8 or greater for almost 90% of these landforms. Furthermore, the regional model results have a similar spatial pattern to those modelled independently in the eighth area, although predicted probabilities using the regional model are generally higher. The regional model predicts that permafrost underlies about half of the non-glaciated terrain in the region, with probabilities increasing regionally from south to north and from east to west. Elevation is significant, but not always linked in a straightforward fashion because of weak or inverted trends in permafrost probability below treeline. Above treeline, however, permafrost probabilities increase and approach 1.0 in very high elevation areas throughout the study region. The regional model shows many similarities to previous Canadian permafrost maps (Heginbottom
Three Dimensional Deformation of Mining Area Detection by InSAR and Probability Integral Model
NASA Astrophysics Data System (ADS)
Fan, H. D.; Gao, X. X.; Cheng, D.; Zhao, W. Y.; Zhao, C. L.
2015-06-01
A new solution algorithm combining D-InSAR and the probability integral method is proposed to generate the three-dimensional deformation of a mining area. The details are as follows: according to the geological and mining data, a set of control points is established, containing correctly phase-unwrapped points at the subsidence-basin edge generated by D-InSAR together with several GPS points; the modulus method is then used to calculate the optimum parameters of the probability integral prediction; finally, the three-dimensional deformation of the mining working face is generated from these parameters. Using this method, land subsidence with large deformation gradients in a mining area was correctly generated from example TerraSAR-X images. The results of the example show that this method can generate the correct mining subsidence basin from only a few surface observations, and that it performs much better than D-InSAR alone.
NASA Astrophysics Data System (ADS)
Rowicka, Małgorzata; Otwinowski, Zbyszek
2004-04-01
Using the Maximum Entropy principle, we find the probability distribution of torsion angles in proteins. We estimate the parameters of this distribution numerically by implementing the Polak-Ribière variant of the conjugate gradient method. We investigate practical approximations of the theoretical distribution, discuss their information content, and compare them with the standard histogram method. Our data are pairs of main-chain torsion angles for a selected subset of high-resolution, non-homologous protein structures from the Protein Data Bank.
A probability model for evaluating building contamination from an environmental event.
Spicer, R C; Gangloff, H J
2000-09-01
Asbestos dust and bioaerosol sampling data from suspected contaminated zones in buildings allowed development of an environmental data evaluation protocol based on the differences in frequency of detection of a target contaminant between zones of comparison. Under the assumption that the two test zones of comparison are similar, application of population proportion probability calculates the significance of observed differences in contaminant levels. This was used to determine whether levels of asbestos dust contamination detected after a fire were likely the result of smoke-borne contamination, or were caused by pre-existing/background conditions. Bioaerosol sampling from several sites was also used to develop the population proportion probability protocol. In this case, significant differences in indoor air contamination relative to the ambient conditions were identified that were consistent with the visual observations of contamination. Implicit in this type of probability analysis is a definition of "contamination" based on significant differences in contaminant levels relative to a control zone. Detection of a suspect contaminant can be assessed as to possible source(s) as well as the contribution made by pre-existing (i.e., background) conditions, provided the test and control zones are subjected to the same sampling and analytical methods.
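One concrete way to implement the population-proportion comparison described here is a pooled two-proportion z test on detection frequencies in the suspect and control zones. This is a sketch of that standard test, not the authors' protocol; the sample counts in the test are made up for illustration.

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Pooled two-proportion z statistic comparing the frequency of
    contaminant detection between a suspect zone (a) and a control
    zone (b)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n_a + 1.0 / n_b))
    return (p_a - p_b) / se

def upper_tail(z):
    """One-sided p-value: P(Z >= z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

A large positive z (small upper-tail probability) indicates that the suspect zone's detection frequency exceeds the control zone's by more than sampling variation would explain.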
NASA Astrophysics Data System (ADS)
Rosa, A. N. F.; Wiatr, P.; Cavdar, C.; Carvalho, S. V.; Costa, J. C. W. A.; Wosinska, L.
2015-11-01
In Elastic Optical Networks (EONs), spectrum fragmentation refers to the existence of non-aligned, small-sized blocks of free subcarrier slots in the optical spectrum. Several metrics have been proposed to quantify the level of spectrum fragmentation. Approximation methods can be used to estimate the average blocking probability and some fragmentation measures, but they are so far unable to accurately evaluate the influence of different connection-request sizes and do not allow in-depth investigation of blocking events and their relation to fragmentation. The analytical study of the effect of fragmentation on request blocking probability remains under-explored. In this work, we introduce new definitions of blocking that differentiate between the reasons for the blocking events. We develop a framework based on Markov modeling to calculate steady-state probabilities for the different blocking events and to analyze fragmentation-related problems in elastic optical links under dynamic traffic conditions. The framework can also be used to evaluate different definitions of fragmentation in terms of their relation to the blocking probability. We investigate how different allocation-request sizes contribute to fragmentation and blocking probability. Moreover, we show to what extent blocking events due to an insufficient amount of available resources become inevitable and, by comparing them with the blocking events due to fragmented spectrum, draw conclusions on the possible gains achievable by system defragmentation. We also show how effective spectrum allocation policies really are in reducing the part of fragmentation that in particular leads to actual blocking events. Simulation experiments show a good match with our analytical results for blocking probability in a small-scale scenario, and simulated blocking probabilities for the different blocking events are provided for a larger-scale elastic optical link.
2012-01-01
Background Osteoporotic hip fractures represent a major cause of disability, loss of quality of life, and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture derived from BMD measurements. Combining biomechanical models with clinical studies could better estimate bone strength and support specialists in their decisions. Methods A model to assess the probability of fracture, based on damage and fracture mechanics, has been developed that evaluates the mechanical magnitudes involved in the fracture process from clinical BMD measurements. The model is intended to simulate the degenerative process in the skeleton, with the consequent loss of bone mass and hence the decrease in mechanical resistance that makes fracture possible under different traumas. Clinical studies were chosen, both without treatment and under drug therapy, and fitted to specific patients according to their actual BMD measurements. The predictive model is applied in an FE simulation of the proximal femur. The fracture zone is determined according to the loading scenario (sideways fall, impact, accidental loads, etc.), using the mechanical properties of bone obtained from the evolutionary model at the considered time. Results BMD evolution in untreated patients and in those under different treatments was analyzed. Evolutionary curves of fracture probability were obtained from the evolution of mechanical damage. The curve for the untreated group of patients showed a marked increase in fracture probability, while the curves for patients under drug treatment showed variably decreased risks, depending on the therapy type. Conclusion The FE model allowed us to obtain detailed maps of damage and fracture probability, identifying high-risk local zones at the femoral neck and in the intertrochanteric and subtrochanteric areas, which are the typical locations of osteoporotic hip fractures. The
Normalization and Implementation of Three Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.
2016-01-01
Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
Haber, M.; An, Q.; Foppa, I. M.; Shay, D. K.; Ferdinands, J. M.; Orenstein, W. A.
2014-01-01
Summary As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARI) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly-used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care against ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs. PMID:25147970
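The test-negative VE estimate the authors analyze reduces to one minus an odds ratio taken from the 2x2 table of vaccination status by influenza test result among ARI patients seeking care. A minimal sketch follows; the counts in the example are hypothetical.

```python
def ve_test_negative(vacc_pos, unvacc_pos, vacc_neg, unvacc_neg):
    """Vaccine effectiveness under the test-negative design:
    VE = 1 - odds ratio of vaccination among influenza-positive versus
    influenza-negative ARI patients."""
    odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
    return 1.0 - odds_ratio

# Hypothetical counts: 25/75 vaccinated among flu-positives,
# 50/50 among flu-negatives -> VE = 1 - (1/3) = 2/3.
ve = ve_test_negative(25, 75, 50, 50)
```

The paper's point is about when this estimator is unbiased: it requires the ratio of care-seeking probabilities between vaccinees and non-vaccinees to be the same for influenza and non-influenza ARI.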
NASA Astrophysics Data System (ADS)
Tan, Elcin
A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the
Lexicographic Probability, Conditional Probability, and Nonstandard Probability
2009-11-11
the following conditions: CP1. µ(U | U) = 1 if U ∈ F′. CP2. µ(V1 ∪ V2 | U) = µ(V1 | U) + µ(V2 | U) if V1 ∩ V2 = ∅, U ∈ F′, and V1, V2 ∈ F. CP3. µ(V | U) = µ(V | X) × µ(X | U) if V ⊆ X ⊆ U, U, X ∈ F′, V ∈ F. Note that it follows from CP1 and CP2 that µ(· | U) is a probability measure on (W, F) (and, in... CP2 hold. This is easily seen to determine µ. Moreover, µ vacuously satisfies CP3, since there do not exist distinct sets U and X in F′ such that U
NASA Technical Reports Server (NTRS)
1979-01-01
The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.
On application of optimal control to SEIR normalized models: Pros and cons.
de Pinho, Maria do Rosario; Nogueira, Filipa Nunes
2017-02-01
In this work we normalize a SEIR model that incorporates exponential natural birth and death as well as disease-caused death. We use optimal control, via vaccination, to control the spread of a generic infectious disease described by the normalized model with an L1 cost. We discuss the pros and cons of normalized SEIR models, compared with classical models, when optimal control with L1 costs is considered. Our discussion highlights the role of the cost. Additionally, we partially validate our numerical solutions of the optimal control problem with normalized models using the Maximum Principle.
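The normalized SEIR dynamics described above can be sketched as a simple time-stepper. This version keeps natural birth/death and adds a constant vaccination rate u, but omits disease-caused death and the optimal-control machinery; all parameter values are illustrative, not taken from the paper.

```python
def seir_step(s, e, i, r, dt, beta=0.9, sigma=0.2, gamma=0.1, mu=0.01, u=0.0):
    """One explicit Euler step of a normalized SEIR model with natural
    birth/death rate mu and constant vaccination rate u. Because the model
    is normalized, the four fractions sum to one for all time (the birth
    term mu exactly balances the per-capita death terms)."""
    ds = mu - beta * s * i - mu * s - u * s
    de = beta * s * i - (sigma + mu) * e
    di = sigma * e - (gamma + mu) * i
    dr = gamma * i + u * s - mu * r
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
```

In the optimal-control setting, u becomes a time-varying control and the L1 cost penalizes the total amount of vaccination used.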
Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E
2014-03-01
To examine the accuracy of the original Mortality Probability Admission Model III, ICU Outcomes Model/National Quality Forum modification of Mortality Probability Admission Model III, and Acute Physiology and Chronic Health Evaluation IVa models for comparing observed and risk-adjusted hospital mortality predictions. Retrospective paired analyses of day 1 hospital mortality predictions using three prognostic models. Fifty-five ICUs at 38 U.S. hospitals from January 2008 to December 2012. Among 174,001 intensive care admissions, 109,926 met model inclusion criteria and 55,304 had data for mortality prediction using all three models. None. We compared patient exclusions and the discrimination, calibration, and accuracy for each model. Acute Physiology and Chronic Health Evaluation IVa excluded 10.7% of all patients, ICU Outcomes Model/National Quality Forum 20.1%, and Mortality Probability Admission Model III 24.1%. Discrimination of Acute Physiology and Chronic Health Evaluation IVa was superior with area under receiver operating curve (0.88) compared with Mortality Probability Admission Model III (0.81) and ICU Outcomes Model/National Quality Forum (0.80). Acute Physiology and Chronic Health Evaluation IVa was better calibrated (lowest Hosmer-Lemeshow statistic). The accuracy of Acute Physiology and Chronic Health Evaluation IVa was superior (adjusted Brier score = 31.0%) to that for Mortality Probability Admission Model III (16.1%) and ICU Outcomes Model/National Quality Forum (17.8%). Compared with observed mortality, Acute Physiology and Chronic Health Evaluation IVa overpredicted mortality by 1.5% and Mortality Probability Admission Model III by 3.1%; ICU Outcomes Model/National Quality Forum underpredicted mortality by 1.2%. Calibration curves showed that Acute Physiology and Chronic Health Evaluation performed well over the entire risk range, unlike the Mortality Probability Admission Model and ICU Outcomes Model/National Quality Forum models. Acute
Wall, Clifton; Boersma, Bendiks Jan; Moin, Parviz
2000-10-01
The assumed beta distribution model for the subgrid-scale probability density function (PDF) of the mixture fraction in large eddy simulation of nonpremixed, turbulent combustion is tested, a priori, for a reacting jet having significant heat release (density ratio of 5). The assumed beta distribution is tested as a model for both the subgrid-scale PDF and the subgrid-scale Favre PDF of the mixture fraction. The beta model is successful in approximating both types of PDF but is slightly more accurate in approximating the normal (non-Favre) PDF. To estimate the subgrid-scale variance of mixture fraction, which is required by the beta model, both a scale similarity model and a dynamic model are used. Predictions using the dynamic model are found to be more accurate. The beta model is used to predict the filtered value of a function chosen to resemble the reaction rate. When no model is used, errors in the predicted value are of the same order as the actual value. The beta model is found to reduce this error by about a factor of two, providing a significant improvement. (c) 2000 American Institute of Physics.
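The assumed-beta closure above can be sketched as follows: the beta shape parameters come from moment-matching the filtered mixture fraction and its subgrid variance, and the filtered value of any function of mixture fraction (such as a reaction-rate surrogate) is its integral against that beta PDF. The numbers and the surrogate function are illustrative, not the paper's.

```python
import math

def beta_params(zbar, var):
    """Moment-matched beta shape parameters; requires 0 < var < zbar*(1-zbar)."""
    gamma = zbar * (1.0 - zbar) / var - 1.0
    return zbar * gamma, (1.0 - zbar) * gamma

def beta_pdf(z, a, b):
    logB = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(z) + (b - 1) * math.log(1 - z) - logB)

def filtered(func, zbar, var, n=2000):
    """Filtered value of func(z) by midpoint quadrature against the beta PDF."""
    a, b = beta_params(zbar, var)
    dz = 1.0 / n
    zs = [(i + 0.5) * dz for i in range(n)]
    return sum(func(z) * beta_pdf(z, a, b) for z in zs) * dz

# Surrogate "reaction rate" peaked at a hypothetical stoichiometric value 0.3:
rate = lambda z: math.exp(-((z - 0.3) / 0.05) ** 2)
print(filtered(rate, zbar=0.35, var=0.01))
</```

When no subgrid model is used, the filtered rate would be approximated by `rate(zbar)`; the abstract reports that the beta closure roughly halves the resulting error.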
A Probabilistic Model of Student Nurses' Knowledge of Normal Nutrition.
ERIC Educational Resources Information Center
Passmore, David Lynn
1983-01-01
Vocational and technical education researchers need to be aware of the uses and limits of various statistical models. The author reviews the Rasch Model and applies it to results from a nutrition test given to student nurses. (Author)
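The Rasch model the author applies gives the probability of a correct response as a logistic function of the gap between examinee ability and item difficulty. A minimal sketch with illustrative parameter values:

```python
import math

def rasch_p(theta, b):
    """Rasch probability that an examinee of ability theta answers an item
    of difficulty b correctly (both on the same logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An examinee whose ability equals the item difficulty answers correctly with
# probability 0.5; easier items (lower b) raise that probability.
for b in (-1.0, 0.0, 1.0):
    print(round(rasch_p(0.0, b), 3))
```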
NASA Astrophysics Data System (ADS)
Anuwar, Muhammad Hafidz; Jaffar, Maheran Mohd
2017-08-01
This paper provides an overview of the assessment of credit risk specific to banks. In finance, risk is a term that reflects the potential for financial loss. The risk of default on a loan increases when a company fails to make payments as they come due. Hence, this framework analyses the KMV-Merton model to estimate the probabilities of default for Malaysian listed companies. In this way, banks can verify the ability of companies to meet their loan commitments and so avoid bad investments and financial losses. The model was applied to all Malaysian companies listed on Bursa Malaysia to estimate their credit default probabilities, and the results were compared with the ratings given by the rating agency RAM Holdings Berhad to check their conformity with reality. The significance of this study is that a credit risk grade, derived from the KMV-Merton model, is proposed for Malaysian listed companies.
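A simplified sketch of the Merton step underlying the KMV approach (all inputs are hypothetical, not Bursa Malaysia data): the default probability is the normal probability that the firm's asset value falls below the face value of its debt at the horizon, summarized by the distance to default.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_pd(V, D, r, sigma, T=1.0):
    """Distance to default and default probability under Merton's model.

    V: asset value, D: debt face value, r: risk-free rate,
    sigma: asset volatility, T: horizon in years."""
    dd = (math.log(V / D) + (r - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return dd, norm_cdf(-dd)

# Hypothetical firm: assets 120, debt 100, risk-free rate 4%, asset vol 25%.
dd, pd = merton_pd(V=120.0, D=100.0, r=0.04, sigma=0.25, T=1.0)
print(dd, pd)
```

The practical KMV difficulty, not shown here, is that asset value and volatility are unobserved and must be backed out from equity prices by treating equity as a call option on the assets.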
Elmouttie, David; Flinn, Paul; Kiermeier, Andreas; Subramanyam, Bhadriraju; Hagstrum, David; Hamilton, Grant
2013-09-01
Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model,(1) the Poisson model,(1) the double logarithmic model(2) and the compound model(3) - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage. © 2013 Society of Chemical Industry.
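Two of the four detection models named above, the Poisson and the negative binomial, can be contrasted in a few lines (parameter values are illustrative): for a mean insect density m per sample, the detection probability over n samples is one minus the probability that every sample contains zero insects.

```python
import math

def p_detect_poisson(m, n):
    """Detection probability when insects are randomly (Poisson) distributed."""
    return 1.0 - math.exp(-m) ** n

def p_detect_negbin(m, k, n):
    """Negative binomial: k is the clumping parameter; small k means a highly
    aggregated pest, which raises the zero-sample probability."""
    p0 = (1.0 + m / k) ** (-k)
    return 1.0 - p0 ** n

# With strong clumping (k = 0.5), the same mean density is harder to detect,
# so more samples are needed than the Poisson model would suggest.
m, n = 0.1, 20
print(p_detect_poisson(m, n), p_detect_negbin(m, 0.5, n))
```

As k grows large the negative binomial detection probability approaches the Poisson value, which is why a flexible clumping parameter matters for robust sampling design.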
Adamovich, Igor V.
2014-04-15
A three-dimensional, nonperturbative, semiclassical analytic model of vibrational energy transfer in collisions between a rotating diatomic molecule and an atom, and between two rotating diatomic molecules (Forced Harmonic Oscillator–Free Rotation model) has been extended to incorporate rotational relaxation and coupling between vibrational, translational, and rotational energy transfer. The model is based on analysis of semiclassical trajectories of rotating molecules interacting by a repulsive exponential atom-to-atom potential. The model predictions are compared with the results of three-dimensional close-coupled semiclassical trajectory calculations using the same potential energy surface. The comparison demonstrates good agreement between analytic and numerical probabilities of rotational and vibrational energy transfer processes, over a wide range of total collision energies, rotational energies, and impact parameter. The model predicts probabilities of single-quantum and multi-quantum vibrational-rotational transitions and is applicable up to very high collision energies and quantum numbers. Closed-form analytic expressions for these transition probabilities lend themselves to straightforward incorporation into DSMC nonequilibrium flow codes.
A two-stage approach in solving the state probabilities of the multi-queue M/G/1 model
NASA Astrophysics Data System (ADS)
Chen, Mu-Song; Yen, Hao-Wei
2016-04-01
The M/G/1 model is the fundamental basis of the queueing system in many network systems. Usually, study of the M/G/1 model is limited by the assumptions of a single queue and infinite capacity. In practice, however, these postulations may not be valid, particularly when dealing with many real-world problems. In this paper, a two-stage state-space approach is developed to solve the state probabilities for the multi-queue, finite-capacity M/G/1 model, i.e. q-M/G/1/Ki with Ki buffers in the ith queue. The state probabilities at departure instants are determined by solving a set of state transition equations. Afterward, an embedded Markov chain analysis is applied to derive the state probabilities at arbitrary time instants from another set of state balance equations. Closed forms of the state probabilities are also presented, with theorems for reference. Little's theorem is then applied to obtain the corresponding queue lengths and average waiting times. Simulation experiments demonstrate the correctness of the proposed approach.
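A minimal sketch of the embedded-Markov-chain step described above, reduced to a single finite-capacity queue with deterministic service (an M/D/1/K special case); the multi-queue q-M/G/1/Ki construction and the correction from departure-instant to arbitrary-time probabilities are beyond this sketch, and all parameters are illustrative.

```python
import math

def departure_probs(lam, service, K, iters=5000):
    """State probabilities seen at departure instants for an M/D/1/K queue.

    States 0..K-1 count the customers a departing customer leaves behind;
    arrivals that find the system full are lost (excess mass lumped at K-1)."""
    mu = lam * service
    # a[j]: probability of j Poisson arrivals during one deterministic service.
    a = [math.exp(-mu) * mu ** j / math.factorial(j) for j in range(K)]
    tail = 1.0 - sum(a)  # K or more arrivals during one service
    P = [[0.0] * K for _ in range(K)]
    for i in range(K):
        base = i - 1 if i >= 1 else 0  # empty system first waits for an arrival
        for j in range(K):
            P[i][min(base + j, K - 1)] += a[j]
        P[i][K - 1] += tail
    # Solve pi = pi P (the embedded-chain balance equations) by power iteration.
    pi = [1.0 / K] * K
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(K)) for j in range(K)]
    return pi

print(departure_probs(lam=0.8, service=1.0, K=5))
```

The paper's second stage would convert these departure-epoch probabilities into arbitrary-time probabilities; here only the first stage is illustrated.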
NASA Astrophysics Data System (ADS)
Smith, L.
2004-12-01
The aim of constructing a forecast from the best model(s) simulations should be distinguished from the aim of improving the model(s) whenever possible. The common confusion of these distinct aims in earth system science sometimes results both in the misinterpretation of results and in a less than ideal experimental design. The motivation, resource distribution, and scientific goals of these two aims almost always differ in the earth sciences. The goal of this talk is to illustrate these differences in the contexts of operational weather forecasting and climate modelling. We adopt the mathematical framework of indistinguishable states (Judd and Smith, Physica D, 2001 & 2004), which allows us to clarify fundamental limitations on any attempt to extract accountable (physically relevant) probability forecasts from imperfect models of any physical system, even relatively simple ones. Operational weather forecasts from ECMWF and NCEP are considered in the light of THORPEX societal goals. Monte Carlo experiments in general, and ensemble systems in particular, generate distributions of simulations, but the interpretation of the output depends on the design of the ensemble, and this in turn is rather different if the aim is to better understand the model rather than to better predict electricity demand. Also, we show that there are alternatives to interpreting the ensemble as a probability forecast, alternatives that are sometimes more relevant to industrial applications. Extracting seasonal forecasts from multi-model, multi-initial condition ensembles of simulations is also discussed. Finally, two different approaches to interpreting ensembles of climate model simulations are discussed. Our main conclusions reflect the need to distinguish the ways and means of using geophysical ensembles for model improvement from their applications to socio-economic risk management and policy, and to verify the physical relevance of requested deliverables like probability forecasts.
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the finding that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
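The final calculation can be sketched with a simple Poisson encounter model (the numbers are made up, not the San Joaquin study's data): if seal-offsetting faults occur with areal density rho and the migrating CO2 plume sweeps an area A, the chance of meeting at least one such fault is one minus the Poisson zero probability.

```python
import math

def encounter_probability(rho, area):
    """P(at least one fault within the swept area), assuming faults occur as a
    spatial Poisson process with areal density rho (faults per km^2)."""
    return 1.0 - math.exp(-rho * area)

# Hypothetical inputs: 0.001 seal-offsetting faults per km^2 and a plume
# footprint of 30 km^2.
print(encounter_probability(rho=0.001, area=30.0))
```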
DotKnot: pseudoknot prediction using the probability dot plot under a refined energy model.
Sperschneider, Jana; Datta, Amitava
2010-04-01
RNA pseudoknots are functional structure elements with key roles in viral and cellular processes. Prediction of a pseudoknotted minimum free energy structure is an NP-complete problem. Practical algorithms for RNA structure prediction including restricted classes of pseudoknots suffer from high runtime and poor accuracy for longer sequences. A heuristic approach is to search for promising pseudoknot candidates in a sequence and verify those. Afterwards, the detected pseudoknots can be further analysed using bioinformatics or laboratory techniques. We present a novel pseudoknot detection method called DotKnot that extracts stem regions from the secondary structure probability dot plot and assembles pseudoknot candidates in a constructive fashion. We evaluate pseudoknot free energies using novel parameters, which have recently become available. We show that the conventional probability dot plot makes a wide class of pseudoknots including those with bulged stems manageable in an explicit fashion. The energy parameters now become the limiting factor in pseudoknot prediction. DotKnot is an efficient method for long sequences, which finds pseudoknots with higher accuracy compared to other known prediction algorithms. DotKnot is accessible as a web server at http://dotknot.csse.uwa.edu.au.
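The first DotKnot step, extracting stem candidates from the base-pair probability dot plot, can be illustrated with a toy scan (the matrix below is hand-made, not RNAfold output): a stem is a run of stacked base pairs (i, j), (i+1, j-1), ... whose probabilities all exceed a threshold.

```python
def extract_stems(bp_prob, threshold=0.5, min_len=2):
    """Toy stem extraction from an n x n base-pair probability matrix.

    Returns maximal runs of stacked high-probability pairs, each run a list of
    (i, j) index pairs with i < j."""
    n = len(bp_prob)
    seen = set()
    stems = []
    for i in range(n):
        for j in range(n - 1, i, -1):
            if (i, j) in seen or bp_prob[i][j] < threshold:
                continue
            run, a, b = [], i, j
            while a < b and bp_prob[a][b] >= threshold:
                run.append((a, b))
                seen.add((a, b))
                a, b = a + 1, b - 1
            if len(run) >= min_len:
                stems.append(run)
    return stems

# Tiny hypothetical dot plot: pairs (0,7), (1,6), (2,5) are confidently paired.
dot = [[0.0] * 8 for _ in range(8)]
dot[0][7] = dot[1][6] = dot[2][5] = 0.9
print(extract_stems(dot))
```

In DotKnot proper, the extracted stems are then assembled into pseudoknot candidates and scored with the refined energy parameters; this sketch covers only the extraction idea.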
Mesh-Based Entry Vehicle and Explosive Debris Re-Contact Probability Modeling
NASA Technical Reports Server (NTRS)
McPherson, Mark A.; Mendeck, Gavin F.
2011-01-01
The risk to a crewed vehicle arising from potential re-contact with fragments from an explosive breakup of any jettisoned spacecraft segment during entry has long been difficult to quantify, because of the difficulty of efficiently capturing the potential locations of each fragment and their collective threat to the vehicle. The method presented in this paper addresses this problem by using a stochastic approach that discretizes simulated debris pieces into volumetric cells and then assesses strike probabilities accordingly. Combining spatial debris density and the relative velocity between the debris and the entry vehicle, the strike probability can be calculated from the integral of the debris flux inside each cell over time. Using this technique it is possible to assess the risk to an entry vehicle along an entire trajectory as it separates from the jettisoned segment. By decoupling the fragment trajectories from that of the entry vehicle, multiple potential separation maneuvers can be evaluated rapidly to provide an assessment of the best strategy to mitigate the re-contact risk.
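The flux-integral step described above can be sketched numerically (all quantities are hypothetical): accumulate debris density times relative speed times presented vehicle area over the cells the vehicle passes through, then convert the expected number of strikes into a probability.

```python
import math

def strike_probability(densities, rel_speeds, area, dt):
    """Strike probability from a time-discretized debris flux integral.

    densities: debris pieces per m^3 in each cell along the path;
    rel_speeds: debris-vehicle relative speed (m/s) in each cell;
    area: presented vehicle area (m^2); dt: time per cell (s)."""
    expected_strikes = sum(n * v * area * dt
                           for n, v in zip(densities, rel_speeds))
    return 1.0 - math.exp(-expected_strikes)

# Hypothetical pass through five cells, 1 s each, 10 m^2 presented area.
n_cells = [1e-9, 5e-9, 2e-8, 5e-9, 1e-9]
v_cells = [300.0, 280.0, 250.0, 220.0, 200.0]
print(strike_probability(n_cells, v_cells, area=10.0, dt=1.0))
```

For the small flux integrals typical of debris fields, the probability is nearly equal to the expected number of strikes, but the exponential form keeps it bounded below 1 for dense fields.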
NASA Astrophysics Data System (ADS)
Wenmackers, S.; Vanpoucke, D. E. P.; Douven, I.
2012-01-01
We present a model for studying communities of epistemically interacting agents who update their belief states by averaging (in a specified way) the belief states of other agents in the community. The agents in our model have a rich belief state, involving multiple independent issues which are interrelated in such a way that they form a theory of the world. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating (in the given way). To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. It is shown that, under the assumptions of our model, an agent always has a probability of less than 2% of ending up in an inconsistent belief state. Moreover, this probability can be made arbitrarily small by increasing the number of independent issues the agents have to judge or by increasing the group size. A real-world situation to which this model applies is a group of experts participating in a Delphi-study.
Nixon, Zachary; Michel, Jacqueline
2015-04-07
To better understand the distribution of remaining lingering subsurface oil residues from the Exxon Valdez oil spill (EVOS) along the shorelines of Prince William Sound (PWS), AK, we revised previous modeling efforts to allow spatially explicit predictions of the distribution of subsurface oil. We used a set of pooled field data and predictor variables stored as Geographic Information Systems (GIS) data to generate calibrated boosted tree models predicting the encounter probability of different categories of subsurface oil. The models demonstrated excellent predictive performance as evaluated by cross-validated performance statistics. While the average encounter probabilities at most shoreline locations are low across western PWS, clusters of shoreline locations with elevated encounter probabilities remain in the northern parts of the PWS, as well as more isolated locations. These results can be applied to estimate the location and amount of remaining oil, evaluate potential ongoing impacts, and guide remediation. This is the first application of quantitative machine-learning based modeling techniques in estimating the likelihood of ongoing, long-term shoreline oil persistence after a major oil spill.
High-Latitude Filtering in a Global Grid-Point Model Using Model Normal Modes
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Navon, I. M.; Kalnay, E.
1985-01-01
The aim of high-latitude filtering in the vicinity of the poles is to avoid the excessively short time steps that fast-moving inertia-gravity waves near the poles would impose, through linear stability, on an explicit time-differencing scheme. A model normal-mode expansion was applied to the problem of high-latitude filtering in a global shallow-water model, following the same philosophy that Daley used for the problem of large time steps in primitive-equation models with explicit time-integration schemes.
Latent Partially Ordered Classification Models and Normal Mixtures
ERIC Educational Resources Information Center
Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith
2013-01-01
Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…
Modeling Non-Stationary Asymmetric Lens Blur by Normal Sinh-Arcsinh Model.
Jang, Jinhyeok; Yun, Joo Dong; Yang, Seungjoon
2016-05-01
Images acquired by a camera show lens blur due to imperfections in the optical system, even when images are properly focused. Lens blur is non-stationary in the sense that the amount of blur depends on pixel location in the sensor. Lens blur is also asymmetric in the sense that the amount of blur differs between the radial and tangential directions, and between the inward and outward radial directions. This paper presents parametric blur kernel models based on the normal sinh-arcsinh distribution function. The proposed models can provide flexible blur kernel shapes with different symmetry and skewness, accurately modeling the complicated lens blur due to optical aberration in properly focused images. Blur of single focal length lenses is estimated, and the accuracy of the models is compared with that of existing parametric blur models. An advantage of the proposed models is demonstrated through deblurring experiments.
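The sinh-arcsinh building block of the proposed kernels can be sketched in one dimension (the Jones-Pewsey form; the paper's 2-D, spatially varying kernel construction is not reproduced here): the parameter epsilon skews the profile, as between inward and outward radial blur, while delta controls tail weight.

```python
import math

def sinh_arcsinh_pdf(x, epsilon, delta):
    """Density of the sinh-arcsinh distribution (Jones-Pewsey form);
    epsilon = 0, delta = 1 recovers the standard normal."""
    s = math.sinh(delta * math.asinh(x) - epsilon)
    c = math.cosh(delta * math.asinh(x) - epsilon)
    return (delta * c / math.sqrt(2.0 * math.pi * (1.0 + x * x))
            * math.exp(-0.5 * s * s))

# A skewed 1-D blur profile, normalized over a finite support [-4, 4]:
xs = [-4.0 + 0.01 * i for i in range(801)]
profile = [sinh_arcsinh_pdf(x, epsilon=0.5, delta=1.0) for x in xs]
total = sum(profile) * 0.01
kernel = [p / total for p in profile]
print(len(kernel))
```

A discretized, renormalized profile like `kernel` plays the role of a blur kernel slice; fitting epsilon and delta per image region is what makes the model non-stationary and asymmetric.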