Statistical Validation of Normal Tissue Complication Probability Models
Xu Chengjian; Schaaf, Arjen van der; Veld, Aart A. van't; Langendijk, Johannes A.; Schilstra, Cornelis
2012-09-01
Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
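The permutation test recommended above can be sketched in a few lines: shuffle the outcome labels to destroy any real association with the model scores, then see how often the permuted performance matches or beats the observed one. The sketch below is illustrative only (names and the fixed-scores simplification are ours; in the paper the whole model-building pipeline is re-run inside the permutation loop), with AUC as the performance measure.

```python
import random

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def permutation_p_value(labels, scores, n_perm=2000, seed=0):
    """Permutation p-value: the fraction of label shufflings whose AUC
    matches or beats the observed AUC (+1 correction avoids p = 0)."""
    rng = random.Random(seed)
    observed = auc(labels, scores)
    shuffled = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if auc(shuffled, scores) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)
```

A small observed AUC advantage that survives this test is unlikely to be a chance artifact of the dataset at hand.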
Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields a model that is as easily interpretable as that of the stepwise selection method, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
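The LASSO idea for a logistic NTCP model can be sketched with proximal gradient descent (ISTA): take a gradient step on the logistic loss, then soft-threshold the weights so that weak predictors are driven exactly to zero. Everything below (variable names, step sizes, penalty strength) is our own minimal illustration, not the paper's implementation.

```python
import math

def lasso_logistic(X, y, lam=0.1, lr=0.05, steps=5000):
    """L1-penalised logistic regression by proximal gradient descent.
    X: list of feature rows, y: 0/1 outcomes. Returns (weights, intercept)."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(steps):
        grad_w, grad_b = [0.0] * p, 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi  # predicted prob - label
            grad_b += err / n
            for j in range(p):
                grad_w[j] += err * xi[j] / n
        b -= lr * grad_b  # the intercept is not penalised
        for j in range(p):
            wj = w[j] - lr * grad_w[j]
            # soft-thresholding: the L1 penalty zeroes out weak predictors
            w[j] = math.copysign(max(abs(wj) - lr * lam, 0.0), wj)
    return w, b
```

The soft-thresholding step is what produces the sparse, easily interpretable models the abstract credits LASSO with.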
Improving normal tissue complication probability models: the need to adopt a "data-pooling" culture.
Deasy, Joseph O; Bentzen, Søren M; Jackson, Andrew; Ten Haken, Randall K; Yorke, Ellen D; Constine, Louis S; Sharma, Ashish; Marks, Lawrence B
2010-03-01
Clinical studies of the dependence of normal tissue response on dose-volume factors are often confusingly inconsistent, as the QUANTEC reviews demonstrate. A key opportunity to accelerate progress is to begin storing high-quality datasets in repositories. Using available technology, multiple repositories could be conveniently queried, without divulging protected health information, to identify relevant sources of data for further analysis. After obtaining institutional approvals, data could then be pooled, greatly enhancing the capability to construct predictive models that are more widely applicable and better powered to accurately identify key predictive factors (whether dosimetric, image-based, clinical, socioeconomic, or biological). Data pooling has already been carried out effectively in a few normal tissue complication probability studies and should become a common strategy.
Normal probability plots with confidence.
Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang
2015-01-01
Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal-probability-plot-based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods.
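Simultaneous intervals of this kind can be approximated by simulation even without the paper's analytic construction: draw many standard-normal samples, and widen per-order-statistic quantile bands until a fraction 1-α of the simulated samples lies entirely inside. The sketch below is our own Monte-Carlo stand-in for the authors' intervals, not their method.

```python
import random

def simultaneous_envelope(n, alpha=0.05, n_sim=1000, seed=1):
    """Monte-Carlo band for the n ordered values of a standard-normal
    sample, calibrated for joint (not pointwise) coverage of about
    1 - alpha. Returns (lower, upper) lists of length n."""
    rng = random.Random(seed)
    sims = [sorted(rng.gauss(0, 1) for _ in range(n)) for _ in range(n_sim)]
    cols = [sorted(col) for col in zip(*sims)]  # draws of each order statistic
    best = None
    for k in range(n_sim // 2):  # k = extremes trimmed per side (narrower band)
        lo = [c[k] for c in cols]
        hi = [c[n_sim - 1 - k] for c in cols]
        joint = sum(all(l <= v <= h for l, v, h in zip(lo, s, hi))
                    for s in sims) / n_sim
        if joint < 1 - alpha:
            break  # band became too narrow; keep the last one that covered
        best = (lo, hi)
    return best
```

A sample whose ordered values escape this band anywhere is flagged as non-normal, mirroring the paper's "all points inside the intervals" criterion.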
Rose, Brent S.; Aydogan, Bulent; Liang, Yun; Yeginer, Mete; Hasselle, Michael D.; Dandekar, Virag; Bafana, Rounak; Yashar, Catheryn M.; Mundt, Arno J.; Roeske, John C.; Mell, Loren K.
2011-03-01
Purpose: To test the hypothesis that increased pelvic bone marrow (BM) irradiation is associated with increased hematologic toxicity (HT) in cervical cancer patients undergoing chemoradiotherapy and to develop a normal tissue complication probability (NTCP) model for HT. Methods and Materials: We tested associations between hematologic nadirs during chemoradiotherapy and the volume of BM receiving ≥10 and ≥20 Gy (V10 and V20) using a previously developed linear regression model. The validation cohort consisted of 44 cervical cancer patients treated with concurrent cisplatin and pelvic radiotherapy. Subsequently, these data were pooled with data from 37 identically treated patients from a previous study, forming a cohort of 81 patients for normal tissue complication probability analysis. Generalized linear modeling was used to test associations between hematologic nadirs and dosimetric parameters, adjusting for body mass index. Receiver operating characteristic curves were used to derive optimal dosimetric planning constraints. Results: In the validation cohort, significant negative correlations were observed between white blood cell count nadir and V10 (regression coefficient (β) = -0.060, p = 0.009) and V20 (β = -0.044, p = 0.010). In the combined cohort, the (adjusted) β estimates for log (white blood cell) vs. V10 and V20 were as follows: -0.022 (p = 0.025) and -0.021 (p = 0.002), respectively. Patients with V10 ≥ 95% were more likely to experience Grade ≥3 leukopenia (68.8% vs. 24.6%, p < 0.001), as were patients with V20 > 76% (57.7% vs. 21.8%, p = 0.001). Conclusions: These findings support the hypothesis that HT increases with increasing pelvic BM volume irradiated. Efforts to maintain V10 < 95% and V20 < 76% may reduce HT.
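The regression coefficients quoted above are ordinary least-squares slopes of the blood-count nadir on the dose-volume parameter. A minimal closed-form stand-in (toy numbers, not the study's data):

```python
def ols_fit(x, y):
    """Least-squares intercept and slope of y = a + b*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b  # intercept, slope
```

A negative fitted slope, as in the abstract's β estimates, means the nadir falls as the irradiated bone-marrow volume rises.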
Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto
2013-10-01
Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations with RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radiation-induced heart toxicity.
Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan
2013-02-01
Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue
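The LEUD variant mentioned above reduces the DVH to a generalized EUD and maps it through a probit curve. A compact sketch of both steps (the parameter values in the test are placeholders, not the fitted ones):

```python
import math

def geud(doses, volumes, n):
    """Generalized EUD of a differential DVH; n is the LKB volume parameter
    (n = 1 gives the mean dose, n -> 0 approaches the maximum dose)."""
    a = 1.0 / n
    total = sum(volumes)
    return sum(v / total * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

def lkb_ntcp(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP: probit of (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

With n = 1 the gEUD collapses to the mean organ dose, which is why the mean dose model ranked best here is itself a special case of this family.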
Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.
2012-03-01
Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefited significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. Of all end points, the rectal bleeding fits had the highest AUC (0.77), compared with 0.63 for high stool frequency and 0.68 for fecal incontinence. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions: Comparable
Chang, Liyun; Ting, Hui-Min; Huang, Yu-Jie
2015-01-01
Purpose Symptomatic radiation pneumonitis (SRP), which decreases quality of life (QoL), is the most common pulmonary complication in patients receiving breast irradiation. If it occurs, acute SRP usually develops 4–12 weeks after completion of radiotherapy and presents as a dry cough, dyspnea and low-grade fever. If the incidence of SRP is reduced, not only the QoL but also the compliance of breast cancer patients may be improved. Therefore, we investigated the incidence of SRP in breast cancer patients after hybrid intensity modulated radiotherapy (IMRT) to find the risk factors, which may have important effects on the risk of radiation-induced complications. Methods In total, 93 patients with breast cancer were evaluated. The final endpoint for acute SRP was defined as those who had density changes together with symptoms, as measured using computed tomography. The risk factors for a multivariate normal tissue complication probability model of SRP were determined using the least absolute shrinkage and selection operator (LASSO) technique. Results Five risk factors were selected using LASSO: the percentage of the ipsilateral lung volume that received more than 20 Gy (IV20), energy, age, body mass index (BMI) and T stage. Positive associations were demonstrated among the incidence of SRP, IV20, and patient age. Energy, BMI and T stage showed a negative association with the incidence of SRP. Our analyses indicate that the risk of SRP following hybrid IMRT in elderly or low-BMI breast cancer patients is increased once the percentage of the ipsilateral lung volume receiving more than 20 Gy exceeds a given limit. Conclusions We suggest defining a dose-volume percentage constraint of IV20 < 37% (or AIV20 < 310 cc) for the irradiated ipsilateral lung in radiation therapy treatment planning to maintain the incidence of SRP below 20%, and paying attention to the sequelae especially in elderly or low-BMI breast cancer patients. (AIV20: the absolute ipsilateral lung volume receiving more than 20 Gy)
Abstract Models of Probability
NASA Astrophysics Data System (ADS)
Maximov, V. M.
2001-12-01
Probability theory presents a mathematical formalization of intuitive ideas of independent events and of probability as a measure of randomness. It is based on axioms 1-5 of A.N. Kolmogorov [1] and their generalizations [2]. Different formalized refinements were proposed for such notions as events, independence, random value, etc. [2,3], whereas the measure of randomness, i.e. numbers from [0,1], remained unchanged. To be precise, we mention some attempts at generalization of probability theory with negative probabilities [4]. On the other hand, physicists tried to use negative and even complex values of probability to explain some paradoxes in quantum mechanics [5,6,7]. Only recently, the necessity of formalization of quantum mechanics and its foundations [8] led to the construction of p-adic probabilities [9,10,11], which essentially extended our concept of probability and randomness. Therefore, a natural question arises of how to describe algebraic structures whose elements can be used as a measure of randomness. As a consequence, a necessity arises to define the types of randomness corresponding to every such algebraic structure. Possibly, this leads to another concept of randomness whose nature differs from the combinatorial-metric conception of Kolmogorov. Apparently, a discrepancy between the real type of randomness underlying some experimental data and the model of randomness used for data processing leads to paradoxes [12]. An algebraic structure whose elements can be used to estimate some randomness will be called a probability set Φ. Naturally, the elements of Φ are the probabilities.
Xu ZhiYong; Liang Shixiong; Zhu Ji; Zhu Xiaodong; Zhao Jiandong; Lu Haijie; Yang Yunli; Chen Long; Wang Anyu; Fu Xiaolong; Jiang Guoliang
2006-05-01
Purpose: To describe the probability of radiation-induced liver disease (RILD) by application of the Lyman-Kutcher-Burman normal tissue complication probability (NTCP) model for primary liver carcinoma (PLC) treated with hypofractionated three-dimensional conformal radiotherapy (3D-CRT). Methods and Materials: A total of 109 PLC patients treated by 3D-CRT were followed for RILD. Of these patients, 93 had liver cirrhosis of Child-Pugh Grade A, and 16 of Child-Pugh Grade B. The Michigan NTCP model was used to predict the probability of RILD, and then a modified Lyman NTCP model was generated for Child-Pugh A and Child-Pugh B patients by maximum-likelihood analysis. Results: Of all patients, 17 developed RILD, of whom 8 were of Child-Pugh Grade A and 9 of Child-Pugh Grade B. The Michigan model underestimated the probability of RILD for PLC patients. The modified n, m, and TD50(1) were 1.1, 0.28, and 40.5 Gy for Child-Pugh A patients and 0.7, 0.43, and 23 Gy for Child-Pugh B patients, which yielded better estimates of RILD probability. The hepatic tolerance doses (TD5) would correspond to a mean dose to normal liver (MDTNL) of 21 Gy and 6 Gy, respectively, for Child-Pugh A and B patients. Conclusions: The Michigan model was probably not suited to predicting RILD in PLC patients. A modified Lyman NTCP model for RILD is recommended.
2012-01-01
Background With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman–Kutcher–Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Methods Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson’s chi-squared test, Nagelkerke’s R2, the area under the receiver operating characteristic curve, and the Hosmer–Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson’s chi-squared test was used to test the goodness of fit and association. Results Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in 50% probability of a complication (TD50) and the slope of the dose–response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with a NPV of 100%, using either the QoL or SEF dataset. Conclusions Our study shows the agreement between the NTCP
Robertson, John M.; Soehn, Matthias; Yan Di
2010-05-01
Purpose: Understanding the dose-volume relationship of small bowel irradiation and severe acute diarrhea may help reduce the incidence of this side effect during adjuvant treatment for rectal cancer. Methods and Materials: Consecutive patients treated curatively for rectal cancer were reviewed, and the maximum grade of acute diarrhea was determined. The small bowel was outlined on the treatment planning CT scan, and a dose-volume histogram was calculated for the initial pelvic treatment (45 Gy). Logistic regression models were fitted for varying cutoff-dose levels from 5 to 45 Gy in 5-Gy increments. The model with the highest log-likelihood was used to develop a cutoff-dose normal tissue complication probability (NTCP) model. Results: There were a total of 152 patients (48% preoperative, 47% postoperative, 5% other), predominantly treated prone (95%) with a three-field technique (94%) and a protracted venous infusion of 5-fluorouracil (78%). Acute Grade 3 diarrhea occurred in 21%. The largest log-likelihood was found for the cutoff-dose logistic regression model with 15 Gy as the cutoff dose, although the models for 20 Gy and 25 Gy had similar significance. According to this model, highly significant correlations (p <0.001) between small bowel volumes receiving at least 15 Gy and toxicity exist in the considered patient population. Similar findings applied to both the preoperatively (p = 0.001) and postoperatively irradiated groups (p = 0.001). Conclusion: The incidence of Grade 3 diarrhea was significantly correlated with the volume of small bowel receiving at least 15 Gy using a cutoff-dose NTCP model.
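The cutoff-dose selection described here, fit one logistic model per candidate cutoff and keep the one with the highest log-likelihood, is straightforward to sketch. The code below is a simplified illustration with our own names and a plain gradient-ascent fitter, not the study's statistics software.

```python
import math

def fit_logistic_1d(x, y, lr=0.5, steps=2000):
    """Gradient ascent on the log-likelihood of P(y=1|x) = sigmoid(a + b*x).
    Returns (intercept, slope, maximised log-likelihood)."""
    a = b = 0.0
    n = len(x)
    for _ in range(steps):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (yi - p) / n
            gb += (yi - p) * xi / n
        a += lr * ga
        b += lr * gb
    ll = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard the log at the extremes
        ll += yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
    return a, b, ll

def best_cutoff_dose(dvhs, toxicity, cutoffs):
    """dvhs: per-patient dicts {cutoff dose: fractional volume receiving at
    least that dose}. Return the cutoff with the largest log-likelihood."""
    scores = {}
    for c in cutoffs:
        _, _, ll = fit_logistic_1d([dvh[c] for dvh in dvhs], toxicity)
        scores[c] = ll
    return max(scores, key=scores.get)
```

A cutoff whose V-dose actually predicts toxicity yields a much higher log-likelihood than one whose volumes are unrelated to outcome.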
Bazan, Jose G.; Luxton, Gary; Kozak, Margaret M.; Anderson, Eric M.; Hancock, Steven L.; Kapp, Daniel S.; Kidd, Elizabeth A.; Koong, Albert C.; Chang, Daniel T.
2013-12-01
Purpose: To determine how chemotherapy agents affect radiation dose parameters that correlate with acute hematologic toxicity (HT) in patients treated with pelvic intensity modulated radiation therapy (P-IMRT) and concurrent chemotherapy. Methods and Materials: We assessed HT in 141 patients who received P-IMRT for anal, gynecologic, rectal, or prostate cancers, 95 of whom received concurrent chemotherapy. Patients were separated into 4 groups: mitomycin (MMC) + 5-fluorouracil (5FU, 37 of 141), platinum ± 5FU (Cis, 32 of 141), 5FU (26 of 141), and P-IMRT alone (46 of 141). The pelvic bone was contoured as a surrogate for pelvic bone marrow (PBM) and divided into subsites: ilium, lower pelvis, and lumbosacral spine (LSS). The volumes of each region receiving 5-40 Gy were calculated. The endpoint for HT was grade ≥3 (HT3+) leukopenia, neutropenia or thrombocytopenia. Normal tissue complication probability was calculated using the Lyman-Kutcher-Burman model. Logistic regression was used to analyze association between HT3+ and dosimetric parameters. Results: Twenty-six patients experienced HT3+: 10 of 37 (27%) MMC, 14 of 32 (44%) Cis, 2 of 26 (8%) 5FU, and 0 of 46 P-IMRT. PBM dosimetric parameters were correlated with HT3+ in the MMC group but not in the Cis group. LSS dosimetric parameters were well correlated with HT3+ in both the MMC and Cis groups. Constrained optimization (0
Bazan, Jose G.; Luxton, Gary; Mok, Edward C.; Koong, Albert C.; Chang, Daniel T.
2012-11-01
Purpose: To identify dosimetric parameters that correlate with acute hematologic toxicity (HT) in patients with squamous cell carcinoma of the anal canal treated with definitive chemoradiotherapy (CRT). Methods and Materials: We analyzed 33 patients receiving CRT. Pelvic bone (PBM) was contoured for each patient and divided into subsites: ilium, lower pelvis (LP), and lumbosacral spine (LSS). The volume of each region receiving at least 5, 10, 15, 20, 30, and 40 Gy was calculated. Endpoints included grade ≥3 HT (HT3+) and hematologic event (HE), defined as any grade ≥2 HT with a modification in chemotherapy dose. Normal tissue complication probability (NTCP) was evaluated with the Lyman-Kutcher-Burman (LKB) model. Logistic regression was used to test associations between HT and dosimetric/clinical parameters. Results: Nine patients experienced HT3+ and 15 patients experienced HE. Constrained optimization of the LKB model for HT3+ yielded the parameters m = 0.175, n = 1, and TD50 = 32 Gy. With this model, mean PBM doses of 25 Gy, 27.5 Gy, and 31 Gy result in a 10%, 20%, and 40% risk of HT3+, respectively. Compared with patients with mean PBM dose of <30 Gy, patients with mean PBM dose ≥30 Gy had a 14-fold increase in the odds of developing HT3+ (p = 0.005). Several low-dose radiation parameters (i.e., PBM-V10) were associated with the development of HT3+ and HE. No association was found with the ilium, LP, or clinical factors. Conclusions: LKB modeling confirms the expectation that PBM acts like a parallel organ, implying that the mean dose to the organ is a useful predictor for toxicity. Low-dose radiation to the PBM was also associated with clinically significant HT. Keeping the mean PBM dose <22.5 Gy and <25 Gy is associated with a 5% and 10% risk of HT, respectively.
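Because the fitted volume parameter is n = 1, the gEUD reduces to the mean PBM dose, and the quoted risk figures can be checked directly from the LKB probit formula (our own sketch of the standard formula, using the abstract's fitted parameters):

```python
import math

def lkb_mean_dose_ntcp(mean_dose, td50=32.0, m=0.175):
    """LKB NTCP with n = 1: standard-normal CDF of (D_mean - TD50)/(m*TD50).
    Defaults are the fitted HT3+ parameters quoted in the abstract."""
    t = (mean_dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

Evaluating at mean PBM doses of 25, 27.5, and 31 Gy gives roughly 11%, 21%, and 43%, in line with the abstract's quoted 10%, 20%, and 40%.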
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data.
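To see why classification error has to be modelled rather than ignored, a toy two-category simulation helps: each sampling unit classifies correctly with its own logit-normal probability, and the naive observed frequency of a category is pulled away from the true proportion. All names and parameter values below are ours, an illustration of the misclassification effect rather than the paper's model.

```python
import math, random

def observed_frequency(true_pi, mu, sigma, n_units=200, n_per_unit=100, seed=7):
    """Two categories; each unit's correct-classification probability is
    sigmoid(N(mu, sigma)). Returns the naive observed frequency of
    category 1, which is biased away from true_pi."""
    rng = random.Random(seed)
    hits = trials = 0
    for _ in range(n_units):
        theta = 1.0 / (1.0 + math.exp(-rng.gauss(mu, sigma)))  # P(correct)
        for _ in range(n_per_unit):
            truth = rng.random() < true_pi
            correct = rng.random() < theta
            observed = truth if correct else not truth  # flip on error
            hits += observed
            trials += 1
    return hits / trials
```

With a true proportion of 0.7 and an average correct-classification probability near 0.8, the expected observed frequency is about 0.7·0.8 + 0.3·0.2 ≈ 0.62, clearly short of 0.7, which is exactly the kind of bias a hierarchical model must correct.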
NASA Astrophysics Data System (ADS)
Trojková, Darina; Judas, Libor; Trojek, Tomáš
2014-11-01
Minimizing the late rectal toxicity of prostate cancer patients is a very important and widely-discussed topic. Normal tissue complication probability (NTCP) models can be used to evaluate competing treatment plans. In our work, the parameters of the Lyman-Kutcher-Burman (LKB), Källman, and Logit+EUD models are optimized by minimizing the Brier score for a group of 302 prostate cancer patients. The NTCP values are calculated and compared with the values obtained using previously published values for the parameters. χ² statistics were calculated as a check of the goodness of the optimization.
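The Brier score used for the optimization is simply the mean squared difference between predicted NTCP and the observed binary outcome. The sketch below pairs it with a small grid search over a probit dose response; the grids and the grid-search fitter are our own simplified stand-in for the paper's optimizer.

```python
import math

def brier_score(probs, outcomes):
    """Mean squared difference between predicted probability and the 0/1
    outcome; lower is better, 0 is a perfect deterministic prediction."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def probit_ntcp(dose, td50, m):
    """Probit (LKB-style) dose response on a scalar dose summary."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def grid_fit(doses, outcomes, td50_grid, m_grid):
    """Choose (TD50, m) minimising the Brier score of the probit model."""
    return min(((t, mm) for t in td50_grid for mm in m_grid),
               key=lambda p: brier_score(
                   [probit_ntcp(d, *p) for d in doses], outcomes))
```

Unlike the likelihood, the Brier score directly rewards calibrated probabilities, which is why it is attractive for comparing competing NTCP parameter sets.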
Transition probabilities of normal states determine the Jordan structure of a quantum system
NASA Astrophysics Data System (ADS)
Leung, Chi-Wai; Ng, Chi-Keung; Wong, Ngai-Ching
2016-01-01
Let Φ : 𝔖(M1) → 𝔖(M2) be a bijection (not assumed affine nor continuous) between the sets of normal states of two quantum systems, modelled on the self-adjoint parts of von Neumann algebras M1 and M2, respectively. This paper concerns the situation when Φ preserves (or partially preserves) one of the following three notions of "transition probability" on the normal state spaces: the transition probability PU introduced by Uhlmann [Rep. Math. Phys. 9, 273-279 (1976)], the transition probability PR introduced by Raggio [Lett. Math. Phys. 6, 233-236 (1982)], and an "asymmetric transition probability" P0 (as introduced in this article). It is shown that the two systems are isomorphic, i.e., M1 and M2 are Jordan ∗-isomorphic, if Φ preserves all pairs with zero Uhlmann (respectively, Raggio or asymmetric) transition probability, in the sense that for any normal states μ and ν, we have P(Φ(μ), Φ(ν)) = 0 if and only if P(μ, ν) = 0, where P stands for PU (respectively, PR or P0). Furthermore, as an extension of Wigner's theorem, it is shown that there is a Jordan ∗-isomorphism Θ : M2 → M1 satisfying Φ = Θ∗|𝔖(M1) if and only if Φ preserves the "asymmetric transition probability." This is also equivalent to Φ preserving the Raggio transition probability. Consequently, if Φ preserves the Raggio transition probability, it will preserve the Uhlmann transition probability as well. As another application, the sets of normal states equipped with either the usual metric, the Bures metric or "the metric induced by the self-dual cone," are complete Jordan ∗-invariants for the underlying von Neumann algebras.
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-01-01
Background and Purpose Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Material and Methods Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. Results The dose-volume-based models (standard) performed as well as those incorporating spatial information. Discrimination was similar between models, but the RFCstandard model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. Conclusions The RFCstandard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence.
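The calibration slope reported here is obtained by refitting a logistic model with the predicted log-odds as the only covariate: a fitted slope of 1 means well-calibrated predictions, below 1 means over-extreme (overfitted) predictions, and above 1 (as the 3.9 here) means the predicted risks are too tightly clustered. A gradient-ascent sketch (our own minimal fitter, not the authors' code):

```python
import math

def calibration_slope(pred_probs, outcomes, lr=0.5, steps=5000):
    """Fit outcome ~ sigmoid(a + b * logit(p)) by gradient ascent on the
    log-likelihood; return the calibration slope b."""
    x = [math.log(p / (1.0 - p)) for p in pred_probs]  # predicted log-odds
    a = b = 0.0
    n = len(x)
    for _ in range(steps):
        ga = gb = 0.0
        for xi, yi in zip(x, outcomes):
            q = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (yi - q) / n
            gb += (yi - q) * xi / n
        a += lr * ga
        b += lr * gb
    return b
```

On a toy set where predicted risks of 0.2 and 0.8 match the observed event rates exactly, the slope comes out near 1.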
DISJUNCTIVE NORMAL SHAPE MODELS
Ramesh, Nisha; Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga
2016-01-01
A novel implicit parametric shape model is proposed for segmentation and analysis of medical images. Functions representing the shape of an object can be approximated as a union of N polytopes. Each polytope is obtained by the intersection of M half-spaces. The shape function can be approximated as a disjunction of conjunctions, using the disjunctive normal form. The shape model is initialized using seed points defined by the user. We define a cost function based on the Chan-Vese energy functional. The model is differentiable, hence, gradient based optimization algorithms are used to find the model parameters. PMID:27403233
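The disjunctive-normal construction described above can be written down directly: each half-space indicator is relaxed to a sigmoid, a conjunction of M half-spaces becomes a product, and the disjunction of N polytopes becomes one minus the product of complements. A minimal sketch (the sharpness constant and the example square are illustrative assumptions, not taken from the paper):

```python
import math

def sigmoid(t, sharpness=10.0):
    """Smooth relaxation of the half-space indicator 1{t >= 0}."""
    return 1.0 / (1.0 + math.exp(-sharpness * t))

def dns_shape(x, polytopes):
    """Disjunctive-normal shape function: union (disjunction) of polytopes,
    each the intersection (conjunction) of half-spaces.
    polytopes: list of polytopes; each polytope is a list of (w, b) pairs
    describing the half-space w . x + b >= 0."""
    outside_all = 1.0
    for halfspaces in polytopes:
        inside = 1.0
        for w, b in halfspaces:
            inside *= sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        outside_all *= 1.0 - inside
    return 1.0 - outside_all  # ~1 inside the union, ~0 outside
```

Because the relaxation is differentiable in the (w, b) parameters, gradient-based optimization of a Chan-Vese-style cost is possible, as the abstract notes.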
Soehn, Matthias . E-mail: Matthias.Soehn@med.uni-tuebingen.de; Yan Di; Liang Jian; Meldolesi, Elisa; Vargas, Carlos; Alber, Markus
2007-03-15
Purpose: Accurate modeling of rectal complications based on dose-volume histogram (DVH) data is necessary to allow safe dose escalation in radiotherapy of prostate cancer. We applied different equivalent uniform dose (EUD)-based and dose-volume-based normal tissue complication probability (NTCP) models to rectal wall DVHs and follow-up data for 319 prostate cancer patients to identify the dosimetric factors most predictive for Grade ≥2 rectal bleeding. Methods and Materials: Data for 319 patients treated at the William Beaumont Hospital with three-dimensional conformal radiotherapy (3D-CRT) under an adaptive radiotherapy protocol were used for this study. The following models were considered: (1) Lyman model and (2) logit formula with DVH reduced to generalized EUD, (3) serial reconstruction unit (RU) model, (4) Poisson-EUD model, and (5) mean dose- and (6) cutoff dose-logistic regression models. The parameters and their confidence intervals were determined using maximum likelihood estimation. Results: Of the patients, 51 (16.0%) showed Grade 2 or higher bleeding. As assessed qualitatively and quantitatively, the Lyman- and logit-EUD, serial RU, and Poisson-EUD models fitted the data very well. Rectal wall mean dose did not correlate with Grade 2 or higher bleeding. For the cutoff dose model, the volume receiving >73.7 Gy showed the most significant correlation with bleeding. However, this model fitted the data more poorly than the EUD-based models. Conclusions: Our study clearly confirms a volume effect for late rectal bleeding. This can be described very well by the EUD-like models, of which the serial RU and Poisson-EUD models can describe the data with only two parameters. Dose-volume-based cutoff-dose models performed worse.
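For reference, the EUD-based models in this family first reduce the DVH to a generalized EUD and then map it to a complication probability; the Lyman (probit) variant can be sketched as follows (the parameter values in the test are illustrative, not the fitted values from this study):

```python
import math

def geud(doses, volumes, n):
    """Generalized EUD for a differential DVH:
    gEUD = (sum_i v_i * D_i**(1/n))**n, with fractional volumes summing to 1."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lyman_ntcp(doses, volumes, td50, m, n):
    """Lyman NTCP with the DVH reduced to gEUD:
    NTCP = Phi(t), with t = (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

By construction, a uniform dose equal to TD50 yields NTCP = 0.5, which is what the TD50 parameter means.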
Calibrating Subjective Probabilities Using Hierarchical Bayesian Models
NASA Astrophysics Data System (ADS)
Merkle, Edgar C.
A body of psychological research has examined the correspondence between a judge's subjective probability of an event's outcome and the event's actual outcome. The research generally shows that subjective probabilities are noisy and do not match the "true" probabilities. However, subjective probabilities are still useful for forecasting purposes if they bear some relationship to true probabilities. The purpose of the current research is to exploit relationships between subjective probabilities and outcomes to create improved, model-based probabilities for forecasting. Once the model has been trained in situations where the outcome is known, it can then be used in forecasting situations where the outcome is unknown. These concepts are demonstrated using experimental psychology data, and potential applications are discussed.
Integrated statistical modelling of spatial landslide probability
NASA Astrophysics Data System (ADS)
Mergili, M.; Chu, H.-J.
2015-09-01
Statistical methods are commonly employed to estimate spatial probabilities of landslide release at the catchment or regional scale. Travel distances and impact areas are often computed by means of conceptual mass point models. The present work introduces a fully automated procedure extending and combining both concepts to compute an integrated spatial landslide probability: (i) the landslide inventory is subset into release and deposition zones. (ii) We employ a simple statistical approach to estimate the pixel-based landslide release probability. (iii) We use the cumulative probability density function of the angle of reach of the observed landslide pixels to assign an impact probability to each pixel. (iv) We introduce the zonal probability, i.e., the spatial probability that at least one landslide pixel occurs within a zone of defined size. We quantify this relationship by a set of empirical curves. (v) The integrated spatial landslide probability is defined as the maximum of the release probability and the product of the impact probability and the zonal release probability relevant for each pixel. We demonstrate the approach with a 637 km² study area in southern Taiwan, using an inventory of 1399 landslides triggered by Typhoon Morakot in 2009. We observe that (i) the average integrated spatial landslide probability over the entire study area corresponds reasonably well to the fraction of the observed landslide area; (ii) the model performs moderately well in predicting the observed spatial landslide distribution; (iii) the size of the release zone (or any other zone of spatial aggregation) influences the integrated spatial landslide probability to a much higher degree than the pixel-based release probability; (iv) removing the largest landslides from the analysis leads to an enhanced model performance.
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
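The zero-inflated binomial mixture likelihood described here is straightforward to evaluate once the mixing distribution for p is discretized; a minimal sketch, in which a finite two-point mixture stands in for the paper's mixture classes:

```python
import math
from math import comb

def binom_pmf(y, J, p):
    """Binomial probability of y detections in J visits."""
    return comb(J, y) * p ** y * (1 - p) ** (J - y)

def zib_mixture_loglik(counts, J, psi, mix):
    """Log-likelihood of site detection counts under a zero-inflated
    binomial mixture. counts: detections y_i out of J visits per site;
    psi: occupancy probability; mix: list of (weight, p) pairs giving a
    finite mixing distribution on detection probability p."""
    ll = 0.0
    for y in counts:
        marg = sum(w * binom_pmf(y, J, p) for w, p in mix)  # integrate p out
        lik = psi * marg + (1.0 - psi) * (1.0 if y == 0 else 0.0)
        ll += math.log(lik)
    return ll
```

The zero-inflation term reflects that a site with no detections may be either unoccupied (probability 1 - psi) or occupied but never detected.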
Computational Modelling and Simulation Fostering New Approaches in Learning Probability
ERIC Educational Resources Information Center
Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid
2006-01-01
Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the n × n correlation matrix of the χ_i and the standardized multivariate normal density…
A Quantum Probability Model of Causal Reasoning
Trueblood, Jennifer S.; Busemeyer, Jerome R.
2012-01-01
People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects thus proving to be a viable new candidate for modeling human judgment. PMID:22593747
Tai An; Erickson, Beth; Li, X. Allen
2009-05-01
Purpose: The ability to predict normal tissue complication probability (NTCP) is essential for NTCP-based treatment planning. The purpose of this work is to estimate the Lyman NTCP model parameters for liver irradiation from published clinical data of different fractionation regimens. A new expression of normalized total dose (NTD) is proposed to convert NTCP data between different treatment schemes. Method and Materials: The NTCP data of radiation-induced liver disease (RILD) from external beam radiation therapy for primary liver cancer patients were selected for analysis. The data were collected from 4 institutions for tumor sizes in the range of 8-10 cm. The dose per fraction ranged from 1.5 Gy to 6 Gy. A modified linear-quadratic model with two components corresponding to radiosensitive and radioresistant cells in the normal liver tissue was proposed to understand the new NTD formalism. Results: There are five parameters in the model: TD50, m, n, α/β and f. With the two parameters n and α/β fixed to 1.0 and 2.0 Gy, respectively, the parameters extracted from the fitting are TD50(1) = 40.3 ± 8.4 Gy, m = 0.36 ± 0.09, f = 0.156 ± 0.074 Gy and TD50(1) = 23.9 ± 5.3 Gy, m = 0.41 ± 0.15, f = 0.0 ± 0.04 Gy for patients with liver cirrhosis scores of Child-Pugh A and Child-Pugh B, respectively. The fitting results showed that the liver cirrhosis score significantly affects the fractional dose dependence of NTD. Conclusion: The Lyman parameters generated presently and the new form of NTD may be used to predict NTCP for treatment planning of innovative liver irradiation with different fractionations, such as hypofractionated stereotactic body radiation therapy.
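For context, the conventional LQ-based normalized total dose, which the paper's two-component model extends, converts a schedule delivered at dose per fraction d into an isoeffective dose in reference-size fractions. This sketch implements only the standard single-component formula, not the modified expression proposed in the paper:

```python
def ntd(total_dose, dose_per_fraction, alpha_beta=2.0, ref_dose_per_fraction=2.0):
    """Conventional LQ-based normalized total dose: the dose delivered in
    ref_dose_per_fraction fractions that is isoeffective with total_dose
    delivered at dose_per_fraction, for tissue with the given alpha/beta."""
    return total_dose * (alpha_beta + dose_per_fraction) / (alpha_beta + ref_dose_per_fraction)
```

For example, with α/β = 2 Gy, 45 Gy in 3-Gy fractions is isoeffective with 56.25 Gy in 2-Gy fractions; at d equal to the reference fraction size the NTD reduces to the physical dose.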
PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties
Caron, D. S.; Browne, E.; Norman, E. B.
2009-08-21
The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
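A generic version of this normalization, with first-order propagation of independent uncertainties through P_i = r_i / Σ r_j, can be sketched as follows (PABS's actual "realistic uncertainty" treatment may differ; this is only the textbook propagation):

```python
import math

def normalize_emission_probabilities(relative, sigmas):
    """Normalize relative emission probabilities r_i (with uncertainties
    s_i) to an absolute scale P_i = r_i / sum(r), propagating the
    uncertainties by first-order error propagation, assuming independence."""
    total = sum(relative)
    probs = [r / total for r in relative]
    errs = []
    for i, r_i in enumerate(relative):
        var = 0.0
        for j, s_j in enumerate(sigmas):
            # dP_i/dr_j = (total - r_i)/total**2 if i == j, else -r_i/total**2
            d = ((total - r_i) if i == j else -r_i) / total ** 2
            var += (d * s_j) ** 2
        errs.append(math.sqrt(var))
    return probs, errs
```

Note that the normalized probabilities are correlated through the shared sum, which is why each partial derivative involves every r_j.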
Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
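The bivariate-normal conditional probabilities that the web application illustrates follow from the closed-form conditional law; a minimal sketch (the height/weight numbers in the test are invented for illustration and are not taken from the adolescent dataset):

```python
from statistics import NormalDist

def conditional_interval_prob(lo, hi, x, mx, my, sx, sy, rho):
    """P(lo < Y < hi | X = x) for (X, Y) bivariate normal with means mx, my,
    standard deviations sx, sy, and correlation rho. The conditional law is
    Y | X = x  ~  N(my + rho*(sy/sx)*(x - mx), sy*sqrt(1 - rho**2))."""
    cond = NormalDist(my + rho * sy / sx * (x - mx),
                      sy * (1.0 - rho ** 2) ** 0.5)
    return cond.cdf(hi) - cond.cdf(lo)
```

With rho = 0 the function reduces to an unconditional normal interval probability, a useful sanity check.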
Wang, Gufeng; Platz, Charles P; Geng, M Lei
2006-05-01
Differential normalized fluorescence (DNF) is an efficient and effective method for the differentiation of normal and cancerous tissue fluorescence spectra. The diagnostic features are extracted from the difference between the averaged cancerous and averaged normal tissue spectra and used as indices in tissue classification. In this paper, a new method, probability-based DNF bivariate analysis, is introduced based on the univariate DNF method. Two differentiation features are used concurrently in the new method to achieve better classification accuracy. The probability of each sample belonging to a disease state is determined with Bayes decision theory. This probability approach classifies the tissue spectra according to disease states and provides uncertainty information on classification. With a data set of 57 colonic tissue sites, probability-based DNF bivariate analysis is demonstrated to improve the accuracy of cancer diagnosis. The bivariate DNF analysis only requires the collection of a few data points across the entire emission spectrum and has the potential of improving data acquisition speed in tissue imaging.
Model estimates hurricane wind speed probabilities
NASA Astrophysics Data System (ADS)
Murnane, Richard J.; Barton, Chris; Collins, Eric; Donnelly, Jeffrey; Elsner, James; Emanuel, Kerry; Ginis, Isaac; Howard, Susan; Landsea, Chris; Liu, Kam-biu; Malmquist, David; McKay, Megan; Michaels, Anthony; Nelson, Norm; O'Brien, James; Scott, David; Webb, Thompson, III
In the United States, intense hurricanes (category 3, 4, and 5 on the Saffir/Simpson scale) with winds greater than 50 m s⁻¹ have caused more damage than any other natural disaster [Pielke and Pielke, 1997]. Accurate estimates of wind speed exceedance probabilities (WSEP) due to intense hurricanes are therefore of great interest to (re)insurers, emergency planners, government officials, and populations in vulnerable coastal areas. The historical record of U.S. hurricane landfall is relatively complete only from about 1900, and most model estimates of WSEP are derived from this record. During the 1899-1998 period, only two category-5 and 16 category-4 hurricanes made landfall in the United States. The historical record therefore provides only a limited sample of the most intense hurricanes.
Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L.
2014-01-01
Abstract Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls (Ambystoma mexicanum) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary‐housed males and group‐housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury probably explains these limb defects, and not abnormal development, because limbs with normal anatomy regenerated after performing rostral amputations. We infer that only 43% of AGSC larvae will present four anatomically normal looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury. PMID:25745564
Probability-summation model of multiple laser-exposure effects.
Menendez, A R; Cheney, F E; Zuclich, J A; Crump, P
1993-11-01
A probability-summation model is introduced to provide quantitative criteria for discriminating independent from interactive effects of multiple laser exposures on biological tissue. Data that differ statistically from predictions of the probability-summation model indicate the action of sensitizing (synergistic/positive) or desensitizing (hardening/negative) biophysical interactions. Interactions are indicated when response probabilities vary with changes in the spatial or temporal separation of exposures. In the absence of interactions, probability-summation parsimoniously accounts for "cumulative" effects. Data analyzed using the probability-summation model show instances of both sensitization and desensitization of retinal tissue by laser exposures. Other results are shown to be consistent with probability-summation. The relevance of the probability-summation model to previous laser-bioeffects studies, models, and safety standards is discussed and an appeal is made for improved empirical estimates of response probabilities for single exposures.
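The probability-summation prediction for multiple independent exposures is simply one minus the product of the single-exposure complement probabilities; a minimal sketch:

```python
def probability_summation(ps):
    """Predicted response probability for multiple independent exposures:
    P = 1 - prod(1 - p_i). Observed probabilities above this prediction
    indicate sensitizing (synergistic) interactions; below it,
    desensitizing (hardening) interactions."""
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q
```

For example, two independent exposures each with response probability 0.5 predict a combined probability of 0.75; any reliable departure from that value signals an interaction.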
Probability of Future Observations Exceeding One-Sided, Normal, Upper Tolerance Limits
Edwards, Timothy S.
2014-10-29
Normal tolerance limits are frequently used in dynamic environments specifications of aerospace systems as a method to account for aleatory variability in the environments. Upper tolerance limits, when used in this way, are computed from records of the environment and used to enforce conservatism in the specification by describing upper extreme values the environment may take in the future. Components and systems are designed to withstand these extreme loads to ensure they do not fail under normal use conditions. The degree of conservatism in the upper tolerance limits is controlled by specifying the coverage and confidence level (usually written in “coverage/confidence” form). Moreover, in high-consequence systems it is common to specify tolerance limits at 95% or 99% coverage and confidence at the 50% or 90% level. Despite the ubiquity of upper tolerance limits in the aerospace community, analysts and decision-makers frequently misinterpret their meaning. The misinterpretation extends into the standards that govern much of the acceptance and qualification of commercial and government aerospace systems. As a result, the risk of a future observation of the environment exceeding the upper tolerance limit is sometimes significantly underestimated by decision makers. This note explains the meaning of upper tolerance limits and a related measure, the upper prediction limit. So, the objective of this work is to clarify the probability of exceeding these limits in flight so that decision-makers can better understand the risk associated with exceeding design and test levels during flight and balance the cost of design and development with that of mission failure.
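The coverage/confidence construction can be made concrete: the upper tolerance limit is x̄ + k·s, with k chosen so that, with the stated confidence, at least the stated fraction of the population lies below the limit. A sketch using a common closed-form approximation to k (the exact factor requires the noncentral t distribution), together with a Monte Carlo estimate of the chance that a single future observation exceeds the realized limit:

```python
import random
from statistics import NormalDist

def k_factor(n, coverage, confidence):
    """Approximate one-sided normal tolerance factor k (closed-form
    approximation; the exact value requires the noncentral t distribution).
    At 50% confidence this reduces to the coverage quantile z_p."""
    zp = NormalDist().inv_cdf(coverage)
    zc = NormalDist().inv_cdf(confidence)
    a = 1.0 - zc ** 2 / (2.0 * (n - 1))
    b = zp ** 2 - zc ** 2 / n
    return (zp + (zp ** 2 - a * b) ** 0.5) / a

def exceedance_probability(n, coverage, confidence, trials=20000, seed=0):
    """Monte Carlo: average chance a single future N(0,1) observation
    exceeds the upper tolerance limit xbar + k*s computed from n past
    observations of the same N(0,1) environment."""
    rng = random.Random(seed)
    k = k_factor(n, coverage, confidence)
    nd = NormalDist()
    acc = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        m = sum(xs) / n
        s = (sum((x - m) ** 2 for x in xs) / (n - 1)) ** 0.5
        acc += 1.0 - nd.cdf(m + k * s)  # exceedance prob for this realized limit
    return acc / trials
```

The simulation makes the note's point tangible: the exceedance probability of a realized tolerance limit is a random quantity, and its average is not simply 1 minus the coverage.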
NASA Technical Reports Server (NTRS)
Falls, L. W.
1975-01-01
Vandenberg Air Force Base (AFB), California, wind component statistics are presented to be used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as a statistical model to represent component winds at Vandenberg AFB. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. The results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Vandenberg AFB.
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
This document replaces Cape Kennedy empirical wind component statistics which are presently being used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as an adequate statistical model to represent component winds at Cape Kennedy. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. Results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Cape Kennedy, Florida.
Probability density function modeling for sub-powered interconnects
NASA Astrophysics Data System (ADS)
Pater, Flavius; Amaricǎi, Alexandru
2016-06-01
This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit for the switching delay versus probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.
Generalized emptiness formation probability in the six-vertex model
NASA Astrophysics Data System (ADS)
Colomo, F.; Pronko, A. G.; Sportiello, A.
2016-10-01
In the six-vertex model with domain wall boundary conditions, the emptiness formation probability is the probability that a rectangular region in the top left corner of the lattice is frozen. We generalize this notion to the case where the frozen region has the shape of a generic Young diagram. We derive here a multiple integral representation for this correlation function.
Gendist: An R Package for Generated Probability Distribution Models
Abu Bakar, Shaiful Anuar; Nadarajah, Saralees; ABSL Kamarul Adzhar, Zahrul Azmir; Mohamed, Ibrahim
2016-01-01
In this paper, we introduce the R package gendist that computes the probability density function, the cumulative distribution function, the quantile function and generates random values for several generated probability distribution models including the mixture model, the composite model, the folded model, the skewed symmetric model and the arc tan model. These models are extensively used in the literature and the R functions provided here are flexible enough to accommodate various univariate distributions found in other R packages. We also show its applications in graphing, estimation, simulation and risk measurements. PMID:27272043
Normalization of Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.
2011-01-01
Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
The role of probability of reinforcement in models of choice.
Williams, B A
1994-10-01
A general account of choice behavior in animals, the cumulative effects model, has been proposed by Davis, Staddon, Machado, and Palmer (1993). Its basic assumptions are that choice occurs in an all-or-none fashion for the response alternative with the highest probability of reinforcement and that the probability of reinforcement for each response alternative is calculated from the entire history of training (total number of reinforced responses/total number of reinforced and nonreinforced responses). The model's reliance on probability of reinforcement as the fundamental variable controlling choice behavior subjects the cumulative effects model to the same criticisms as have been directed toward other related models of choice, notably melioration theory. Several different data sets show that the relative value of a response alternative is not predicted by the obtained probability of reinforcement associated with that alternative. Alternative approaches to choice theory are considered.
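The cumulative effects model as summarized above is easy to simulate: each alternative's value is its lifetime reinforcement probability (reinforced responses over total responses), and choice goes all-or-none to the current maximum. A minimal sketch (the optimistic 1/1 initialization is an implementation convenience, not part of the published model):

```python
import random

def cumulative_effects_choice(reward_probs, n_trials=2000, seed=0):
    """Simulate all-or-none choice driven by lifetime reinforcement
    probability, computed over the entire history of each alternative.
    Returns how often each alternative was chosen."""
    rng = random.Random(seed)
    k = len(reward_probs)
    reinforced = [1] * k  # optimistic 1/1 start avoids a 0/0 estimate
    total = [1] * k
    choices = [0] * k
    for _ in range(n_trials):
        est = [r / t for r, t in zip(reinforced, total)]
        best = max(est)
        choice = rng.choice([i for i, e in enumerate(est) if e == best])
        choices[choice] += 1
        total[choice] += 1
        if rng.random() < reward_probs[choice]:
            reinforced[choice] += 1
    return choices
```

Running this with unequal reinforcement probabilities shows the fixation on the richer alternative that the model predicts, which is exactly the property the review argues is contradicted by several data sets.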
Review of Literature for Model Assisted Probability of Detection
Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.; Anderson, Michael T.
2014-09-30
This is a draft technical letter report for the NRC client documenting a literature review of model-assisted probability of detection (MAPOD) for potential application to nuclear power plant components for improvement of field NDE performance estimations.
Aggregate and Individual Replication Probability within an Explicit Model of the Research Process
ERIC Educational Resources Information Center
Miller, Jeff; Schwarz, Wolf
2011-01-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
Probability bounds analysis for nonlinear population ecology models.
Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A
2015-09-01
Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds though nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals.
Exact integration of height probabilities in the Abelian Sandpile model
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Sportiello, Andrea
2012-09-01
The height probabilities for the recurrent configurations in the Abelian Sandpile model on the square lattice have analytic expressions, in terms of multidimensional quadratures. At first, these quantities were evaluated numerically with high accuracy and conjectured to be certain cubic rational-coefficient polynomials in π⁻¹. Later their values were determined by different methods. We revert to the direct derivation of these probabilities, by computing analytically the corresponding integrals. Once again, we confirm the predictions on the probabilities, and thus, as a corollary, the conjecture on the average height, <ρ> = 17/8.
Semenenko, Vladimir A.; Tarima, Sergey S.; Devisetty, Kiran; Pelizzari, Charles A.; Liauw, Stanley L.
2013-03-15
Purpose: To perform validation of risk predictions for late rectal toxicity (LRT) in prostate cancer obtained using a new approach to synthesize published normal tissue complication data. Methods and Materials: A published-study survey was performed to identify dose-response relationships for LRT derived from nonoverlapping patient populations. To avoid mixing models based on different symptoms, the emphasis was placed on rectal bleeding. The selected models were used to compute risk estimates of grade 2+ and grade 3+ LRT for an independent validation cohort composed of 269 prostate cancer patients with known toxicity outcomes. Risk estimates from single studies were combined to produce consolidated risk estimates. Agreement between the actuarial toxicity incidence 3 years after radiation therapy completion and the single-study or consolidated risk estimates was evaluated using the concordance correlation coefficient. Goodness of fit for the consolidated risk estimates was assessed using the Hosmer-Lemeshow test. Results: A total of 16 studies of grade 2+ and 5 studies of grade 3+ LRT met the inclusion criteria. The consolidated risk estimates of grade 2+ and 3+ LRT were constructed using 3 studies each. For grade 2+ LRT, the concordance correlation coefficient for the consolidated risk estimates was 0.537 compared with 0.431 for the best-fit single study. For grade 3+ LRT, the concordance correlation coefficient for the consolidated risk estimates was 0.477 compared with 0.448 for the best-fit single study. No evidence was found for a lack of fit for the consolidated risk estimates using the Hosmer-Lemeshow test (P=.531 and P=.397 for grade 2+ and 3+ LRT, respectively). Conclusions: In a large cohort of prostate cancer patients, selected sets of consolidated risk estimates were found to be more accurate predictors of LRT than risk estimates derived from any single study.
Normal peer models and autistic children's learning.
Egel, A L; Richman, G S; Koegel, R L
1981-01-01
Present research and legislation regarding mainstreaming autistic children into normal classrooms have raised the importance of studying whether autistic children can benefit from observing normal peer models. The present investigation systematically assessed whether autistic children's learning of discrimination tasks could be improved if they observed normal children perform the tasks correctly. In the context of a multiple baseline design, four autistic children worked on five discrimination tasks that their teachers reported were posing difficulty. Throughout the baseline condition the children evidenced very low levels of correct responding on all five tasks. In the subsequent treatment condition, when normal peers modeled correct responses, the autistic children's correct responding increased dramatically. In each case, the peer modeling procedure produced rapid acquisition, which was maintained after the peer models were removed. These results are discussed in relation to issues concerning observational learning and in relation to the implications for mainstreaming autistic children into normal classrooms. PMID:7216930
Establishment probability in fluctuating environments: a branching process model.
Haccou, P; Iwasa, Y
1996-12-01
We study the establishment probability of invaders in stochastically fluctuating environments and the related issue of extinction probability of small populations in such environments, by means of an inhomogeneous branching process model. In the model it is assumed that individuals reproduce asexually during discrete reproduction periods. Within each period, individuals have (independent) Poisson distributed numbers of offspring. The expected numbers of offspring per individual are independently identically distributed over the periods. It is shown that the establishment probability of an invader varies over the reproduction periods according to a stable distribution. We give a method for simulating the establishment probabilities and approximations for the expected establishment probability. Furthermore, we show that, due to the stochasticity of the establishment success over different periods, the expected success of sequential invasions is larger than that of simultaneous invasions and we study the effects of environmental fluctuations on the extinction probability of small populations and metapopulations. The results can easily be generalized to other offspring distributions than the Poisson. PMID:9000490
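A rough Monte Carlo sketch of the model class described above (not the authors' method of computation; the offspring-mean distribution, the population cap, and all parameter values are illustrative assumptions) can estimate an invader's establishment probability under i.i.d. period environments and Poisson reproduction:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method for a Poisson(lam) draw; fine for moderate means.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def establishment_prob(mean_dist, n_periods=40, cap=200,
                       n_trials=5000, seed=1):
    """Monte Carlo estimate of an invader's establishment probability.
    Each period's expected offspring number is an i.i.d. draw from
    mean_dist; the total offspring of pop independent Poisson parents
    is Poisson(pop * mean).  A lineage reaching `cap` individuals is
    treated as established (assumed threshold, for speed)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        pop = 1
        for _ in range(n_periods):
            pop = poisson(rng, pop * mean_dist(rng))
            if pop == 0 or pop >= cap:
                break
        if pop > 0:
            survived += 1
    return survived / n_trials

# Example environment: per-period means uniform on [0.6, 1.8],
# supercritical on average, so establishment is possible but not certain.
p_est = establishment_prob(lambda rng: rng.uniform(0.6, 1.8))
```

The per-period redraw of the mean is what distinguishes this branching process in a random environment from the classical fixed-environment case.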
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. PMID:25363706
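The "primitive average" the theory attributes to system 1 is easy to sketch. The toy function below is a hypothetical illustration, not the authors' implementation, but it shows how averaging component probabilities for a conjunction reproduces the predicted violation of the joint probability distribution:

```python
def primitive_average(*ps):
    """System-1-style estimate for a compound assertion: the primitive
    average of the component probabilities, as the dual-process theory
    proposes for conjunctions and inclusive disjunctions."""
    return sum(ps) / len(ps)

# A conjunction estimated this way can exceed min(P(A), P(B)), which a
# coherent joint distribution forbids:
p_a, p_b = 0.9, 0.3
p_and = primitive_average(p_a, p_b)   # 0.6, greater than P(B) = 0.3
```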
Symmetric extensions of normal discrete velocity models
NASA Astrophysics Data System (ADS)
Bobylev, A. V.; Vinerean, M. C.
2012-11-01
In this paper we discuss a general problem related to spurious conservation laws for discrete velocity models (DVMs) of the classical (elastic) Boltzmann equation. Models with spurious conservation laws appeared already at an early stage of the development of discrete kinetic theory. The well-known theorem on uniqueness of collision invariants for the continuous velocity space very often does not hold for a set of discrete velocities. In our previous works we considered the general problem of the construction of normal DVMs, found a general algorithm for the construction of all such models, and presented a complete classification of normal DVMs with a small number n of velocities (n < 11). Even though we have a general method to classify all normal discrete kinetic models (and in particular DVMs), the existing method is relatively slow and the number of possible cases to check increases rapidly with n. We remarked that many of our normal DVMs appear to be axially symmetric. In this paper we consider a connection between symmetric transformations and normal DVMs. We first develop a new inductive method that, starting with a given normal DVM, leads by symmetric extensions to a new normal DVM. This method can rapidly produce many new normal DVMs with larger numbers of velocities, showing that the class of normal DVMs contains a large subclass of symmetric models. We finally apply the method to several normal DVMs and construct new models that are not only normal, but also symmetric relative to more and more axes. We hope that such symmetric velocity sets can be used for DSMC methods of solving the Boltzmann equation.
Simulation modeling of the probability of magmatic disruption of the potential Yucca Mountain Site
Crowe, B.M.; Perry, F.V.; Valentine, G.A.; Wallmann, P.C.; Kossik, R.
1993-11-01
The first phase of risk simulation modeling was completed for the probability of magmatic disruption of a potential repository at Yucca Mountain. E1, the recurrence rate of volcanic events, is modeled using bounds from active basaltic volcanic fields and midpoint estimates of E1. The cumulative probability curves for E1 are generated by simulation modeling using a form of a triangular distribution. The 50% estimates are about 5 to 8 × 10^-6 events yr^-1. The simulation modeling shows that the cumulative probability distribution for E1 is more sensitive to the probability bounds than to the midpoint estimates. E2 (the disruption probability) is modeled through risk simulation using a normal distribution and midpoint estimates from multiple alternative stochastic and structural models. The 50% estimate of E2 is 4.3 × 10^-3. The probability of magmatic disruption of the potential Yucca Mountain site is 2.5 × 10^-8 yr^-1. This median estimate decreases to 9.6 × 10^-9 yr^-1 if E1 is modified for the structural models used to define E2. The Repository Integration Program was tested to compare releases of a simulated repository (without volcanic events) to releases from time histories which may include volcanic disruptive events. Results show that the performance modeling can be used for sensitivity studies of volcanic effects.
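The reported site probability is simply the product of the two median estimates. A quick consistency check (the E1 value within the stated band is an assumed illustration, not a re-derivation of the simulation):

```python
# Disruption probability = recurrence rate (E1) x conditional
# disruption probability per event (E2).
E1 = 5.8e-6   # events / yr, within the reported 5-8 x 10^-6 band
E2 = 4.3e-3   # median disruption probability per event
P = E1 * E2   # ~2.5e-8 / yr, matching the reported median estimate
```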
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG-compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
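The feature-extraction step can be sketched as follows. This is an illustrative simplification, not the authors' exact pipeline: one horizontal direction only, operating on a plain 2-D array rather than JPEG coefficient arrays, with an assumed threshold T:

```python
def transition_matrix(img, T=4):
    """Form the horizontal difference array of a 2-D integer array,
    threshold it to [-T, T], and estimate the first-order Markov
    transition probability matrix.  Its (2T+1)^2 entries can be used
    directly as classification features."""
    n = 2 * T + 1
    counts = [[0] * n for _ in range(n)]
    for row in img:
        diffs = [max(-T, min(T, b - a)) for a, b in zip(row, row[1:])]
        for u, v in zip(diffs, diffs[1:]):   # successive difference pairs
            counts[u + T][v + T] += 1
    probs = []
    for r in counts:
        s = sum(r)
        probs.append([c / s if s else 0.0 for c in r])
    return probs

M = transition_matrix([[0, 1, 3, 3], [5, 9, 9, 9]], T=2)
```

The full method repeats this for four directions and for both Y and Cb difference arrays, concatenating all matrix entries into one feature vector for the SVM.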
Defining Predictive Probability Functions for Species Sampling Models
Lee, Jaeyong; Quintana, Fernando A.; Müller, Peter; Trippa, Lorenzo
2013-01-01
We review the class of species sampling models (SSM). In particular, we investigate the relation between the exchangeable partition probability function (EPPF) and the predictive probability function (PPF). It is straightforward to define a PPF from an EPPF, but the converse is not necessarily true. In this paper we introduce the notion of putative PPFs and show novel conditions for a putative PPF to define an EPPF. We show that all possible PPFs in a certain class have to define (unnormalized) probabilities for cluster membership that are linear in cluster size. We give a new necessary and sufficient condition for arbitrary putative PPFs to define an EPPF. Finally, we show posterior inference for a large class of SSMs with a PPF that is not linear in cluster size and discuss a numerical method to derive its PPF. PMID:24368874
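The canonical example of a PPF whose unnormalized cluster-membership probabilities are linear in cluster size is the Dirichlet process, i.e., the Chinese restaurant process. A minimal sketch (standard construction, not code from the paper):

```python
def crp_ppf(cluster_sizes, alpha=1.0):
    """Predictive probability function of the Dirichlet process
    (Chinese restaurant process): the unnormalized weight of joining
    an existing cluster is its size (linear in cluster size), and a
    new cluster gets weight alpha."""
    n = sum(cluster_sizes)
    probs = [s / (n + alpha) for s in cluster_sizes]
    probs.append(alpha / (n + alpha))   # probability of a new cluster
    return probs

# Two clusters of sizes 3 and 1, alpha = 1:
p = crp_ppf([3, 1], alpha=1.0)   # [0.6, 0.2, 0.2]
```

The paper's contribution concerns exactly which PPFs outside this linear class still induce a valid exchangeable partition probability function.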
Modeling highway travel time distribution with conditional probability models
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling; Han, Lee
2014-01-01
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data to ATRI on a regular basis. Each data transmission contains a truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions of route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
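The convolution building block is straightforward for discrete travel-time distributions. The sketch below assumes independence between links for simplicity; the paper's actual method replaces the second marginal with a link-to-link conditional distribution to capture speed correlation (distributions here are invented examples):

```python
from collections import defaultdict

def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent discrete link travel
    times, each given as {minutes: probability}."""
    out = defaultdict(float)
    for ta, pa in dist_a.items():
        for tb, pb in dist_b.items():
            out[ta + tb] += pa * pb
    return dict(out)

# Two-link route: P(route time = 30 min) = 0.7 * 0.5 = 0.35
route = convolve({10: 0.7, 15: 0.3}, {20: 0.5, 30: 0.5})
```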
Probability Theory: The Logic of Science
NASA Astrophysics Data System (ADS)
Jaynes, E. T.; Bretthorst, G. Larry
2003-04-01
Foreword; Preface; Part I. Principles and Elementary Applications: 1. Plausible reasoning; 2. The quantitative rules; 3. Elementary sampling theory; 4. Elementary hypothesis testing; 5. Queer uses for probability theory; 6. Elementary parameter estimation; 7. The central, Gaussian or normal distribution; 8. Sufficiency, ancillarity, and all that; 9. Repetitive experiments, probability and frequency; 10. Physics of 'random experiments'; Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle; 12. Ignorance priors and transformation groups; 13. Decision theory: historical background; 14. Simple applications of decision theory; 15. Paradoxes of probability theory; 16. Orthodox methods: historical background; 17. Principles and pathology of orthodox statistics; 18. The Ap distribution and rule of succession; 19. Physical measurements; 20. Model comparison; 21. Outliers and robustness; 22. Introduction to communication theory; References; Appendix A. Other approaches to probability theory; Appendix B. Mathematical formalities and style; Appendix C. Convolutions and cumulants.
A propagation model of computer virus with nonlinear vaccination probability
NASA Astrophysics Data System (ADS)
Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi
2014-01-01
This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
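A toy sketch of a compartment model with a nonlinear (saturating) vaccination probability is below. The functional form p(I) = p_max · I / (k + I) and all parameter values are illustrative assumptions; the paper's actual model and its reproduction-number analysis are more detailed:

```python
def simulate(beta=0.3, gamma=0.1, p_max=0.2, k=5.0,
             S0=0.99, I0=0.01, dt=0.01, steps=20000):
    """Euler integration of a toy susceptible-infected model in which
    the vaccination probability saturates nonlinearly with the
    infection level: p(I) = p_max * I / (k + I).  Vaccinated nodes
    leave the susceptible pool."""
    S, I = S0, I0
    for _ in range(steps):
        p = p_max * I / (k + I)               # nonlinear vaccination term
        dS = -beta * S * I + gamma * I - p * S
        dI = beta * S * I - gamma * I
        S += dS * dt
        I += dI * dt
    return S, I

S_end, I_end = simulate()
```

Because vaccination only removes susceptibles, S + I is non-increasing, so the fractions stay in [0, 1]; varying p_max shows how stronger vaccination response suppresses the viral equilibrium.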
An Integrated Modeling Framework for Probable Maximum Precipitation and Flood
NASA Astrophysics Data System (ADS)
Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2015-12-01
With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the climate change effects on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at initial and lateral boundaries. A high resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated by U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF in the projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects associated with PMP and PMF.
Quantum Probability -- A New Direction for Modeling in Cognitive Science
NASA Astrophysics Data System (ADS)
Roy, Sisir
2014-07-01
Human cognition, and its appropriate modeling, remains a puzzling research issue. It depends on how the brain behaves at a particular instant and identifies and responds to a signal among the myriad noises present in the surroundings (external noise) as well as in the neurons themselves (internal noise). Thus it is not surprising to assume that this functionality involves various uncertainties, possibly a mixture of aleatory and epistemic uncertainties. It is also possible that a complicated pathway consisting of both types of uncertainties in continuum plays a major role in human cognition. For more than 200 years mathematicians and philosophers have used probability theory to describe human cognition. Recently, in several experiments with human subjects, violation of traditional probability theory has been clearly revealed in many cases. The literature clearly suggests that classical probability theory fails to model human cognition beyond a certain limit. While the Bayesian approach may seem a promising candidate for this problem, the complete success story of Bayesian methodology is yet to be written. The major problem seems to be the presence of epistemic uncertainty and its effect on cognition at any given time. Moreover, the stochasticity in the model arises from the unknown path or trajectory (a definite state of mind at each time point) a person is following. To this end, a generalized version of probability theory borrowing ideas from quantum mechanics may be a plausible approach. A superposition state in quantum theory permits a person to be in an indefinite state at each point of time. Such an indefinite state allows all the states to have the potential to be expressed at each moment. Thus a superposition state appears better able to represent the uncertainty, ambiguity, or conflict experienced by a person at any moment, suggesting that mental states follow quantum mechanics during perception and
Modeling spatial variation in avian survival and residency probabilities
Saracco, James F.; Royle, J. Andrew; DeSante, David F.; Gardner, Beth
2010-01-01
The importance of understanding spatial variation in processes driving animal population dynamics is widely recognized. Yet little attention has been paid to spatial modeling of vital rates. Here we describe a hierarchical spatial autoregressive model to provide spatially explicit year-specific estimates of apparent survival (phi) and residency (pi) probabilities from capture-recapture data. We apply the model to data collected on a declining bird species, Wood Thrush (Hylocichla mustelina), as part of a broad-scale bird-banding network, the Monitoring Avian Productivity and Survivorship (MAPS) program. The Wood Thrush analysis showed variability in both phi and pi among years and across space. Spatial heterogeneity in residency probability was particularly striking, suggesting the importance of understanding the role of transients in local populations. We found broad-scale spatial patterning in Wood Thrush phi and pi that lend insight into population trends and can direct conservation and research. The spatial model developed here represents a significant advance over approaches to investigating spatial pattern in vital rates that aggregate data at coarse spatial scales and do not explicitly incorporate spatial information in the model. Further development and application of hierarchical capture-recapture models offers the opportunity to more fully investigate spatiotemporal variation in the processes that drive population changes.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
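Detection curves of the kind fitted here are logistic regressions relating sampling effort and target density to the probability of detection. A minimal sketch with invented coefficients (the study's fitted values are not reproduced here):

```python
import math

def detection_prob(intercept, b_effort, b_density, effort, density):
    """Logistic detection curve: probability of detecting a target as
    a function of sampling effort and target density.  All
    coefficients below are hypothetical placeholders."""
    z = intercept + b_effort * effort + b_density * density
    return 1.0 / (1.0 + math.exp(-z))

# More search effort at the same density raises detection probability,
# and 1 - detection_prob is the corresponding false-negative rate:
p_low  = detection_prob(-2.0, 0.05, 0.8, effort=10, density=1.0)
p_high = detection_prob(-2.0, 0.05, 0.8, effort=60, density=1.0)
```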
Probability of a disease outbreak in stochastic multipatch epidemic models.
Lahodny, Glenn E; Allen, Linda J S
2013-07-01
Environmental heterogeneity, spatial connectivity, and movement of individuals play important roles in the spread of infectious diseases. To account for environmental differences that impact disease transmission, the spatial region is divided into patches according to risk of infection. A system of ordinary differential equations modeling spatial spread of disease among multiple patches is used to formulate two new stochastic models, a continuous-time Markov chain, and a system of stochastic differential equations. An estimate for the probability of disease extinction is computed by approximating the Markov chain model with a multitype branching process. Numerical examples illustrate some differences between the stochastic models and the deterministic model, important for prevention of disease outbreaks that depend on the location of infectious individuals, the risk of infection, and the movement of individuals. PMID:23666483
Bayesian failure probability model sensitivity study. Final report
Not Available
1986-05-30
The Office of the Manager, National Communications System (OMNCS) has developed a system-level approach for estimating the effects of High-Altitude Electromagnetic Pulse (HEMP) on the connectivity of telecommunications networks. This approach incorporates a Bayesian statistical model which estimates the HEMP-induced failure probabilities of telecommunications switches and transmission facilities. The purpose of this analysis is to address the sensitivity of the Bayesian model. This is done by systematically varying two model input parameters: the number of observations and the equipment failure rates. Throughout the study, a non-informative prior distribution is used. The sensitivity of the Bayesian model to the non-informative prior distribution is investigated from a theoretical mathematical perspective.
Improving Conceptual Models Using AEM Data and Probability Distributions
NASA Astrophysics Data System (ADS)
Davis, A. C.; Munday, T. J.; Christensen, N. B.
2012-12-01
With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion satisfies the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, thus the minimum model has all layer parameters distributed about their mean parameter value with well-defined variance. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" We can calculate this through use of the erf or erfc functions. The covariance values of the inversion become marginalised in the integration: only the
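For a single layer with a Gaussian posterior, the probability that the parameter lies below a cut-off follows directly from erfc. A minimal sketch (the numbers are illustrative, and the full-model question additionally requires the posterior covariance between layers, which this ignores):

```python
import math

def prob_below(mu, sigma, cutoff):
    """P(X <= cutoff) for a layer parameter with posterior mean mu and
    standard deviation sigma, under a Gaussian assumption:
    P = 0.5 * erfc((mu - cutoff) / (sigma * sqrt(2)))."""
    return 0.5 * math.erfc((mu - cutoff) / (sigma * math.sqrt(2.0)))

# Example: posterior mean 30 ohm-m, sd 10 ohm-m, cut-off 30 -> P = 0.5
p = prob_below(30.0, 10.0, 30.0)
```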
Defining prior probabilities for hydrologic model structures in UK catchments
NASA Astrophysics Data System (ADS)
Clements, Michiel; Pianosi, Francesca; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Booij, Martijn
2014-05-01
The selection of a model structure is an essential part of the hydrological modelling process. Recently, flexible modelling frameworks have been proposed in which hybrid model structures can be obtained by mixing together components from a suite of existing hydrological models. When sufficient and reliable data are available, this framework can be successfully utilised to identify the most appropriate structure, and associated optimal parameters, for a given catchment by maximizing the different models' ability to reproduce the desired range of flow behaviour. In this study, we use a flexible modelling framework to address a rather different question: can the most appropriate model structure be inferred a priori (i.e., without using flow observations) from catchment characteristics like topography, geology, land use, and climate? Furthermore, and more generally, can we define prior probabilities of different model structures as a function of catchment characteristics? To address these questions we propose a two-step methodology and demonstrate it by application to a national database of meteo-hydrological data and catchment characteristics for 89 catchments across the UK. In the first step, each catchment is associated with its most appropriate model structure. We consider six possible structures obtained by combining two soil moisture accounting components widely used in the UK (Penman and PDM) and three different flow routing modules (linear, parallel, leaky). We measure the suitability of a model structure by the probability of finding behavioural parameterizations for that model structure when applied to the catchment under study. In the second step, we use regression analysis to establish a relation between selected model structures and the catchment characteristics. Specifically, we apply Classification And Regression Trees (CART) and show that three catchment characteristics, the Base Flow Index, the Runoff Coefficient and the mean Drainage Path Slope, can be used
Predictions of Geospace Drivers By the Probability Distribution Function Model
NASA Astrophysics Data System (ADS)
Bussy-Virat, C.; Ridley, A. J.
2014-12-01
Geospace drivers like the solar wind speed, interplanetary magnetic field (IMF), and solar irradiance have a strong influence on the density of the thermosphere and the near-Earth space environment. This has important consequences for the drag on satellites in low orbit and therefore for their position. One of the basic problems with space weather prediction is that these drivers can only be measured about one hour before they affect the environment. In order to allow for adequate planning by members of the commercial, military, or civilian communities, reliable long-term space weather forecasts are needed. This study presents a model for predicting geospace drivers up to five days in advance. The model uses the same general technique to predict the solar wind speed, the three components of the IMF, and the solar irradiance F10.7. For instance, it uses probability distribution functions (PDFs) to relate the current solar wind speed and slope to the future solar wind speed, as well as the current solar wind speed to the solar wind speed one solar rotation in the future. The PDF Model has been compared to other models for predictions of the speed. It has been found to be better than using the current solar wind speed (i.e., persistence), and better than the Wang-Sheeley-Arge Model for prediction horizons of 24 hours. Once the drivers are predicted and the uncertainty on the drivers is specified, the density in the thermosphere can be derived using various models of the thermosphere, such as the Global Ionosphere Thermosphere Model. In addition, uncertainties on the densities can be estimated, based on ensembles of simulations. From the density and uncertainty predictions, satellite positions, as well as the uncertainty in those positions, can be estimated. These can assist operators in determining the probability of collisions between objects in low Earth orbit.
Modeling evolution using the probability of fixation: history and implications.
McCandlish, David M; Stoltzfus, Arlin
2014-09-01
Many models of evolution calculate the rate of evolution by multiplying the rate at which new mutations originate within a population by a probability of fixation. Here we review the historical origins, contemporary applications, and evolutionary implications of these "origin-fixation" models, which are widely used in evolutionary genetics, molecular evolution, and phylogenetics. Origin-fixation models were first introduced in 1969, in association with an emerging view of "molecular" evolution. Early origin-fixation models were used to calculate an instantaneous rate of evolution across a large number of independently evolving loci; in the 1980s and 1990s, a second wave of origin-fixation models emerged to address a sequence of fixation events at a single locus. Although origin-fixation models have been applied to a broad array of problems in contemporary evolutionary research, their rise in popularity has not been accompanied by an increased appreciation of their restrictive assumptions or their distinctive implications. We argue that origin-fixation models constitute a coherent theory of mutation-limited evolution that contrasts sharply with theories of evolution that rely on the presence of standing genetic variation. A major unsolved question in evolutionary biology is the degree to which these models provide an accurate approximation of evolution in natural populations.
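The origin-fixation rate described above is simply (new mutations per generation) x (fixation probability). A minimal sketch using Kimura's classic diffusion result for a haploid population; the population size and mutation rate below are illustrative:

```python
import math

def p_fix(s, N):
    """Kimura's fixation probability for a new mutation (initial
    frequency 1/N) with selection coefficient s, haploid size N."""
    if abs(s) < 1e-12:
        return 1.0 / N               # neutral limit
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-2 * N * s))

def substitution_rate(mu, s, N):
    """Origin-fixation rate: (new mutations per generation) * P(fix)."""
    return N * mu * p_fix(s, N)

N, mu = 10_000, 1e-8
print(substitution_rate(mu, 0.0, N))   # neutral: K ~ mu, independent of N
print(substitution_rate(mu, 0.01, N))  # beneficial: K ~ 2*N*mu*s
```

The neutral case recovers the classic result that the substitution rate equals the mutation rate, which is one of the founding observations of the "molecular" view the review discusses.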
Decision from Models: Generalizing Probability Information to Novel Tasks
Zhang, Hang; Paily, Jacienta T.; Maloney, Laurence T.
2014-01-01
We investigate a new type of decision under risk where—to succeed—participants must generalize their experience in one set of tasks to a novel set of tasks. We asked participants to trade distance for reward in a virtual minefield where each successive step incurred the same fixed probability of failure (referred to as hazard). With constant hazard, the probability of success (the survival function) decreases exponentially with path length. On each trial, participants chose between a shorter path with smaller reward and a longer (more dangerous) path with larger reward. They received feedback in 160 training trials: encountering a mine along their chosen path resulted in zero reward and successful completion of the path led to the reward associated with the path chosen. They then completed 600 no-feedback test trials with novel combinations of path length and rewards. To maximize expected gain, participants had to learn the correct exponential model in training and generalize it to the test conditions. We compared how participants discounted reward with increasing path length to the predictions of nine choice models including the correct exponential model. The choices of a majority of the participants were best accounted for by a model of the correct exponential form although with marked overestimation of the hazard rate. The decision-from-models paradigm differs from experience-based decision paradigms such as decision-from-sampling in the importance assigned to generalizing experience-based information to novel tasks. The task itself is representative of everyday tasks involving repeated decisions in stochastically invariant environments. PMID:25621287
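The constant-hazard survival function and the resulting expected-gain comparison can be worked through directly; the hazard, path lengths, and rewards below are hypothetical, not the experiment's values:

```python
def p_success(length, hazard):
    """Constant per-step hazard gives an exponential survival function."""
    return (1.0 - hazard) ** length

def expected_gain(length, reward, hazard):
    return reward * p_success(length, hazard)

HAZARD = 0.02                           # hypothetical 2% failure per step
short = expected_gain(10, 50, HAZARD)   # shorter path, smaller reward
long_ = expected_gain(40, 120, HAZARD)  # longer (riskier) path, larger reward
print(f"short-path expected gain: {short:.1f}")
print(f"long-path expected gain:  {long_:.1f}")
# An expected-gain maximizer picks whichever value is larger.
```

A participant who overestimates the hazard rate, as most did, would discount the long path more steeply and choose the short one even when the long path has the higher true expected gain.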
Recent Advances in Model-Assisted Probability of Detection
NASA Technical Reports Server (NTRS)
Thompson, R. Bruce; Brasche, Lisa J.; Lindgren, Eric; Swindell, Paul; Winfree, William P.
2009-01-01
The increased role played by probability of detection (POD) in structural integrity programs, combined with the significant time and cost associated with the purely empirical determination of POD, provides motivation for alternate means to estimate this important metric of NDE techniques. One approach to make the process of POD estimation more efficient is to complement limited empirical experiments with information from physics-based models of the inspection process or controlled laboratory experiments. The Model-Assisted Probability of Detection (MAPOD) Working Group was formed by the Air Force Research Laboratory, the FAA Technical Center, and NASA to explore these possibilities. Since the 2004 inception of the MAPOD Working Group, 11 meetings have been held in conjunction with major NDE conferences. This paper will review the accomplishments of this group, which includes over 90 members from around the world. Included will be a discussion of strategies developed to combine physics-based and empirical understanding, draft protocols that have been developed to guide application of the strategies, and demonstrations that have been or are being carried out in a number of countries. The talk will conclude with a discussion of future directions, which will include documentation of benefits via case studies, development of formal protocols for engineering practice, as well as a number of specific technical issues.
A probability cellular automaton model for hepatitis B viral infections.
Xiao, Xuan; Shao, Shi-Huang; Chou, Kuo-Chen
2006-04-01
The existing models of hepatitis B virus (HBV) infection dynamics are based on the assumption that the populations of viruses and cells are uniformly mixed. However, the real virus infection system is actually not homogeneous, and some spatial factors might play a nontrivial role in governing the development of HBV infection and its outcome. For instance, the localized populations of dead cells might adversely affect the spread of infection. To consider this kind of inhomogeneous feature, a simple two-dimensional (2D) probability cellular automaton model was introduced to study the dynamic process of HBV infection. The model took into account the existence of different types of HBV infectious and non-infectious particles. The simulation results thus obtained showed that the cellular automaton model could successfully account for some important features of the disease, such as its wide variety in manifestation and its age dependency. Meanwhile, the effects of the model's parameters on the dynamical process of the infection were also investigated. It is anticipated that the cellular automaton model may be extended to serve as a useful vehicle for studying, among many other complicated dynamic biological systems, various persistent infections with replicating parasites.
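A toy version of such a probability cellular automaton can make the spatial mechanism concrete. The states, neighbourhood, and per-step probabilities below are illustrative choices, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(1)
HEALTHY, INFECTED, DEAD = 0, 1, 2
p_infect, p_die = 0.3, 0.2            # hypothetical per-step probabilities

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = INFECTED               # seed one infected hepatocyte

def step(g):
    new = g.copy()
    infected = (g == INFECTED)
    # Count infected von Neumann neighbours of each cell.
    nbrs = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0)
            + np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
    # Healthy cells get infected with prob. 1-(1-p)^k for k neighbours.
    p = 1 - (1 - p_infect) ** nbrs
    new[(g == HEALTHY) & (rng.random(g.shape) < p)] = INFECTED
    # Infected cells die with probability p_die; dead cells block spread.
    new[infected & (rng.random(g.shape) < p_die)] = DEAD
    return new

for _ in range(30):
    grid = step(grid)
print("infected:", int((grid == INFECTED).sum()),
      "dead:", int((grid == DEAD).sum()))
```

The dead state persisting on the lattice is what lets localized dead-cell populations impede further spread, the inhomogeneous effect the abstract highlights.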
Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, F. E.; Malamud, B. D.
2012-04-01
During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network in a 'T'-shaped configuration was chosen: one road of 1 x 4000 cells (5 m x 20 km) joined by another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km2 region, the number of road blocks per iteration, NBL, ranges from 0 to 7. The average blockage area over the 500 iterations (mean ABL) is about 3000 m
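The Monte Carlo scheme can be sketched compactly. In the sketch below the heavy-tailed area distribution is a Lomax stand-in tuned to the quoted -2.4 power-law decay and 400 m2 minimum (not the paper's three-parameter inverse gamma), the road is a single straight line rather than a 'T', and landslides are idealized as circles:

```python
import numpy as np

rng = np.random.default_rng(2)

REGION = 20_000.0        # 20 km x 20 km study area (m)
ROAD_Y = 10_000.0        # the road: a horizontal line across the region
N_SLIDES = 400           # ~1 landslide per km^2 per triggering event

def one_event():
    # Heavy-tailed landslide areas (m^2): a Lomax stand-in whose pdf
    # decays like A**-2.4, with a minimum area of 400 m^2.
    areas = 400.0 * rng.pareto(1.4, N_SLIDES) + 400.0
    radii = np.sqrt(areas / np.pi)           # landslides as circles
    y = rng.uniform(0.0, REGION, N_SLIDES)   # random drop positions
    blocked = np.abs(y - ROAD_Y) < radii     # circle crosses the road?
    return int(blocked.sum())

blocks = [one_event() for _ in range(500)]   # Monte Carlo iterations
print(f"blockages per event: min {min(blocks)}, max {max(blocks)}, "
      f"mean {sum(blocks) / len(blocks):.2f}")
```

Even this crude sketch reproduces the qualitative result that most triggering events block the road only a handful of times, because the typical landslide is small relative to the region.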
A count probability cookbook: Spurious effects and the scaling model
NASA Technical Reports Server (NTRS)
Colombi, S.; Bouchet, F. R.; Schaeffer, R.
1995-01-01
We study the errors brought by finite volume effects and dilution effects on the practical determination of the count probability distribution function P_N(n,l), which is the probability of having N objects in a cell of volume l^3 for a set of average number density n. Dilution effects are particularly relevant to the so-called sparse sampling strategy. This work is mainly done in the framework of the Balian & Schaeffer scaling model, which assumes that the Q-body correlation functions obey the scaling relation Xi_Q(lambda r_1, ..., lambda r_Q) = lambda^(-(Q-1)gamma) Xi_Q(r_1, ..., r_Q). We use three synthetic samples as references to perform our analysis: a fractal generated by a Rayleigh-Levy random walk with approximately 3 x 10^4 objects, a sample dominated by a spherical power-law cluster with approximately 3 x 10^4 objects, and a cold dark matter (CDM) universe involving approximately 3 x 10^5 matter particles.
Modeling pore corrosion in normally open gold-plated copper connectors.
Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert
2008-09-01
The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
Biomechanical modelling of normal pressure hydrocephalus.
Dutta-Roy, Tonmoy; Wittek, Adam; Miller, Karol
2008-07-19
This study investigates the mechanics of normal pressure hydrocephalus (NPH) growth using a computational approach. We created a generic 3-D brain mesh of a healthy human brain and modelled the brain parenchyma as single phase and biphasic continuum. In our model, hyperelastic constitutive law and finite deformation theory described deformations within the brain parenchyma. We used a value of 155.77 Pa for the shear modulus (mu) of the brain parenchyma. Additionally, in our model, contact boundary definitions constrained the brain outer surface inside the skull. We used transmantle pressure difference to load the model. Fully nonlinear, implicit finite element procedures in the time domain were used to obtain the deformations of the ventricles and the brain. To the best of our knowledge, this was the first 3-D, fully nonlinear model investigating NPH growth mechanics. Clinicians generally accept that at most 1 mm of Hg transmantle pressure difference (133.416 Pa) is associated with the condition of NPH. Our computations showed that transmantle pressure difference of 1 mm of Hg (133.416 Pa) did not produce NPH for either single phase or biphasic model of the brain parenchyma. A minimum transmantle pressure difference of 1.764 mm of Hg (235.44 Pa) was required to produce the clinical condition of NPH. This suggested that the hypothesis of a purely mechanical basis for NPH growth needs to be revised. We also showed that under equal transmantle pressure difference load, there were no significant differences between the computed ventricular volumes for biphasic and incompressible/nearly incompressible single phase model of the brain parenchyma. As a result, there was no major advantage gained by using a biphasic model for the brain parenchyma. We propose that for modelling NPH, nearly incompressible single phase model of the brain parenchyma was adequate. Single phase treatment of the brain parenchyma simplified the mathematical description of the NPH model and resulted in
Low-probability flood risk modeling for New York City.
Aerts, Jeroen C J H; Lin, Ning; Botzen, Wouter; Emanuel, Kerry; de Moel, Hans
2013-05-01
The devastating impact by Hurricane Sandy (2012) again showed New York City (NYC) is one of the most vulnerable cities to coastal flooding around the globe. The low-lying areas in NYC can be flooded by nor'easter storms and North Atlantic hurricanes. The few studies that have estimated potential flood damage for NYC base their damage estimates on only a single, or a few, possible flood events. The objective of this study is to assess the full distribution of hurricane flood risk in NYC. This is done by calculating potential flood damage with a flood damage model that uses many possible storms and surge heights as input. These storms are representative for the low-probability/high-impact flood hazard faced by the city. Exceedance probability-loss curves are constructed under different assumptions about the severity of flood damage. The estimated flood damage to buildings for NYC is between US$59 and 129 million/year. The damage caused by a 1/100-year storm surge is within a range of US$2 bn-5 bn, while this is between US$5 bn and 11 bn for a 1/500-year storm surge. An analysis of flood risk in each of the five boroughs of NYC finds that Brooklyn and Queens are the most vulnerable to flooding. This study examines several uncertainties in the various steps of the risk analysis, which resulted in variations in flood damage estimations. These uncertainties include: the interpolation of flood depths; the use of different flood damage curves; and the influence of the spectra of characteristics of the simulated hurricanes.
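An exceedance probability-loss curve turns a catalogue of simulated events into an expected annual damage by integrating loss over probability. A minimal sketch with hypothetical loss/probability pairs (not the study's estimates):

```python
import numpy as np

# Hypothetical simulated storm losses (billion US$) and their annual
# exceedance probabilities, standing in for the study's surge catalogue.
losses = np.array([0.1, 0.4, 1.0, 3.5, 8.0])        # increasing loss
exceed_p = np.array([0.2, 0.1, 0.02, 0.01, 0.002])  # P(loss >= L) per year

# Expected annual damage is the area under the exceedance
# probability-loss curve (trapezoidal rule over probability).
p, L = exceed_p[::-1], losses[::-1]
ead = float(np.sum(np.diff(p) * (L[1:] + L[:-1]) / 2))
print(f"expected annual damage: ~{ead * 1000:.0f} million US$/yr")
```

The 1/100-year and 1/500-year figures quoted above are simply two points on such a curve (exceedance probabilities 0.01 and 0.002).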
Aerosol Behavior Log-Normal Distribution Model.
2001-10-22
HAARM3, an acronym for Heterogeneous Aerosol Agglomeration Revised Model 3, is the third program in the HAARM series developed to predict the time-dependent behavior of radioactive aerosols under postulated LMFBR accident conditions. HAARM3 was developed to include mechanisms of aerosol growth and removal which had not been accounted for in the earlier models. In addition, experimental measurements obtained on sodium oxide aerosols have been incorporated in the code. As in HAARM2, containment gas temperature, pressure, and temperature gradients normal to interior surfaces are permitted to vary with time. The effects of reduced density on sodium oxide agglomerate behavior and of nonspherical shape of particles on aerosol behavior mechanisms are taken into account, and aerosol agglomeration due to turbulent air motion is considered. Also included is a capability to calculate aerosol concentration attenuation factors and to restart problems requiring long computing times.
A Probability Model of Accuracy in Deception Detection Experiments.
ERIC Educational Resources Information Center
Park, Hee Sun; Levine, Timothy R.
2001-01-01
Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…
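The conditional-probability decomposition behind the veracity effect can be made concrete in a few lines; the truth-bias value below is hypothetical:

```python
def accuracy(p_truth, truth_bias):
    """Overall detection accuracy under a pure truth-bias receiver:
    P(correct) = P(truth)*P(judge truth | truth)
               + P(lie)*P(judge lie | lie),
    assuming the receiver judges "truth" at a fixed rate truth_bias
    regardless of the actual message (an idealization)."""
    return p_truth * truth_bias + (1 - p_truth) * (1 - truth_bias)

# Accuracy rises linearly with the proportion of honest messages.
for base_rate in (0.25, 0.50, 0.75):
    print(f"truth base rate {base_rate:.2f} -> "
          f"accuracy {accuracy(base_rate, 0.7):.3f}")
```

Under this idealization, overall accuracy is a linear function of the truth base rate, which is why truth-biased receivers look more "accurate" when most messages are honest.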
Estimation of State Transition Probabilities: A Neural Network Model
NASA Astrophysics Data System (ADS)
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
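A delta-rule estimator driven by the state prediction error, in the spirit of the proposed algorithm; the transition probabilities and learning rate below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

true_P = np.array([0.7, 0.2, 0.1])   # true transition probs from one state
est = np.full(3, 1 / 3)              # initial uniform estimate
eta = 0.01                           # learning rate

for _ in range(20_000):
    nxt = rng.choice(3, p=true_P)    # observe one state transition
    target = np.zeros(3)
    target[nxt] = 1.0                # 1 for the observed successor, else 0
    est += eta * (target - est)      # prediction-error (delta-rule) update

print(np.round(est, 2))
```

Note the update preserves the row sum of 1 exactly, so the estimate remains a valid probability distribution while converging (in expectation) to the true transition probabilities.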
Modelling probabilities of heavy precipitation by regional approaches
NASA Astrophysics Data System (ADS)
Gaal, L.; Kysely, J.
2009-09-01
Extreme precipitation events are associated with large negative consequences for human society, mainly as they may trigger floods and landslides. The recent series of flash floods in central Europe (affecting several isolated areas) on June 24-28, 2009, the worst one over several decades in the Czech Republic as to the number of persons killed and the extent of damage to buildings and infrastructure, is an example. Estimates of growth curves and design values (corresponding e.g. to 50-yr and 100-yr return periods) of precipitation amounts, together with their uncertainty, are important in hydrological modelling and other applications. The interest in high quantiles of precipitation distributions is also related to possible climate change effects, as climate model simulations tend to project increased severity of precipitation extremes in a warmer climate. The present study compares - in terms of Monte Carlo simulation experiments - several methods to modelling probabilities of precipitation extremes that make use of ‘regional approaches’: the estimation of distributions of extremes takes into account data in a ‘region’ (‘pooling group’), in which one may assume that the distributions at individual sites are identical apart from a site-specific scaling factor (the condition is referred to as ‘regional homogeneity’). In other words, all data in a region - often weighted in some way - are taken into account when estimating the probability distribution of extremes at a given site. The advantage is that sampling variations in the estimates of model parameters and high quantiles are to a large extent reduced compared to the single-site analysis. We focus on the ‘region-of-influence’ (ROI) method which is based on the identification of unique pooling groups (forming the database for the estimation) for each site under study. The similarity of sites is evaluated in terms of a set of site attributes related to the distributions of extremes. The issue of
Sabelnikov, Alexander; Zhukov, Vladimir; Kempf, Ruth
2006-05-15
Real-time biosensors are expected to provide significant help in emergency response management should a terrorist attack with the use of biowarfare, BW, agents occur. In spite of recent and spectacular progress in the field of biosensors, several core questions still remain unaddressed. For instance, how sensitive should a sensor be? To what levels of infection would the different sensitivity limits correspond? How do the probabilities of identification correspond to the probabilities of infection by an agent? In this paper, an attempt was made to address these questions. A simple probability model was generated for the calculation of risks of infection of humans exposed to different doses of infectious agents and of the probability of their simultaneous real-time detection/identification by a model biosensor and its network. A model biosensor was defined as a single device that included an aerosol sampler and a device for identification by any known (or conceived) method. A network of biosensors was defined as a set of several single biosensors that operated in a similar way and dealt with the same amount of an agent. Neither the particular deployment of sensors within the network, nor the spatial and temporal distribution of agent aerosols due to wind, ventilation, humidity, temperature, etc., was considered by the model. Three model biosensors based on PCR-, antibody/antigen-, and MS-techniques were used for simulation. A wide range of their metric parameters encompassing those of commercially available and laboratory biosensors, and those of future, theoretically conceivable devices was used for several hundred simulations. Based on the analysis of the obtained results, it is concluded that small concentrations of aerosolized agents that are still able to provide significant risks of infection, especially for highly infectious agents (e.g. for smallpox those risks are 1, 8, and 37 infected out of 1000 exposed, depending on the viability of the virus preparation) will
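One common way to formalize the two sides of the question (infection risk vs. detection probability) is a one-hit exponential dose-response plus independent sensors. The paper's actual model is not specified here, and every parameter value below is hypothetical:

```python
import math

def p_infection(dose, id50):
    """One-hit exponential dose-response sketch:
    P(infect) = 1 - exp(-ln(2) * dose / ID50),
    where ID50 is the dose infecting half of those exposed."""
    return 1.0 - math.exp(-math.log(2.0) * dose / id50)

def p_network_detection(p_single, n_sensors):
    """Probability that at least one of n identical, independent
    sensors identifies the agent."""
    return 1.0 - (1.0 - p_single) ** n_sensors

dose = 5.0                # inhaled organisms (hypothetical)
print(f"risk of infection: {p_infection(dose, id50=100.0):.3f}")
print(f"network detection: {p_network_detection(0.4, 3):.3f}")
```

Comparing the two outputs for the same aerosol concentration is exactly the kind of sensitivity question the abstract raises: a dose carrying a few-percent infection risk may still sit below a single sensor's reliable detection limit.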
Model and test in a fungus of the probability that beneficial mutations survive drift.
Gifford, Danna R; de Visser, J Arjan G M; Wahl, Lindi M
2013-02-23
Determining the probability of fixation of beneficial mutations is critically important for building predictive models of adaptive evolution. Despite considerable theoretical work, models of fixation probability have stood untested for nearly a century. However, recent advances in experimental and theoretical techniques permit the development of models with testable predictions. We developed a new model for the probability of surviving genetic drift, a major component of fixation probability, for novel beneficial mutations in the fungus Aspergillus nidulans, based on the life-history characteristics of its colony growth on a solid surface. We tested the model by measuring the probability of surviving drift in 11 adapted strains introduced into wild-type populations of different densities. We found that the probability of surviving drift increased with mutant invasion fitness, and decreased with wild-type density, as expected. The model accurately predicted the survival probability for the majority of mutants, yielding one of the first direct tests of the extinction probability of beneficial mutations.
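The drift-survival probability can be illustrated with a simple branching-process simulation (not the authors' life-history model for A. nidulans colony growth): Poisson offspring with mean 1+s, compared against Haldane's classic 2s approximation.

```python
import numpy as np

rng = np.random.default_rng(4)

def survives(s, max_gen=200):
    """One mutant lineage as a branching process with Poisson(1+s)
    offspring per individual; True if it escapes loss by drift."""
    n = 1
    for _ in range(max_gen):
        if n == 0:
            return False
        if n > 1000:                 # effectively established
            return True
        n = rng.poisson((1 + s) * n)
    return n > 0

s, trials = 0.1, 5000
p_est = sum(survives(s) for _ in range(trials)) / trials
print(f"simulated survival probability: {p_est:.3f} "
      f"(Haldane's 2s = {2 * s:.2f})")
```

For Poisson offspring the exact survival probability solves 1 - p = exp(-(1+s)p), about 0.176 for s = 0.1, so the simulation sits a little below the 2s rule of thumb, as expected for a selection coefficient this large.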
Using skew-logistic probability density function as a model for age-specific fertility rate pattern.
Asili, Sahar; Rezaei, Sadegh; Najjar, Lotfollah
2014-01-01
Fertility rate is one of the most important global indexes. Past researchers found models which fit age-specific fertility rates. For example, mixture probability density functions have been proposed for situations with bi-modal fertility patterns. This model is less useful for unimodal age-specific fertility rate patterns, so a model based on a skew-symmetric (skew-normal) pdf was proposed by Mazzuco and Scarpa (2011), which was flexible for unimodal and bimodal fertility patterns. In this paper, we introduce the skew-logistic probability density function as a better model: its residuals are less than those of the skew-normal model and it can more precisely estimate the parameters of the model. PMID:24967404
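The skew-logistic density follows the same skew-symmetric construction as the skew-normal: 2·g(z)·G(shape·z), here with logistic pdf g and cdf G. A small sketch; the location/scale/shape values are chosen only to look fertility-like, not fitted to data:

```python
import numpy as np

def skew_logistic_pdf(x, loc=0.0, scale=1.0, shape=0.0):
    """Skew-symmetric construction 2*g(z)*G(shape*z)/scale with the
    standard logistic pdf g and cdf G, z = (x - loc)/scale."""
    z = (x - loc) / scale
    g = np.exp(-z) / (1 + np.exp(-z)) ** 2
    G = 1.0 / (1 + np.exp(-shape * z))
    return 2 * g * G / scale

# Sanity checks: the density integrates to ~1 and, for positive shape,
# its mode sits to the right of the location parameter.
x = np.linspace(-40, 90, 26001)
f = skew_logistic_pdf(x, loc=27, scale=5, shape=3)   # fertility-like ages
mass = float(np.sum(f) * (x[1] - x[0]))
mode_age = float(x[np.argmax(f)])
print(f"total mass {mass:.3f}, modal age {mode_age:.1f}")
```

A positive shape parameter produces the right skew typical of age-specific fertility schedules, which peak in the late twenties and decline slowly afterwards.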
Flexible regression model selection for survival probabilities: with application to AIDS.
DiRienzo, A Gregory
2009-12-01
Clinicians are often interested in the effect of covariates on survival probabilities at prespecified study times. Because different factors can be associated with the risk of short- and long-term failure, a flexible modeling strategy is pursued. Given a set of multiple candidate working models, an objective methodology is proposed that aims to construct consistent and asymptotically normal estimators of regression coefficients and average prediction error for each working model, that are free from the nuisance censoring variable. It requires the conditional distribution of censoring given covariates to be modeled. The model selection strategy uses step-up or step-down multiple hypothesis testing procedures that control either the proportion of false positives or generalized familywise error rate when comparing models based on estimates of average prediction error. The context can actually be cast as a missing data problem, where augmented inverse probability weighted complete case estimators of regression coefficients and prediction error can be used (Tsiatis, 2006, Semiparametric Theory and Missing Data). A simulation study and an interesting analysis of a recent AIDS trial are provided. PMID:19173693
Modeling the effect of reward amount on probability discounting.
Myerson, Joel; Green, Leonard; Morris, Joshua
2011-03-01
The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received.
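The amount-dependent hyperboloid can be written down directly. The parameter values below (h, k, and the power on amount) are illustrative, not the paper's fitted values:

```python
def discounted_value(amount, p, h=1.0, k=0.6, power=0.1):
    """Hyperboloid probability discounting, V = A / (1 + h*odds)**s,
    with an amount-dependent exponent s = k * A**power, where odds
    are the odds against receiving the reward, (1 - p) / p."""
    odds_against = (1 - p) / p
    s = k * amount ** power
    return amount / (1 + h * odds_against) ** s

# Larger probabilistic rewards are discounted more steeply: fraction of
# nominal value retained at p = 0.5 for a small vs. a large amount.
small = discounted_value(20, 0.5) / 20
large = discounted_value(10_000_000, 0.5) / 10_000_000
print(f"relative value at p = 0.5: $20 -> {small:.2f}, $10M -> {large:.2f}")
```

Making the exponent grow with amount is the paper's parsimonious device for capturing the finding that degree of discounting increases continuously with reward magnitude.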
ERIC Educational Resources Information Center
Cuevas, Eduardo J.
1997-01-01
Discusses cornerstone of Montessori theory, normalization, which asserts that if a child is placed in an optimum prepared environment where inner impulses match external opportunities, the undeviated self emerges, a being totally in harmony with its surroundings. Makes distinctions regarding normalization, normalized, and normality, indicating how…
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
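The SPRT decision rule compares a cumulative log-likelihood ratio against Wald's two thresholds; a minimal sketch with arbitrarily chosen error rates alpha and beta:

```python
import math

def sprt_bounds(alpha=0.05, beta=0.05):
    """Wald's thresholds on the cumulative log-likelihood ratio."""
    return math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)

def sprt_decision(log_lr, lower, upper):
    """Stop and classify once the log LR crosses a threshold,
    otherwise administer another item."""
    if log_lr <= lower:
        return "below cutscore"
    if log_lr >= upper:
        return "above cutscore"
    return "continue testing"

lo, hi = sprt_bounds()
print(sprt_decision(-3.1, lo, hi))   # -> below cutscore
print(sprt_decision(0.4, lo, hi))    # -> continue testing
```

In the IRT setting the log LR accumulates, item by item, the log ratio of response likelihoods at two ability points straddling the classification bound.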
ERIC Educational Resources Information Center
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…
Valve, normally open, titanium: Pyronetics Model 1425
NASA Technical Reports Server (NTRS)
Avalos, E.
1972-01-01
An operating test series was applied to two explosive actuated, normally open, titanium valves. There were no failures. Tests included: proof pressure and external leakage test, gross leak test, post actuation leakage test, and burst pressure test.
NASA Astrophysics Data System (ADS)
Baer, P.; Mastrandrea, M.
2006-12-01
Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question, what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
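The LKB quantities named above (gEUD with volume parameter n, TD50, slope parameter m) follow a standard closed form; a minimal sketch, not the authors' computationally simplified implementation, with illustrative parameter values:

```python
import math

def geud(doses, vols, n):
    """Generalized equivalent uniform dose: (sum_i v_i * D_i**(1/n))**n,
    where vols are fractional volumes summing to 1."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, vols)) ** n

def lkb_ntcp(doses, vols, n, td50, m):
    """LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50)), Phi the standard
    normal CDF, written here via math.erf."""
    t = (geud(doses, vols, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

By construction, a uniform dose equal to TD50 gives NTCP = 0.5; the exploratory refits in the abstract correspond to varying n and the α/β-dependent fractionation correction, which is omitted here.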
Hussein, M; Aldridge, S; Guerrero Urbano, T; Nisbet, A
2012-01-01
Objective The aim of this study was to investigate the effect of 6- and 15-MV photon energies on intensity-modulated radiation therapy (IMRT) prostate cancer treatment plan outcome and to compare the theoretical risks of secondary induced malignancies. Methods Separate prostate cancer IMRT plans were prepared for 6- and 15-MV beams. Organ-equivalent doses were obtained through thermoluminescent dosemeter measurements in an anthropomorphic Alderson radiation therapy human phantom. The neutron dose contribution at 15 MV was measured using polyallyl-diglycol-carbonate neutron track etch detectors. Risk coefficients from the International Commission on Radiological Protection Report 103 were used to compare the risk of fatal secondary induced malignancies in out-of-field organs and tissues for 6 and 15 MV. For the bladder and the rectum, a comparative evaluation of the risk using three separate models was carried out. Dose–volume parameters for the rectum, bladder and prostate planning target volume were evaluated, as well as normal tissue complication probability (NTCP) and tumour control probability calculations. Results There is a small increased theoretical risk of developing a fatal cancer from 6 MV compared with 15 MV, taking into account all the organs. Dose–volume parameters for the rectum and bladder show that 15 MV results in better volume sparing in the regions below 70 Gy, but the volume exposed increases slightly beyond this in comparison with 6 MV, resulting in a higher NTCP for the rectum of 3.6% vs 3.0% (p=0.166). Conclusion The choice to treat using IMRT at 15 MV should not be excluded, but should be based on risk vs benefit while considering the age and life expectancy of the patient together with the relative risk of radiation-induced cancer and NTCPs. PMID:22010028
Simplifying Probability Elicitation and Uncertainty Modeling in Bayesian Networks
Paulson, Patrick R; Carroll, Thomas E; Sivaraman, Chitra; Neorr, Peter A; Unwin, Stephen D; Hossain, Shamina S
2011-04-16
In this paper we contribute two methods that simplify the demands of knowledge elicitation for particular types of Bayesian networks. The first method simplifies the task of providing probabilities when the states that a random variable takes can be described by a new, fully ordered state set in which each state implies all the preceding states. The second method leverages the Dempster-Shafer theory of evidence to provide a way for the expert to express the degree of ignorance they feel about the estimates being provided.
A novel human error probability assessment using fuzzy modeling.
Ung, Shuen-Tai; Shen, Wei-Min
2011-05-01
Human error is one of the significant factors contributing to accidents. Traditional human error probability (HEP) studies based on fuzzy number concepts are one of the contributions addressing such a problem. They are particularly useful under circumstances where data are lacking. However, the discriminability of such studies may be questioned when applied under circumstances where experts have adequate information and specific values can be determined in the abscissa of the membership function of linguistic terms, that is, when the fuzzy data of each scenario considered are close to each other. In this article, a novel HEP assessment aimed at solving such a difficulty is proposed. Under the framework, the fuzzy data are equipped with linguistic terms and membership values. By establishing a rule base for data combination, followed by the defuzzification and HEP transformation processes, the HEP results can be acquired. The methodology is first examined using a test case consisting of three different scenarios in which the fuzzy data are close to each other. The results generated are compared with the outcomes produced from traditional fuzzy HEP studies using the same test case. It is concluded that the methodology proposed in this study has a higher degree of discriminability and is capable of providing more reasonable results. Furthermore, in situations where data are lacking, the proposed approach is also capable of providing the range of HEP results based on different risk viewpoints arbitrarily established, as illustrated using a real-world example. PMID:21143260
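The defuzzification step described above is often a discrete centroid computation; a hedged sketch in which the candidate values and membership grades are invented for illustration (the study's actual rule base and linguistic terms are not reproduced):

```python
def centroid_defuzzify(xs, mus):
    """Discrete centroid defuzzification: sum(x * mu(x)) / sum(mu(x))."""
    return sum(x * m for x, m in zip(xs, mus)) / sum(mus)

# Hypothetical aggregated fuzzy output over candidate HEP values (log10 scale)
xs = [-4.0, -3.0, -2.0]   # candidate log10(HEP) values
mus = [0.2, 1.0, 0.2]     # membership grades after rule firing
log_hep = centroid_defuzzify(xs, mus)
hep = 10 ** log_hep       # crisp HEP estimate
```

For this symmetric example the centroid falls on the middle candidate, giving an HEP of 10^-3.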
Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz
An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 − k log p)^−1. Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed. PMID:27303338
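The weighting function given above, w(p) = (1 − k log p)^−1, is straightforward to implement; a minimal sketch (k is a free parameter fit per participant in the study; any value used here is illustrative):

```python
import math

def w_hyperbolic(p, k):
    """Probability weighting from hyperbolic discounting of a geometric
    waiting time: w(p) = 1 / (1 - k * ln p), for 0 < p <= 1 and k > 0."""
    return 1.0 / (1.0 - k * math.log(p))
```

Note that w(1) = 1 for any k, and w is strictly increasing in p, as a weighting function should be.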
Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-01-01
Objective: The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Methods: 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy–oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organs at risk (OARs) coverage were assessed using calculation of dose–volume histogram, gEUD, TCP and NTCP. For this purpose, in-house software was developed and used. Results: The standard deviations (1SDs) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. Conclusion: The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can also be used to optimize the treatment plan established for our patients. Advances in knowledge: The g
Jakobi, Annika; Bandurska-Luque, Anna; Stützer, Kristin; Haase, Robert; Löck, Steffen; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniela; and others
2015-08-01
Purpose: The purpose of this study was to determine, by treatment plan comparison along with normal tissue complication probability (NTCP) modeling, whether a subpopulation of patients with head and neck squamous cell carcinoma (HNSCC) could be identified that would gain substantial benefit from proton therapy in terms of NTCP. Methods and Materials: For 45 HNSCC patients, intensity modulated radiation therapy (IMRT) was compared to intensity modulated proton therapy (IMPT). Physical dose distributions were evaluated as well as the resulting NTCP values, using modern models for acute mucositis, xerostomia, aspiration, dysphagia, laryngeal edema, and trismus. Patient subgroups were defined based on primary tumor location. Results: Generally, IMPT reduced the NTCP values while keeping similar target coverage for all patients. Subgroup analyses revealed a higher individual reduction of swallowing-related side effects by IMPT for patients with tumors in the upper head and neck area, whereas the risk reduction of acute mucositis was more pronounced in patients with tumors in the larynx region. More patients with tumors in the upper head and neck area had a reduction in NTCP of more than 10%. Conclusions: Subgrouping can help to identify patients who may benefit more than others from the use of IMPT and, thus, can be a useful tool for a preselection of patients in the clinic where there are limited PT resources. Because the individual benefit differs within a subgroup, the relative merits should additionally be evaluated by individual treatment plan comparisons.
Fales, Roger
2010-10-01
In this work, a method for determining the reliability of dynamic systems is discussed. Using statistical information on system parameters, the goal is to determine the probability of a dynamic system achieving or not achieving frequency domain performance specifications such as low frequency tracking error and bandwidth. An example system is considered with closed loop control. A performance specification is given and converted into a performance weight transfer function. The example system is found to have a 20% chance of not achieving the given performance specification. An example of a realistic higher order system model of an electrohydraulic valve with spring feedback and position measurement feedback is also considered. The spring rate and viscous friction are considered as random variables with normal distributions. It was found that nearly 6% of valve systems would not achieve the given frequency domain performance requirement. Uncertainty modeling is also considered. An uncertainty model for the hydraulic valve systems is presented with the same uncertain parameters as in the previous example. However, the uncertainty model was designed such that only 95% of plants would be covered by the uncertainty model. This uncertainty model was applied to the valve control system example in a robust performance test.
Aircraft detection based on probability model of structural elements
NASA Astrophysics Data System (ADS)
Chen, Long; Jiang, Zhiguo
2014-11-01
Detecting aircraft is important in the field of remote sensing. In past decades, researchers used various approaches to detect aircraft based on classifiers for whole aircraft. However, with the development of high-resolution images, the internal structures of aircraft should now also be taken into consideration. To address this issue, a novel aircraft detection method for satellite images based on a probabilistic topic model is presented. We model aircraft as connected structural elements rather than features. The proposed method contains two major steps: 1) use a Cascade-AdaBoost classifier to identify the structural elements of an aircraft; 2) connect these structural elements into aircraft, where the relationships between elements are estimated by a hierarchical topic model. The model places strict spatial constraints on structural elements, which can identify differences between similar features. The experimental results demonstrate the effectiveness of the approach.
Fleurence, Rachael L; Hollenbeak, Christopher S
2007-01-01
Economic modelling is increasingly being used to evaluate the cost effectiveness of health technologies. One of the requirements for good practice in modelling is the appropriate application of rates and probabilities. In spite of previous descriptions of the appropriate use of rates and probabilities, confusion persists beyond a simple understanding of their definitions. The objective of this article is to provide a concise guide to understanding the issues surrounding the use of rates and probabilities reported in the literature in economic models, and an understanding of when and how to transform them appropriately. The article begins by defining rates and probabilities and shows the essential difference between the two measures. Appropriate conversions between rates and probabilities are discussed, and simple examples are provided to illustrate the techniques and pitfalls. How the transformed rates and probabilities may be used in economic models is then described and some recommendations are suggested.
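The standard transformations between a constant rate r over time t and a probability, which this kind of guide describes, are short; a minimal sketch:

```python
import math

def rate_to_prob(r, t=1.0):
    """Probability of at least one event in time t under constant rate r."""
    return 1.0 - math.exp(-r * t)

def prob_to_rate(p, t=1.0):
    """Constant rate implied by probability p observed over time t."""
    return -math.log(1.0 - p) / t
```

A common pitfall in this setting is rescaling a probability linearly (e.g. halving a 1-year probability to obtain a 6-month one); converting through the rate avoids this, because probabilities do not add across periods while rates do.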
Hewson, Alex C; Bauer, Johannes
2010-03-24
We show that information on the probability density of local fluctuations can be obtained from a numerical renormalization group calculation of a reduced density matrix. We apply this approach to the Anderson-Holstein impurity model to calculate the ground state probability density ρ(x) for the displacement x of the local oscillator. From this density we can deduce an effective local potential for the oscillator and compare its form with that obtained from a semiclassical approximation as a function of the coupling strength. The method is extended to the infinite dimensional Holstein-Hubbard model using dynamical mean field theory. We use this approach to compare the probability densities for the displacement of the local oscillator in the normal, antiferromagnetic and charge ordered phases.
Modeling Outcomes from Probability Tasks: Sixth Graders Reasoning Together
ERIC Educational Resources Information Center
Alston, Alice; Maher, Carolyn
2003-01-01
This report considers the reasoning of sixth grade students as they explore problem tasks concerning the fairness of dice games. The particular focus is the students' interactions, verbal and non-verbal, as they build and justify representations that extend their basic understanding of number combinations in order to model the outcome set of a…
Physical model assisted probability of detection in nondestructive evaluation
Li, M.; Meeker, W. Q.; Thompson, R. B.
2011-06-23
Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
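The "standard statistical model" mentioned above, a linear regression of (transformed) signal response on (transformed) flaw size with a detection threshold, yields the familiar POD curve; a minimal sketch with illustrative coefficients (the paper's physics-assisted extensions are not shown):

```python
import math

def pod(a, beta0, beta1, sigma, threshold):
    """POD(a) = P(signal > threshold) under the a-hat-vs-a model
    signal = beta0 + beta1*ln(a) + N(0, sigma^2)."""
    mu = beta0 + beta1 * math.log(a)
    z = (threshold - mu) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

POD rises with flaw size a and equals 0.5 where the mean signal crosses the detection threshold.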
A simulation model for estimating probabilities of defects in welds
Chapman, O.J.V.; Khaleel, M.A.; Simonen, F.A.
1996-12-01
In recent work for the US Nuclear Regulatory Commission in collaboration with Battelle Pacific Northwest National Laboratory, Rolls-Royce and Associates, Ltd., has adapted an existing model for piping welds to address welds in reactor pressure vessels. This paper describes the flaw estimation methodology as it applies to flaws in reactor pressure vessel welds (but not flaws in base metal or flaws associated with the cladding process). Details of the associated computer software (RR-PRODIGAL) are provided. The approach uses expert elicitation and mathematical modeling to simulate the steps in manufacturing a weld and the errors that lead to different types of weld defects. The defects that may initiate in weld beads include center cracks, lack of fusion, slag, pores with tails, and cracks in heat affected zones. Various welding processes are addressed including submerged metal arc welding. The model simulates the effects of both radiographic and dye penetrant surface inspections. Output from the simulation gives occurrence frequencies for defects as a function of both flaw size and flaw location (surface connected and buried flaws). Numerical results are presented to show the effects of submerged metal arc versus manual metal arc weld processes.
Inferring Tree Causal Models of Cancer Progression with Probability Raising
Mauri, Giancarlo; Antoniotti, Marco; Mishra, Bud
2014-01-01
Existing techniques to reconstruct tree models of progression for accumulative processes, such as cancer, seek to estimate causation by combining correlation and a frequentist notion of temporal priority. In this paper, we define a novel theoretical framework called CAPRESE (CAncer PRogression Extraction with Single Edges) to reconstruct such models based on the notion of probabilistic causation defined by Suppes. We consider a general reconstruction setting complicated by the presence of noise in the data due to biological variation, as well as experimental or measurement errors. To improve tolerance to noise we define and use a shrinkage-like estimator. We prove the correctness of our algorithm by showing asymptotic convergence to the correct tree under mild constraints on the level of noise. Moreover, on synthetic data, we show that our approach outperforms the state-of-the-art, that it is efficient even with a relatively small number of samples and that its performance quickly converges to its asymptote as the number of samples increases. For real cancer datasets obtained with different technologies, we highlight biologically significant differences in the progressions inferred with respect to other competing techniques and we also show how to validate conjectured biological relations with progression models. PMID:25299648
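The probabilistic-causation criterion underlying CAPRESE is Suppes' probability raising; a minimal sketch of the pairwise test on binary mutation profiles (the shrinkage estimator and tree reconstruction of the paper are omitted; the field names are invented):

```python
def raises_probability(events, cause, effect):
    """Suppes' condition: P(effect | cause) > P(effect | not cause).
    `events` is a list of dicts mapping mutation name -> bool."""
    with_c = [e[effect] for e in events if e[cause]]
    without_c = [e[effect] for e in events if not e[cause]]
    if not with_c or not without_c:
        return False
    p_given_c = sum(with_c) / len(with_c)
    p_given_not_c = sum(without_c) / len(without_c)
    return p_given_c > p_given_not_c
```

On correlated samples the condition holds; on independent ones the strict inequality fails.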
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
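When the date of the last event is completely unknown, averaging the conditional renewal probability over the equilibrium distribution of elapsed time (density S(u)/μ) collapses to P(event within T) = (1/μ) ∫_0^T S(v) dv, with S the survival function and μ the mean recurrence interval. A sketch under the assumption of a lognormal recurrence distribution (parameters illustrative; the paper's preferred renewal models are not reproduced):

```python
import math

def lognorm_sf(x, mu, sigma):
    """Survival function of a lognormal recurrence-time distribution."""
    if x <= 0.0:
        return 1.0
    return 0.5 * (1.0 - math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def renewal_prob_unknown_last(T, mu, sigma, steps=4000):
    """P(event within T | date of last event unknown)
    = (1/mean) * integral_0^T S(v) dv, via midpoint quadrature."""
    mean = math.exp(mu + 0.5 * sigma ** 2)
    h = T / steps
    integral = sum(lognorm_sf((i + 0.5) * h, mu, sigma) for i in range(steps)) * h
    return integral / mean

def poisson_prob(T, mean):
    """Time-independent Poisson approximation with the same mean rate."""
    return 1.0 - math.exp(-T / mean)
```

With a mean recurrence of 100 yr and low aperiodicity (sigma = 0.3), a 30-yr window gives about 0.30 versus about 0.26 for the Poisson approximation, illustrating the >10% excess the abstract describes.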
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than Carrington) but with a wide 95% confidence interval of [490, 1187] nT.
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington) but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
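For complete, unweighted data, the maximum-likelihood lognormal fit described above reduces to moments of the log-transformed maxima; a minimal sketch (the data below are invented, not the −Dst record, and the weighted least-squares variant and bootstrap confidence limits are omitted):

```python
import math

def lognormal_mle(data):
    """MLE of lognormal parameters: mu and sigma are the mean and
    (population) standard deviation of the log-transformed data."""
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / n
    return mu, math.sqrt(var)

def exceedance_prob(x, mu, sigma):
    """P(X > x) for a lognormal with the fitted parameters."""
    z = (math.log(x) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 - math.erf(z))
```

Multiplying the exceedance probability by the observed number of storms per century gives the per-century rate of storms above a threshold, the quantity extrapolated in the abstract.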
ERIC Educational Resources Information Center
Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan
2014-01-01
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…
Cold and hot cognition: quantum probability theory and realistic psychological modeling.
Corr, Philip J
2013-06-01
Typically, human decision making is emotionally "hot" and does not conform to "cold" classical probability (CP) theory. As quantum probability (QP) theory emphasises order, context, superposition states, and nonlinear dynamic effects, one of its major strengths may be its power to unify formal modeling and realistic psychological theory (e.g., information uncertainty, anxiety, and indecision, as seen in the Prisoner's Dilemma).
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
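The parameter-free Bayesian benchmark in the urn-ball task is plain Bayes' rule; a minimal sketch with an illustrative one-parameter probability weighting used to distort probabilities (the study's actual weighting family and hierarchical priors differ and are not reproduced):

```python
def bayes_posterior(prior_a, like_a, like_b):
    """P(urn A | ball) for a two-urn task via Bayes' rule."""
    num = prior_a * like_a
    return num / (num + (1.0 - prior_a) * like_b)

def distort(p, gamma):
    """Illustrative one-parameter S-shaped probability weighting
    (Tversky-Kahneman form); gamma = 1 is the identity."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
```

A "distorted" model replaces prior_a and the likelihoods with distort(...) before applying Bayes' rule, which is the kind of subjective-probability modification the abstract reports as improving fit.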
Application of the response probability density function technique to biodynamic models.
Hershey, R L; Higgins, T H
1978-01-01
A method has been developed, which we call the "response probability density function technique," which has applications in predicting the probability of injury in a wide range of biodynamic situations. The method, which was developed in connection with sonic boom damage prediction, utilized the probability density function of the excitation force and the probability density function of the sensitivity of the material being acted upon. The method is especially simple to use when both these probability density functions are lognormal. Studies thus far have shown that the stresses from sonic booms, as well as the strengths of glass and mortars, are distributed lognormally. Some biodynamic processes also have lognormal distributions and are, therefore, amenable to modeling by this technique. In particular, this paper discusses the application of the response probability density function technique to the analysis of the thoracic response to air blast and the prediction of skull fracture from head impact. PMID:623590
Discrete Latent Markov Models for Normally Distributed Response Data
ERIC Educational Resources Information Center
Schmittmann, Verena D.; Dolan, Conor V.; van der Maas, Han L. J.; Neale, Michael C.
2005-01-01
Van de Pol and Langeheine (1990) presented a general framework for Markov modeling of repeatedly measured discrete data. We discuss analogous single indicator models for normally distributed responses. In contrast to discrete models, which have been studied extensively, analogous continuous response models have hardly been considered. These…
NASA Astrophysics Data System (ADS)
Sangaletti Terçariol, César Augusto; de Moura Kiipper, Felipe; Souto Martinez, Alexandre
2007-03-01
Consider that the coordinates of N points are randomly generated along the edges of a d-dimensional hypercube (random point problem). The probability P(d,N)_{m,n} that an arbitrary point is the mth nearest neighbour to its own nth nearest neighbour (Cox probabilities) plays an important role in spatial statistics. It has also been useful in the description of physical processes in disordered media. Here we propose a simpler derivation of Cox probabilities, in which we stress the role played by the system dimensionality d. In the limit d → ∞, the distances between pairs of points become independent (random link model) and closed analytical forms for the neighbourhood probabilities are obtained, both in the thermodynamic limit and for finite-size systems. Breaking the distance symmetry constraint leads to the random map model, for which the Cox probabilities are obtained for two cases: whether or not a point is its own nearest neighbour.
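The Cox probabilities described above are straightforward to estimate by direct simulation. The sketch below draws N points in the unit d-cube and checks neighbour ranks; the function name and all numeric defaults are illustrative choices, not the paper's analytical derivation.

```python
import math
import random

def cox_probability(m, n, N=120, d=2, trials=1200, seed=1):
    """Monte Carlo estimate of the Cox probability P(d,N)_{m,n}: the
    chance that a point is the m-th nearest neighbour of its own
    n-th nearest neighbour (illustrative sketch only)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [tuple(rng.random() for _ in range(d)) for _ in range(N)]
        # n-th nearest neighbour j of the reference point 0
        others = sorted(range(1, N), key=lambda k: math.dist(pts[0], pts[k]))
        j = others[n - 1]
        # rank of point 0 among the neighbours of j
        ranked = sorted((k for k in range(N) if k != j),
                        key=lambda k: math.dist(pts[j], pts[k]))
        if ranked.index(0) + 1 == m:
            hits += 1
    return hits / trials
```

For m = n = 1 and d = 2 this estimates the mutual-nearest-neighbour fraction, which is known to be roughly 0.62 for Poisson points in the plane.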
Justifying Database Normalization: A Cost/Benefit Model.
ERIC Educational Resources Information Center
Lee, Heeseok
1995-01-01
Proposes a cost/benefit model coupled with a decision tree for determining normal forms, which are used in information systems development processes to group data into well-refined structures. The three primary variables that impact the benefits and costs of normalization (reduced anomalies, storage requirements, and transaction response times)…
A model for the probability density function of downwelling irradiance under ocean waves.
Shen, Meng; Xu, Zao; Yue, Dick K P
2011-08-29
We present a statistical model that analytically quantifies the probability density function (PDF) of the downwelling light irradiance under random ocean waves, modeling the surface as independent and identically distributed flat facets. The model can incorporate the separate effects of surface short waves and volume light scattering. The theoretical model captures the characteristics of the PDF, from a skewed to a near-Gaussian shape as the depth increases from shallow to deep water. The model obtains a closed-form asymptotic expression for the tail probability, which diminishes at a rate between exponential and Gaussian with increasing extreme values. The model is validated by comparisons with existing field measurements and Monte Carlo simulation.
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Zero Density of Open Paths in the Lorentz Mirror Model for Arbitrary Mirror Probability
NASA Astrophysics Data System (ADS)
Kraemer, Atahualpa S.; Sanders, David P.
2014-09-01
We show, incorporating results obtained from numerical simulations, that in the Lorentz mirror model, the density of open paths in any finite box tends to 0 as the box size tends to infinity, for any mirror probability.
Simpson, Daniel R.; Song, William Y.; Moiseenko, Vitali; Rose, Brent S.; Yashar, Catheryn M.; Mundt, Arno J.; Mell, Loren K.
2012-05-01
Purpose: To test the hypothesis that increased bowel radiation dose is associated with acute gastrointestinal (GI) toxicity in cervical cancer patients undergoing concurrent chemotherapy and intensity-modulated radiation therapy (IMRT), using a previously derived normal tissue complication probability (NTCP) model. Methods: Fifty patients with Stage I-III cervical cancer undergoing IMRT and concurrent weekly cisplatin were analyzed. Acute GI toxicity was graded using the Radiation Therapy Oncology Group scale, excluding upper GI events. A logistic model was used to test correlations between acute GI toxicity and bowel dosimetric parameters. The primary objective was to test the association between Grade ≥2 GI toxicity and the volume of bowel receiving ≥45 Gy (V45) using the logistic model. Results: Twenty-three patients (46%) had Grade ≥2 GI toxicity. The mean (SD) V45 was 143 mL (99). The mean V45 values for patients with and without Grade ≥2 GI toxicity were 176 vs. 115 mL, respectively. Twenty patients (40%) had V45 >150 mL. The proportion of patients with Grade ≥2 GI toxicity with and without V45 >150 mL was 65% vs. 33% (p = 0.03). Logistic model parameter estimates V50 and γ were 161 mL (95% confidence interval [CI] 60-399) and 0.31 (95% CI 0.04-0.63), respectively. On multivariable logistic regression, increased V45 was associated with increased odds of Grade ≥2 GI toxicity (odds ratio 2.19 per 100 mL, 95% CI 1.04-4.63, p = 0.04). Conclusions: Our results support the hypothesis that increasing bowel V45 is correlated with increased GI toxicity in cervical cancer patients undergoing IMRT and concurrent cisplatin. Reducing bowel V45 could reduce the risk of Grade ≥2 GI toxicity by approximately 50% per 100 mL of bowel spared.
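The fitted logistic model can be sketched numerically. One common parametrization of a logistic NTCP curve is NTCP(V) = 1/(1 + (V50/V)^(4γ)); that functional form is our assumption, while V50 = 161 mL, γ = 0.31, and the odds ratio of 2.19 per 100 mL come from the abstract.

```python
def ntcp_logistic(v45_ml, v50=161.0, gamma=0.31):
    """Logistic NTCP as a function of bowel V45 (mL). The
    parametrization NTCP = 1/(1 + (v50/v)**(4*gamma)) is assumed;
    v50 and gamma are the abstract's point estimates."""
    return 1.0 / (1.0 + (v50 / v45_ml) ** (4.0 * gamma))

def odds_ratio(delta_ml, or_per_100=2.19):
    """Odds ratio implied by the reported 2.19 per 100 mL for an
    arbitrary volume increase delta_ml."""
    return or_per_100 ** (delta_ml / 100.0)
```

At the cohort mean V45 of 143 mL this form gives an NTCP of about 0.46, consistent with the observed 46% toxicity rate.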
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2000-01-01
We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds, when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2002-01-01
Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
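The removal estimator described in both abstracts can be sketched as a small grid-search MLE. Assuming a constant per-minute probability q of a bird remaining undetected, the probability of first detection in the interval (c_{j-1}, c_j] is q^{c_{j-1}} - q^{c_j}; conditioning on detection within the 10-min count gives a multinomial likelihood. The function and its defaults are illustrative, not the authors' code.

```python
import math

def removal_mle(counts, cum_minutes=(3, 5, 10), grid=10000):
    """Grid-search MLE for the removal model. counts[j] is the number
    of birds first detected in interval j (the default 3, 2 and 5 min
    intervals give cumulative endpoints 3, 5, 10). Returns
    (q_hat, detectability), where detectability = 1 - q_hat**T is the
    probability of detecting a bird at least once during the count."""
    edges = (0,) + tuple(cum_minutes)
    best_q, best_ll = None, -math.inf
    for k in range(1, grid):
        q = k / grid
        denom = 1.0 - q ** edges[-1]  # condition on detection within T
        ll = sum(nj * math.log((q ** lo - q ** hi) / denom)
                 for nj, lo, hi in zip(counts, edges, edges[1:]))
        if ll > best_ll:
            best_q, best_ll = q, ll
    return best_q, 1.0 - best_q ** edges[-1]
```

Counts generated from the model with q = 0.9 are recovered almost exactly, with an overall detectability of about 0.65.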
Normal seasonal variations for atmospheric radon concentration: a sinusoidal model.
Hayashi, Koseki; Yasuoka, Yumi; Nagahama, Hiroyuki; Muto, Jun; Ishikawa, Tetsuo; Omori, Yasutaka; Suzuki, Toshiyuki; Homma, Yoshimi; Mukai, Takahiro
2015-01-01
Anomalous radon readings in air have been reported before an earthquake activity. However, careful measurements of atmospheric radon concentrations during a normal period are required to identify anomalous variations in a precursor period. In this study, we obtained radon concentration data for 5 years (2003-2007) that can be considered a normal period and compared it with data from the precursory period of 2008 until March 2011, when the 2011 Tohoku-Oki Earthquake occurred. Then, we established a model for seasonal variation by fitting a sinusoidal model to the radon concentration data during the normal period, considering that the seasonal variation was affected by atmospheric turbulence. By determining the amplitude in the sinusoidal model, the normal variation of the radon concentration can be estimated. Thus, the results of this method can be applied to identify anomalous radon variations before an earthquake.
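The seasonal-variation fit can be sketched by linearizing the sinusoid: y(t) = c + a·sin(ωt) + b·cos(ωt) with ω = 2π/365 is linear in (c, a, b), so ordinary least squares applies, and the seasonal amplitude is √(a² + b²). This is a generic sketch of the fitting step, not the authors' implementation.

```python
import math

def fit_seasonal(days, radon):
    """Least-squares fit of the sinusoidal seasonal model
    y(t) = c + a*sin(2*pi*t/365) + b*cos(2*pi*t/365).
    Returns (c, amplitude, phase)."""
    w = 2.0 * math.pi / 365.0
    # design matrix columns: 1, sin(wt), cos(wt)
    X = [(1.0, math.sin(w * t), math.cos(w * t)) for t in days]
    # normal equations XtX * beta = Xty, solved by Gauss-Jordan elimination
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(X, radon)) for i in range(3)]
    M = [row[:] + [v] for row, v in zip(XtX, Xty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    c, a, b = (M[i][3] / M[i][i] for i in range(3))
    return c, math.hypot(a, b), math.atan2(b, a)
```

Subtracting the fitted sinusoid from new observations then exposes departures from the normal seasonal variation, which is the anomaly-detection step the abstract describes.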
General properties of different models used to predict normal tissue complications due to radiation
Kuperman, V. Y.
2008-11-15
In the current study the author analyzes general properties of three different models used to predict normal tissue complications due to radiation: (1) Surviving fraction of normal cells in the framework of the linear quadratic (LQ) equation for cell kill, (2) the Lyman-Kutcher-Burman (LKB) model for normal tissue complication probability (NTCP), and (3) generalized equivalent uniform dose (gEUD). For all considered cases the author assumes fixed average dose to an organ of interest. The author's goal is to establish whether maximizing dose uniformity in the irradiated normal tissues is radiobiologically beneficial. Assuming that NTCP increases with increasing overall cell kill, it is shown that NTCP in the LQ model is maximized for uniform dose. Conversely, NTCP in the LKB and gEUD models is always smaller for a uniform dose to a normal organ than that for a spatially varying dose if parameter n in these models is small (i.e., n<1). The derived conflicting properties of the considered models indicate the need for more studies before these models can be utilized clinically for plan evaluation and/or optimization of dose distributions. It is suggested that partial-volume irradiation can be used to establish the validity of the considered models.
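Two of the three models analyzed, gEUD and LKB, can be written down compactly. For a DVH given as (dose, fractional volume) bins, gEUD = (Σ vᵢ dᵢ^{1/n})^n, and the LKB NTCP is the standard normal CDF of t = (gEUD − TD50)/(m·TD50). The sketch below uses generic parameter values; organ-specific fits would be needed in practice.

```python
import math

def geud(doses, volumes, n):
    """Generalized equivalent uniform dose for DVH bins (doses in Gy,
    fractional volumes summing to 1); n is the volume-effect parameter."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

def ntcp_lkb(doses, volumes, n, td50, m):
    """Lyman-Kutcher-Burman NTCP: the standard normal CDF of the
    standardized gEUD."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

This makes the abstract's point about n < 1 explicit: with n = 0.5, a non-uniform plan delivering 40 and 60 Gy to equal half-volumes has a higher gEUD, and hence a higher LKB NTCP, than a uniform 50 Gy plan with the same mean dose.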
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Nugmanov, I. S.
2016-08-01
The estimation of the probability density function of a random process is one of the most common problems in practice, and several methods exist to solve it. The laboratory work presented here uses methods of mathematical statistics to detect patterns in realizations of a random process. On the basis of ergodic theory, we construct an algorithm for estimating the univariate probability density function of a random process. Correlational analysis of the realizations is applied to estimate the necessary sample size and observation time. Hypothesis testing for two probability distributions (normal and Cauchy) is carried out on the experimental data using the χ2 criterion. To facilitate understanding and clarity, we use the ELVIS II platform and the LabVIEW software package, which allow us to make the necessary calculations, display the results of the experiment and, most importantly, control the experiment. At the same time, students are introduced to the LabVIEW software package and its capabilities.
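The χ2 check in this lab can be sketched in a few lines: fit a normal distribution by the sample mean and standard deviation, bin the data into cells that are equiprobable under the fit, and compare observed with expected counts. The equiprobable-binning choice and all names below are ours, not the lab's LabVIEW code.

```python
import math
import random

def chi2_normal(sample, bins=10):
    """Pearson chi-square statistic for testing a sample against a
    normal distribution fitted by its sample mean and standard
    deviation, using bins equiprobable under the fit."""
    n = len(sample)
    mu = sum(sample) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in sample) / (n - 1))

    def norm_ppf(p):
        # standard-normal quantile by bisecting the CDF
        lo, hi = -10.0, 10.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
                lo = mid
            else:
                hi = mid
        return mu + sd * 0.5 * (lo + hi)

    edges = [norm_ppf(k / bins) for k in range(1, bins)]
    counts = [0] * bins
    for x in sample:
        counts[sum(1 for e in edges if x > e)] += 1
    expected = n / bins
    return sum((c - expected) ** 2 / expected for c in counts)

# demo data: a genuinely normal sample and a Cauchy sample
rng = random.Random(42)
gauss_sample = [rng.gauss(0.0, 1.0) for _ in range(1000)]
cauchy_sample = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(1000)]
```

For the normal sample the statistic stays near its expected value (around the number of bins); for the heavy-tailed Cauchy sample it is larger by orders of magnitude, which is the distinction the lab's hypothesis test exploits.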
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
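The zero-inflated Beta distribution named in the abstract can be sketched directly: a point mass π0 at zero mixed with a Beta(a, b) density on (0, 1). The Bayesian mixed-model machinery is omitted; all names and parameter values are illustrative.

```python
import math
import random

def zib_density(x, pi0, a, b):
    """Density/mass of a zero-inflated Beta: point mass pi0 at zero,
    plus (1 - pi0) times a Beta(a, b) density on (0, 1)."""
    if x == 0.0:
        return pi0
    beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return (1 - pi0) * x ** (a - 1) * (1 - x) ** (b - 1) / beta

def zib_sample(pi0, a, b, rng=random):
    """Draw one value: zero with probability pi0, else Beta(a, b)."""
    if rng.random() < pi0:
        return 0.0
    return rng.betavariate(a, b)
```

Here π0 plays the role of the probability that a scaled log10 Pc value equals the effective zero, which is exactly the quantity the abstract's final question asks about.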
NASA Astrophysics Data System (ADS)
Pu, H. C.; Lin, C. H.
2016-05-01
To investigate the seismic behavior of crustal deformation, we deployed a dense seismic network in the Hsinchu area of northwestern Taiwan between 2004 and 2006. Based on the abundant local micro-earthquakes recorded by this network, we successfully determined 274 focal mechanisms among ∼1300 seismic events. Interestingly, the dominant energy of seismic strike-slip and normal faulting mechanisms repeatedly alternated within the two years. The strike-slip and normal faulting earthquakes were largely accompanied by surface slipping along N60°E and uplifting, respectively, as obtained from continuous GPS data. These phenomena probably resulted from slow uplifts in the mid-crust beneath the northwestern Taiwan area. When the deep slow uplift was active below 10 km depth along either the boundary fault or a blind fault, the push of the uplifting material would simultaneously produce both normal faulting earthquakes at shallow depths (0-10 km) and slight surface uplifting. When the deep slow uplift stopped, by contrast, strike-slip faulting earthquakes dominated as usual, driven by the strong horizontal plate convergence in Taiwan. Since the normal faulting earthquakes repeatedly dominated every 6 or 7 months between 2004 and 2006, we may conclude that slow slip events in the mid-crust frequently released accumulated tectonic stress in the Hsinchu area.
Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice
NASA Astrophysics Data System (ADS)
Chen, Haiyan; Zhang, Fuji
2013-08-01
In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give the exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with certain given boundary condition, which was found from numerical evidence by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990)], 10.1051/jphys:0199000510110107700 but without a proof.
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥ 6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥ 6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥ 6.7 earthquakes in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
Assessing the Comprehensive Seismic Earth Model using normal mode data
NASA Astrophysics Data System (ADS)
Koelemeijer, Paula; Afanasiev, Michael; Fichtner, Andreas; Gokhberg, Alexey
2016-04-01
Advances in computational resources and numerical methods allow the simulation of realistic seismic wave propagation through complex media, while ensuring that the complete wave field is correctly represented in synthetic seismograms. This full waveform inversion is widely applied on regional and continental scales, where particularly dense data sampling can be achieved, leading to increased resolution in the obtained model images. On a global scale, however, full waveform tomography is still, and will continue to be, limited to longer length scales due to the large computational costs. Normal mode tomography provides an alternative fast full waveform approach for imaging seismic structures in a global way. Normal modes are not limited by the poor station-earthquake distribution and provide sensitivity to density structure. Using normal modes, a more robust long wavelength background model can be obtained, leading to more accurate absolute velocity models for tectonic and mineral physics interpretations. In addition, it is vital to combine all seismic data types across accessible periods to obtain a more complete, consistent and interpretable image of the Earth's interior. Here, we aim to combine the globally sensitive long period normal modes with shorter period full waveform modelling within the multi-scale framework of the Comprehensive Seismic Earth Model (CSEM). The multi-scale inversion framework of the CSEM allows exploitation of the full waveform capacity on both sides of the seismic spectrum. As the CSEM includes high-resolution subregions with velocity variations at much shorter wavelengths than normal modes can constrain, the question arises whether these small-scale variations are noticeable in normal mode data, and which modes respond in particular. We report here on experiments in which we address these questions. We separately investigate the effects of small-scale variations in shear-wave velocity and compressional wave velocity compared to the
Ye, Ming; Neuman, Shlomo P.; Meyer, Philip D.; Pohlmann, Karl
2005-12-24
Previous application of Maximum Likelihood Bayesian Model Averaging (MLBMA, Neuman [2002, 2003]) to alternative variogram models of log air permeability data in fractured tuff has demonstrated its effectiveness in quantifying conceptual model uncertainty and enhancing predictive capability [Ye et al., 2004]. A question remained how best to ascribe prior probabilities to competing models. In this paper we examine the extent to which lead statistics of posterior log permeability predictions are sensitive to prior probabilities of seven corresponding variogram models. We then explore the feasibility of quantifying prior model probabilities by (a) maximizing Shannon's entropy H [Shannon, 1948] subject to constraints reflecting a single analyst's (or a group of analysts') prior perception about how plausible each alternative model (or a group of models) is relative to others, and (b) selecting a posteriori the most likely among such maxima corresponding to alternative prior perceptions of various analysts or groups of analysts. Another way to select among alternative prior model probability sets, which however is not guaranteed to yield optimum predictive performance (though it did so in our example) and would therefore not be our preferred option, is a min-max approach according to which one selects a priori the set corresponding to the smallest value of maximum entropy. Whereas maximizing H subject to the prior perception of a single analyst (or group) maximizes the potential for further information gain through conditioning, selecting the smallest among such maxima gives preference to the most informed prior perception among those of several analysts (or groups). We use the same variogram models and log permeability data as Ye et al. [2004] to demonstrate that our proposed approach yields the least amount of posterior entropy (residual uncertainty after conditioning) and enhances predictive model performance as compared to (a) the non-informative neutral case in
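The constrained maximum-entropy step can be illustrated with a toy version of the problem. Suppose an analyst fixes the prior probability of one of the seven variogram models (the 0.4 figure below is invented); concavity of Shannon's entropy then makes the uniform distribution over the remaining models the constrained maximum, which the sketch checks numerically.

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p_i ln p_i."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Toy constraint: the analyst fixes model 1's prior probability at 0.4
# (seven models, as for the variogram set in the paper; 0.4 is invented).
m, p1 = 7, 0.4
maxent_prior = [p1] + [(1 - p1) / (m - 1)] * (m - 1)  # uniform over the rest

# Numerical check: moving mass between two unconstrained models lowers H,
# so uniform-over-the-rest is the constrained entropy maximum.
perturbed = maxent_prior.copy()
perturbed[1] += 0.05
perturbed[2] -= 0.05
```

Comparing several analysts then amounts to comparing the maximal H values obtained under each analyst's constraints, as the abstract's min-max discussion describes.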
Franceschetti, Donald R; Gire, Elizabeth
2013-06-01
Quantum probability theory offers a viable alternative to classical probability, although there are some ambiguities inherent in transferring the quantum formalism to a less determined realm. A number of physicists are now looking at the applicability of quantum ideas to the assessment of physics learning, an area particularly suited to quantum probability ideas.
Construct Reliability of the Probability of Adoption of Change (PAC) Model.
ERIC Educational Resources Information Center
Creamer, E. G.; And Others
1991-01-01
Describes Probability of Adoption of Change (PAC) model, theoretical paradigm for explaining likelihood of successful adoption of planned change initiatives in student affairs. Reports on PAC construct reliability from survey of 62 Chief Student Affairs Officers. Discusses two refinements to the model: merger of leadership and top level support…
Supply Responses of the Unemployed: A Probability Model of Reemployment. Final Report.
ERIC Educational Resources Information Center
Toikka, Richard S.
The subject of this study is the process by which unemployed workers are reemployed after a layoff. A theoretical model of job search is formulated with asking price based on number of job vacancies, distribution of offers, value placed on nonmarket activity, cost of search activity, and interest rate. A probability model of reemployment is…
A Monte Carlo model for determining copperhead probability of acquisition and maneuver
NASA Astrophysics Data System (ADS)
Starks, M.
1980-08-01
This report documents AMSAA's Probability of Acquisition and Maneuver (PAM) model. The model is used to develop performance estimates for COPPERHEAD and related weapon systems. A mathematical method for modeling the acquisition and maneuver portions of a COPPERHEAD trajectory is presented. In addition, the report contains a FORTRAN implementation of the model, a description of the required inputs, and a sample case with input and output.
Modeling and simulation of normal and hemiparetic gait
NASA Astrophysics Data System (ADS)
Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni
2015-09-01
Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper focuses on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Constraint forces from a Rayleigh dissipation function, through which the effect of the tissues on the gait is considered, are included. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulation; anthropometric data for children can also be used, provided existing anthropometric tables are consulted. Validation of the models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal gait, especially hemiparetic gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
ERIC Educational Resources Information Center
So, Tak-Shing Harry; Peng, Chao-Ying Joanne
This study compared the accuracy of predicting two-group membership obtained from K-means clustering with those derived from linear probability modeling, linear discriminant function, and logistic regression under various data properties. Multivariate normally distributed populations were simulated based on combinations of population proportions,…
NASA Technical Reports Server (NTRS)
Deiwert, G. S.; Yoshikawa, K. K.
1975-01-01
A semiclassical model proposed by Pearson and Hansen (1974) for computing collision-induced transition probabilities in diatomic molecules is tested by the direct-simulation Monte Carlo method. Specifically, this model is described by point centers of repulsion for collision dynamics, and the resulting classical trajectories are used in conjunction with the Schroedinger equation for a rigid-rotator harmonic oscillator to compute the rotational energy transition probabilities necessary to evaluate the rotation-translation exchange phenomena. It is assumed that a single, average energy spacing exists between the initial state and possible final states for a given collision.
Suitable models for face geometry normalization in facial expression recognition
NASA Astrophysics Data System (ADS)
Sadeghi, Hamid; Raie, Abolghasem A.
2015-01-01
Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches. Furthermore, it is a crucial challenge in appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of each facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes the geometric variability of facial images; consequently, appearance feature extraction methods can be accurately utilized to represent facial images. Thus, some expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement compared to several state-of-the-art methods in facial expression recognition. Moreover, utilizing the facial expression models with larger mouth and eye regions gives higher accuracy, due to the importance of these regions in facial expression.
Probability of stimulus detection in a model population of rapidly adapting fibers.
Güçlü, Burak; Bolanowski, Stanley J
2004-01-01
The goal of this study is to establish a link between somatosensory physiology and psychophysics at the probabilistic level. The model for a population of monkey rapidly adapting (RA) mechanoreceptive fibers by Güçlü and Bolanowski (2002) was used to study the probability of stimulus detection when a 40 Hz sinusoidal stimulation is applied with a constant contactor size (2 mm radius) on the terminal phalanx. In the model, the detection was assumed to be mediated by one or more active fibers. Two hypothetical receptive field organizations (uniformly random and Gaussian) with varying average innervation densities were considered. At a given stimulus-contactor location, changing the stimulus amplitude generates sigmoid probability-of-detection curves for both receptive field organizations. The psychophysical results superimposed on these probability curves suggest that 5 to 10 active fibers may be required for detection. The effects of the contactor location on the probability of detection reflect the pattern of innervation in the model. However, the psychophysical data do not match the predictions from the populations with uniform or Gaussian distributed receptive field centers. This result may be due to some unknown mechanical factors along the terminal phalanx, or simply because a different receptive field organization is present. It has been reported that human observers can detect a single spike in an RA fiber. By considering the probability of stimulus detection across subjects and RA populations, this article proves that more than one active fiber is indeed required for detection.
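The "m or more active fibers" detection criterion maps onto a simple binomial sketch: if each of N recruited RA fibers independently becomes active with probability p (which in the population model would vary with amplitude and the fiber's distance from the contactor), detection requires at least m active fibers. All numbers below are illustrative simplifications.

```python
from math import comb

def p_detect(n_fibers, p_active, m_required):
    """P(at least m_required of n_fibers are active), assuming
    independent fibers with a common activation probability
    (a simplification of the spatially varying population model)."""
    return sum(comb(n_fibers, k)
               * p_active ** k * (1 - p_active) ** (n_fibers - k)
               for k in range(m_required, n_fibers + 1))
```

Raising m_required shifts and steepens the sigmoid probability-of-detection curve as a function of p_active, which is how candidate detection criteria can be compared against psychophysical data.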
The Probability of the Collapse of the Thermohaline Circulation in an Intermediate Complexity Model
NASA Astrophysics Data System (ADS)
Challenor, P.; Hankin, R.; Marsh, R.
2005-12-01
If the thermohaline circulation were to collapse, we could see very rapid climate changes, with North West Europe becoming much cooler and widespread impacts across the globe. The risk of such an event has two aspects: the first is the impact of a collapse in the circulation, and the second is the probability that it will happen. In this paper we look at the latter problem. In particular, we investigate the probability that the thermohaline circulation will collapse by the end of the century. To calculate the probability of thermohaline collapse we use a Monte Carlo method. We simulate from a climate model with uncertain parameters and estimate the probability from the number of times the model collapses compared to the number of runs. We use an intermediate complexity climate model, C-GOLDSTEIN, which includes a 3-d ocean, an energy balance atmosphere and, in the version we use, a parameterised carbon cycle. Although C-GOLDSTEIN runs quickly for a climate model, it is still too slow to allow the thousands of runs needed for the Monte Carlo calculations. We therefore build an emulator of the model. An emulator is a statistical approximation to the full climate model that gives an estimate of the model output and an uncertainty measure. We use a Gaussian process as our emulator. A limited number of model runs are used to build the emulator, which is then used for the simulations. We produce estimates of the probability of the collapse of the thermohaline circulation corresponding to the indicative SRES emission scenarios: A1, A1FI, A1T, A2, B1 and B2.
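The Monte Carlo step described in this abstract, drawing uncertain parameters, running the model, and counting collapses, can be sketched as follows. The scalar "overturning index" function, its threshold, and the parameter ranges are all invented stand-ins for C-GOLDSTEIN, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a slow climate model: the circulation "collapses"
# when a scalar overturning index falls below a threshold. The index
# formula, threshold, and parameter ranges are hypothetical.
def overturning_index(diffusivity, freshwater_flux):
    return 20.0 - 8.0 * freshwater_flux - 2.0 * diffusivity

n_runs = 10_000
diffusivity = rng.uniform(0.5, 2.5, n_runs)   # uncertain parameter 1
freshwater = rng.normal(1.5, 0.5, n_runs)     # uncertain parameter 2

collapsed = overturning_index(diffusivity, freshwater) < 5.0
p_collapse = collapsed.mean()                 # Monte Carlo probability estimate
se = np.sqrt(p_collapse * (1.0 - p_collapse) / n_runs)  # binomial standard error
```

In the paper the expensive model evaluations are replaced by a Gaussian process emulator; the counting step at the end is the same.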
Fitting the distribution of dry and wet spells with alternative probability models
NASA Astrophysics Data System (ADS)
Deni, Sayang Mohd; Jemain, Abdul Aziz
2009-06-01
The development of the rainfall occurrence model is greatly important not only for data-generation purposes, but also in providing informative resources for future advancements in water-related sectors, such as water resource management and the hydrological and agricultural sectors. Various kinds of probability models have been introduced for sequences of dry (wet) days by previous researchers in the field. Building on these models, the present study proposes three types of mixture distributions, namely, the mixture of two log series distributions (LSD), the mixture of the log series and Poisson distributions (MLPD), and the mixture of the log series and geometric distributions (MLGD), as alternative probability models to describe the distribution of dry (wet) spells in daily rainfall events. In order to test the performance of the proposed new models against the nine other existing probability models, 54 data sets which had been published by several authors were reanalyzed in this study. In addition, new data sets of daily observations from six selected rainfall stations in Peninsular Malaysia for the period 1975-2004 were used. In determining the best fitting distribution to describe the observed distribution of dry (wet) spells, a chi-square goodness-of-fit test was used. The results revealed that the newly proposed MLGD and MLPD models showed a better fit, as more than half of the data sets were successfully fitted by these distributions of dry and wet spells. However, the existing models, such as the truncated negative binomial and the modified LSD, were also among the successful probability models in representing the sequence of dry (wet) days in daily rainfall occurrence.
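A minimal version of the chi-square goodness-of-fit workflow can be sketched with a single geometric distribution fitted to synthetic dry-spell lengths (the data are simulated and the single-distribution choice is illustrative; the paper fits mixture models):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic dry-spell lengths in days; a real analysis would use the
# observed run lengths of consecutive dry days at a station.
spells = rng.geometric(p=0.3, size=500)

# Method-of-moments fit of a geometric distribution: p_hat = 1 / mean
p_hat = 1.0 / spells.mean()

# Observed vs expected counts for spell lengths 1..6, pooling the tail
k = np.arange(1, 7)
observed = np.array([(spells == v).sum() for v in k], dtype=float)
tail_obs = (spells > 6).sum()
expected = stats.geom.pmf(k, p_hat) * spells.size
tail_exp = stats.geom.sf(6, p_hat) * spells.size

chi2_stat = ((observed - expected) ** 2 / expected).sum() \
            + (tail_obs - tail_exp) ** 2 / tail_exp
# 7 cells, 1 estimated parameter -> df = 7 - 1 - 1 = 5
p_value = stats.chi2.sf(chi2_stat, df=5)
```

The same observed-vs-expected comparison applies to each candidate distribution; the best-fitting model is the one whose test is not rejected across the most data sets.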
Coupled escape probability for an asymmetric spherical case: Modeling optically thick comets
Gersch, Alan M.; A'Hearn, Michael F.
2014-05-20
We have adapted Coupled Escape Probability, a new exact method of solving radiative transfer problems, for use in asymmetrical spherical situations. Our model is intended specifically for use in modeling optically thick cometary comae, although not limited to such use. This method enables the accurate modeling of comets' spectra even in the potentially optically thick regions nearest the nucleus, such as those seen in Deep Impact observations of 9P/Tempel 1 and EPOXI observations of 103P/Hartley 2.
NASA Technical Reports Server (NTRS)
Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry
2009-01-01
In this experiment, an empirical model was developed to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage. This empirical model can be used to improve existing risk simulation models. FIB and TEM images confirm a rare polycrystalline structure in one of the three whiskers studied. FIB cross-sections of the card guides verified that the tin finish was bright tin.
Results from probability-based, simplified, off-shore Louisiana CSEM hydrocarbon reservoir modeling
NASA Astrophysics Data System (ADS)
Stalnaker, J. L.; Tinley, M.; Gueho, B.
2009-12-01
Perhaps the biggest impediment to the commercial application of controlled-source electromagnetic (CSEM) geophysics to marine hydrocarbon exploration is the inefficiency of modeling and data inversion. If an understanding of the typical (in a statistical sense) geometrical and electrical nature of a reservoir can be attained, then it is possible to derive therefrom a simplified yet accurate model of the electromagnetic interactions that produce a measured marine CSEM signal, leading ultimately to efficient modeling and inversion. We have compiled geometric and resistivity measurements from roughly 100 known, producing off-shore Louisiana Gulf of Mexico reservoirs. Recognizing that most reservoirs could be recreated roughly from a sectioned hemi-ellipsoid, we devised a unified, compact reservoir geometry description. Each reservoir was initially fit to the ellipsoid by eye, though we plan in the future to perform a more rigorous least-squares fit. We created, using kernel density estimation, initial probabilistic descriptions of reservoir parameter distributions, with the understanding that additional information would not fundamentally alter our results, but rather increase accuracy. From the probabilistic description, we designed an approximate model consisting of orthogonally oriented current segments distributed across the ellipsoid--enough to define the shape, yet few enough to be resolved during inversion. The moment and length of the currents are mapped to the geometry and resistivity of the ellipsoid. The probability density functions (pdfs) derived from reservoir statistics serve as a workbench. We first use the pdfs in a Monte Carlo simulation designed to assess the detectability of off-shore Louisiana reservoirs using magnitude versus offset (MVO) anomalies. From the pdfs, many reservoir instances are generated (using rejection sampling) and each normalized MVO response is calculated. The response strength is summarized by numerically computing MVO power, and that
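The kernel-density-estimation and rejection-sampling steps mentioned above can be sketched in one dimension, using an invented reservoir-thickness variable in place of the full multi-parameter geometry description:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical reservoir thickness measurements (m); a stand-in for
# the ~100 compiled Gulf of Mexico reservoir geometries.
thickness = rng.lognormal(mean=2.5, sigma=0.5, size=100)

kde = gaussian_kde(thickness)          # kernel density estimate of the pdf

# Rejection sampling from the KDE on a bounding box. The envelope
# height is taken as the maximum of the KDE on a dense grid, which is
# only an approximate (but adequate) bound for a smooth density.
lo, hi = thickness.min(), thickness.max()
grid = np.linspace(lo, hi, 512)
m = kde(grid).max()

samples = []
while len(samples) < 1000:
    x = rng.uniform(lo, hi)            # proposal from the uniform box
    if rng.uniform(0.0, m) < kde(x)[0]:
        samples.append(x)              # accept with probability f(x)/m
samples = np.array(samples)
```

Each accepted sample plays the role of one "reservoir instance" whose MVO response would then be computed.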
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Glosup, J.G.; Axelrod, M.C.
1994-11-15
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
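The basic comparison, a single Gaussian versus a Gaussian mixture fitted by EM, judged by AIC, can be sketched with scikit-learn's `GaussianMixture` on synthetic data (this uses the standard AIC, not the paper's EM-adapted modification, and does not touch the infinite-mixture Class A model):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Synthetic data from two well-separated components, so the mixture
# model should win the AIC comparison.
data = np.concatenate([rng.normal(-2.0, 1.0, 300),
                       rng.normal(3.0, 1.0, 300)]).reshape(-1, 1)

single = GaussianMixture(n_components=1).fit(data)                 # plain Gaussian
mixture = GaussianMixture(n_components=2, random_state=0).fit(data)  # EM fit

aic_single = single.aic(data)
aic_mixture = mixture.aic(data)
best = "mixture" if aic_mixture < aic_single else "single"
```

Lower AIC wins; because the models are non-nested, a likelihood-ratio test would not be directly applicable here, which is the motivation for an AIC-style criterion.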
Application of damping mechanism model and stacking fault probability in Fe-Mn alloy
Huang, S.K.; Wen, Y.H.; Li, N.; Teng, J.; Ding, S.; Xu, Y.G.
2008-06-15
In this paper, the damping mechanism model of Fe-Mn alloy was analyzed using dislocation theory. Moreover, as an important parameter in Fe-Mn based alloys, the effect of stacking fault probability on the damping capacity of Fe-19.35Mn alloy after deep-cooling or tensile deformation was also studied. The damping capacity was measured using a reversal torsion pendulum. The stacking fault probability of γ-austenite and ε-martensite was determined by means of X-ray diffraction (XRD) profile analysis. The microstructure was observed using a scanning electron microscope (SEM). The results indicated that as the strain amplitude increased above a critical value, the damping capacity of Fe-19.35Mn alloy increased rapidly, which could be explained using the breakaway model of Shockley partial dislocations. Deep-cooling and suitable tensile deformation could improve the damping capacity owing to the increase in stacking fault probability of Fe-19.35Mn alloy.
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
Durazzo, Timothy C; Korecka, Magdalena; Trojanowski, John Q; Weiner, Michael W; O' Hara, Ruth; Ashford, John W; Shaw, Leslie M
2016-07-25
Neurodegenerative diseases and chronic cigarette smoking are associated with increased cerebral oxidative stress (OxS). Elevated F2-isoprostane levels in biological fluid are a recognized marker of OxS. This study assessed the association of active cigarette smoking with F2-isoprostane concentrations in cognitively-normal elders (CN) and in those with mild cognitive impairment (MCI) and probable Alzheimer's disease (AD). Smoking and non-smoking CN (n = 83), MCI (n = 164), and probable AD (n = 101) were compared on cerebrospinal fluid (CSF) iPF2α-III and 8,12-iso-iPF2α-VI F2-isoprostane concentrations. Associations between F2-isoprostane levels and hippocampal volumes were also evaluated. In CN and AD, smokers had higher iPF2α-III concentrations; overall, smoking AD showed the highest iPF2α-III concentration across groups. Smoking and non-smoking MCI did not differ on iPF2α-III concentration. No group differences were apparent in 8,12-iso-iPF2α-VI concentration, but across AD, a higher 8,12-iso-iPF2α-VI level was related to smaller left and total hippocampal volumes. Results indicate that active cigarette smoking in CN and probable AD is associated with increased central nervous system OxS. Further investigation of factors mediating/moderating the absence of smoking effects on CSF F2-isoprostane levels in MCI is warranted. In AD, increasing magnitude of OxS appeared to be related to smaller hippocampal volume. This study contributes additional novel information to the mounting body of evidence that cigarette smoking is associated with adverse effects on the human central nervous system across the lifespan. PMID:27472882
Impact on the Extreme Value of Ice Thickness of Conductors From Probability Distribution Models
NASA Astrophysics Data System (ADS)
Gao, Ke-Li; Yang, Jia-Lun; Zhu, Kuan-Jun; Zhang, Feng; Cheng, Yong-Feng; Liu, Bin; Liu, Cao-Lan; Gao, Zheng-Xu
Probability distribution models can affect the extreme value of standard ice thickness on conductors of transmission lines for different return periods. This paper discusses the impact of three probability distribution models on the calculation of standard ice thickness: the Pearson III distribution model (P-III), the generalized extreme value distribution model (GEV), and the generalized Pareto distribution model (GPD). Historic icing data were collected on Lvcongpo Mountain in Hubei province, including icing data from the north-south direction, the west-east direction, the larger of the two directional values, and the smaller of the two directional values. The results indicate that GPD is more suitable than P-III and GEV for the icing data collected from Lvcongpo Mountain, which is helpful for reasonably determining the design ice thickness of conductors of transmission lines.
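The return-period calculation behind this comparison can be sketched with one of the three candidates, a GEV fit to synthetic annual-maximum ice thickness, followed by reading off the 50-year return level as the (1 - 1/50) quantile (the data and parameter values are invented; the paper also fits P-III and GPD):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Synthetic annual-maximum ice thickness (mm); a stand-in for the
# Lvcongpo Mountain icing records.
annual_max = genextreme.rvs(c=-0.1, loc=10.0, scale=3.0,
                            size=60, random_state=rng)

# Maximum-likelihood GEV fit; the T-year return level is the
# (1 - 1/T) quantile of the fitted distribution.
c, loc, scale = genextreme.fit(annual_max)
return_50yr = genextreme.ppf(1.0 - 1.0 / 50.0, c, loc, scale)
```

Repeating the fit with `scipy.stats.genpareto` (on threshold exceedances) or a Pearson III distribution and comparing goodness of fit is the model-selection step the abstract describes.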
NASA Astrophysics Data System (ADS)
Tian, Chuan; Sun, Di-Hua
2010-12-01
Considering the effects that the probability of traffic interruption and the friction between two lanes have on car-following behaviour, this paper establishes a new two-lane microscopic car-following model. Based on this microscopic model, a new macroscopic model is deduced via the relations between microscopic and macroscopic scale parameters for two-lane traffic flow. Terms related to lane changing are added to the continuity equations and velocity dynamic equations to investigate the lane change rate. Numerical results verify that the proposed model can be efficiently used to reflect the effect of the probability of traffic interruption on shocks, rarefaction waves and lane change behaviour on two-lane freeways. The model has also been applied to reproducing some complex traffic phenomena caused by traffic accident interruption.
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and therefore, recommend use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
Modelling detection probabilities to evaluate management and control tools for an invasive species
Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.
2010-01-01
For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days' duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice those of search teams with low average detection rates. Surveys conducted under bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By
Zhu, Lin; Dai, Zhenxue; Gong, Huili; Gable, Carl; Teatini, Pietro
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
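The core objects of a transition probability model, facies mean lengths from the matrix diagonal and volumetric proportions from the stationary distribution, can be illustrated with a small hypothetical matrix (the three facies and all numbers below are invented, not the Chaobai River estimates):

```python
import numpy as np

# Hypothetical cell-to-cell transition probability matrix for three
# hydrofacies (e.g. gravel, sand, clay); each row sums to 1.
T = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Mean lens length (in model cells) of facies k is 1 / (1 - T[k, k]):
# the expected run length of a state in a Markov chain.
mean_lengths = 1.0 / (1.0 - np.diag(T))

# Volumetric proportions follow from the stationary distribution: the
# left eigenvector of T with eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(T.T)
stat = np.real(vecs[:, np.argmax(np.real(vals))])
proportions = stat / stat.sum()
```

In the study these quantities are not fixed by hand but estimated zone by zone through the statistical inversion, with the Markov-chain structure linking them exactly as above.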
Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J
2015-12-01
The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, seasons, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by seasons, treatment type, precipitations and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs).
Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.
2009-01-01
Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls, which are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
Modelling secondary microseismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, L.; Stutzmann, E.; Capdeville, Y.; Ardhuin, F.; Schimmel, M.; Mangeney, A.; Morelli, A.
2013-06-01
Secondary microseisms recorded by seismic stations are generated in the ocean by the interaction of ocean gravity waves. We present here the theory for modelling secondary microseismic noise by normal mode summation. We show that the noise sources can be modelled by vertical forces and how to derive them from a realistic ocean wave model. We then show how to compute the bathymetry excitation effect in a realistic earth model by using normal modes, with a comparison to the Longuet-Higgins approach. The strongest excitation areas in the oceans depend on the bathymetry and period and are different for each seismic mode. Seismic noise is then modelled by normal mode summation considering varying bathymetry. We derive an attenuation model that enables a good fit to the vertical component spectra whatever the station location. We show that the fundamental mode of Rayleigh waves is the dominant signal in seismic noise. There is a discrepancy between real and synthetic spectra on the horizontal components that enables us to estimate the amount of Love waves, for which a different source mechanism is needed. Finally, we investigate noise generated in all the oceans around Africa and show that most of the noise recorded in Algeria (TAM station) is generated in the Northern Atlantic, and that there is a seasonal variability in the contribution of each ocean and sea.
ERIC Educational Resources Information Center
Calvert, Carol Elaine
2014-01-01
This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…
Blind Students' Learning of Probability through the Use of a Tactile Model
ERIC Educational Resources Information Center
Vita, Aida Carvalho; Kataoka, Verônica Yumi
2014-01-01
The objective of this paper is to discuss how blind students learn basic concepts of probability using the tactile model proposed by Vita (2012). The activities were part of the teaching sequence "Jefferson's Random Walk", in which students built a tree diagram (using plastic trays, foam cards, and toys), and pictograms in 3D…
Dong, Jing; Mahmassani, Hani S.
2011-01-01
This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model, by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
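The endogenous breakdown mechanism can be sketched as follows: in each interval, breakdown occurs with a flow-dependent probability, and its duration is drawn from a hazard model (Weibull here). The logistic and Weibull parameters are illustrative, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical breakdown probability as a logistic function of flow
# rate (veh/h/lane); the 2000 veh/h midpoint and 150 scale are invented.
def breakdown_prob(flow):
    return 1.0 / (1.0 + np.exp(-(flow - 2000.0) / 150.0))

# Weibull hazard model for breakdown duration (minutes); shape > 1
# means the chance of recovery rises the longer the breakdown lasts.
def sample_duration(rng, shape=1.5, scale=20.0):
    return scale * rng.weibull(shape)

# Simulate one hour of demand in 5-min intervals
flows = np.array([1800, 1900, 2000, 2100, 2200, 2300] * 2, dtype=float)
breakdowns = rng.uniform(size=flows.size) < breakdown_prob(flows)
durations = [sample_duration(rng) for _ in range(int(breakdowns.sum()))]
```

Repeating such runs many times yields a distribution of travel times, from which reliability measures can be computed.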
Schmidt, W.; Niemeyer, J. C.; Ciaraldi-Schoolmann, F.; Roepke, F. K.; Hillebrandt, W.
2010-02-20
The delayed detonation model describes the observational properties of the majority of Type Ia supernovae very well. Using numerical data from a three-dimensional deflagration model for Type Ia supernovae, the intermittency of the turbulent velocity field and its implications for the probability of a deflagration-to-detonation (DDT) transition are investigated. From structure functions of the turbulent velocity fluctuations, we determine intermittency parameters based on the log-normal and the log-Poisson models. The bulk of turbulence in the ash regions appears to be less intermittent than predicted by the standard log-normal model and the She-Leveque model. On the other hand, the analysis of the turbulent velocity fluctuations in the vicinity of the flame front by Roepke suggests a much higher probability of large velocity fluctuations on the grid scale in comparison to the log-normal intermittency model. Following Pan et al., we computed probability density functions for a DDT for the different distributions. The determination of the total number of regions at the flame surface in which DDTs can be triggered enables us to estimate the total number of events. Assuming that a DDT can occur in the stirred flame regime, as proposed by Woosley et al., the log-normal model would imply a delayed detonation between 0.7 and 0.8 s after the beginning of the deflagration phase for the multi-spot ignition scenario used in the simulation. However, the probability drops to virtually zero if a DDT is further constrained by the requirement that the turbulent velocity fluctuations reach about 500 km s^-1. Under this condition, delayed detonations are only possible if the distribution of the velocity fluctuations is not log-normal. From our calculations it follows that the distribution obtained by Roepke allows for multiple DDTs around 0.8 s after ignition at a transition density close to 1 × 10^7 g cm^-3.
Logit-normal mixed model for Indian Monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-03-01
Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, and daily minimum and maximum temperatures, and a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
Rate coefficients, binding probabilities, and related quantities for area reactivity models
Prüstel, Thorsten; Meier-Schellersheim, Martin
2014-01-01
We further develop the general theory of the area reactivity model that describes the diffusion-influenced reaction of an isolated receptor-ligand pair in terms of a generalized Feynman-Kac equation and that provides an alternative to the classical contact reactivity model. Analyzing both the irreversible and reversible reaction, we derive the equation of motion of the survival probability as well as several relationships between single pair quantities and the reactive flux at the encounter distance. Building on these relationships, we derive the equation of motion of the many-particle survival probability for irreversible pseudo-first-order reactions. Moreover, we show that the usual definition of the rate coefficient as the reactive flux is deficient in the area reactivity model. Numerical tests for our findings are provided through Brownian Dynamics simulations. We calculate exact and approximate expressions for the irreversible rate coefficient and show that this quantity behaves differently from its classical counterpart. Furthermore, we derive approximate expressions for the binding probability as well as the average lifetime of the bound state and discuss on- and off-rates in this context. Throughout our approach, we point out similarities and differences between the area reactivity model and its classical counterpart, the contact reactivity model. The presented analysis and obtained results provide a theoretical framework that will facilitate the comparison of experiment and model predictions. PMID:25416882
Assessment of uncertainty in chemical models by Bayesian probabilities: Why, when, how?
Sahlin, Ullrika
2015-07-01
A prediction of a chemical property or activity is subject to uncertainty. Which types of uncertainty to consider, whether to account for them in a differentiated manner, and with which methods, depend on the practical context. In chemical modelling, general guidance on the assessment of uncertainty is hindered by the wide variety of underlying modelling algorithms, high-dimensionality problems, the acknowledgement of both qualitative and quantitative dimensions of uncertainty, and the fact that statistics offers alternative principles for uncertainty quantification. Here, a view of the assessment of uncertainty in predictions is presented with the aim of overcoming these issues. The assessment sets out to quantify uncertainty representing error in predictions and is based on probability modelling of errors where uncertainty is measured by Bayesian probabilities. Even though well motivated, the choice to use Bayesian probabilities is a challenge to statistics and chemical modelling. Fully Bayesian modelling, Bayesian meta-modelling and bootstrapping are discussed as possible approaches. Deciding how to assess uncertainty is an active choice, and should not be constrained by traditions or lack of validated and reliable ways of doing it.
Multistate modeling of habitat dynamics: Factors affecting Florida scrub transition probabilities
Breininger, D.R.; Nichols, J.D.; Duncan, B.W.; Stolen, Eric D.; Carter, G.M.; Hunt, D.K.; Drese, J.H.
2010-01-01
Many ecosystems are influenced by disturbances that create specific successional states and habitat structures that species need to persist. Estimating transition probabilities between habitat states and modeling the factors that influence such transitions have many applications for investigating and managing disturbance-prone ecosystems. We identify the correspondence between multistate capture-recapture models and Markov models of habitat dynamics. We exploit this correspondence by fitting and comparing competing models of different ecological covariates affecting habitat transition probabilities in Florida scrub and flatwoods, a habitat important to many unique plants and animals. We subdivided a large scrub and flatwoods ecosystem along central Florida's Atlantic coast into 10-ha grid cells, which approximated average territory size of the threatened Florida Scrub-Jay (Aphelocoma coerulescens), a management indicator species. We used 1.0-m resolution aerial imagery for 1994, 1999, and 2004 to classify grid cells into four habitat quality states that were directly related to Florida Scrub-Jay source-sink dynamics and management decision making. Results showed that static site features related to fire propagation (vegetation type, edges) and temporally varying disturbances (fires, mechanical cutting) best explained transition probabilities. Results indicated that much of the scrub and flatwoods ecosystem was resistant to moving from a degraded state to a desired state without mechanical cutting, an expensive restoration tool. We used habitat models parameterized with the estimated transition probabilities to investigate the consequences of alternative management scenarios on future habitat dynamics. We recommend this multistate modeling approach as being broadly applicable for studying ecosystem, land cover, or habitat dynamics. The approach provides maximum-likelihood estimates of transition parameters, including precision measures, and can be used to assess
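Habitat transition probabilities of this kind can be estimated, in the simplest unconstrained case, as row-normalized transition counts between consecutive surveys. A toy sketch follows; the state sequences are invented, and the paper's multistate capture-recapture models additionally handle covariates, which this omits.

```python
from collections import Counter

# Hypothetical state sequences for grid cells over three survey years
# (states 0-3 stand in for the four habitat-quality classes; not real data).
cells = [
    [0, 0, 1], [0, 1, 1], [1, 1, 2], [2, 2, 2],
    [3, 2, 2], [0, 0, 0], [1, 2, 3], [2, 3, 3],
]

counts = Counter()
for seq in cells:
    for a, b in zip(seq, seq[1:]):
        counts[(a, b)] += 1

states = sorted({s for seq in cells for s in seq})
P = {a: {b: 0.0 for b in states} for a in states}
for a in states:
    row_total = sum(counts[(a, b)] for b in states)
    if row_total:
        for b in states:
            P[a][b] = counts[(a, b)] / row_total   # MLE of transition prob.

for a in states:
    print(a, [round(P[a][b], 2) for b in states])
```

Each row of the estimated matrix sums to one; covariate effects on the transitions would be modeled on top of this multinomial likelihood.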
Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis.
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong; Ginzburg, Lev; Berleant, Daniel J.; Ferson, Scott; Hajagos, Janos; Nelsen, Roger B.
2004-10-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the inter-variable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
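When inter-variable dependence is unknown, the Fréchet-Hoeffding inequalities give the best possible bounds on a joint probability from the marginals alone. A minimal sketch of that bounding idea, with invented marginal probabilities:

```python
def joint_probability_bounds(p_a, p_b):
    """Frechet-Hoeffding bounds on P(A and B) when only the
    marginals are known and the dependence is unspecified."""
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    return lower, upper

# Example: two component failures with marginal probabilities 0.3 and 0.4.
lo, hi = joint_probability_bounds(0.3, 0.4)
print(lo, hi)              # prints 0.0 0.3
# Under independence the joint probability would be 0.12, inside the bounds.
```

Any dependence model (a copula, a correlation assumption) narrows the interval; with no information, only this interval can be defended.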
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
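A common concrete choice for a POD curve is a logistic function of log concentration. The sketch below fits such a curve to synthetic detection data by grid search over the binomial deviance; it is illustrative only, not the harmonized AOAC procedure, and all data are simulated.

```python
import math
import random

random.seed(1)

def pod(conc, c50, slope):
    """Logistic POD curve in log10 concentration."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log10(conc) - math.log10(c50))))

# Synthetic study data: detections out of n trials per concentration level.
true_c50, true_slope = 5.0, 4.0
levels = [0.5, 1, 2, 5, 10, 20, 50]
n = 200
data = [(c, sum(random.random() < pod(c, true_c50, true_slope) for _ in range(n)))
        for c in levels]

def deviance(c50, slope):
    ll = 0.0
    for c, k in data:
        p = min(max(pod(c, c50, slope), 1e-12), 1 - 1e-12)
        ll += k * math.log(p) + (n - k) * math.log(1 - p)
    return -2 * ll

best = min(((deviance(c50, s), c50, s)
            for c50 in [x / 10 for x in range(20, 101, 2)]
            for s in [x / 10 for x in range(10, 81, 2)]),
           key=lambda t: t[0])
print("fitted c50 ~", best[1], "slope ~", best[2])
```

Comparing candidate and reference methods then reduces to comparing fitted POD curves (e.g., the concentration at which POD = 0.5).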
Impact of stray charge on interconnect wire via probability model of double-dot system
NASA Astrophysics Data System (ADS)
Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang
2016-02-01
The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. Compared with previous stray-charge analyses based on the ICHA or full-basis calculations, the probability model greatly reduces the computational burden. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire where a stray charge will cause the target cell to switch unsuccessfully. The failure manifests as two new states dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).
Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng
2013-01-01
New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of, and the current shortage of, multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and a bivariate definition of severe dust storms, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. PMID:22616629
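The joint ("both margins exceeded") return period can be sketched with a Gumbel copula, a common model for positively dependent extremes. The dependence parameter and quantiles below are invented, not the Inner Mongolia estimates.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF C(u, v); theta >= 1, theta = 1 is independence."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_and_return_period(u, v, theta, mu=1.0):
    """Return period (in units of mu, the mean interarrival time) of the
    event {wind speed > its u-quantile AND duration > its v-quantile}."""
    p_exceed_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_exceed_both

u = v = 0.98                       # 98th-percentile wind speed and duration
t_uni = 1.0 / (1 - u)              # univariate return period
t_joint = joint_and_return_period(u, v, theta=2.5)
print(round(t_uni, 1), round(t_joint, 1))
```

The joint "AND" return period is always at least as long as the larger univariate one, which is why univariate return periods overstate the frequency of compound events.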
A Probability Model of Decompression Sickness at 4.3 Psia after Exercise Prebreathe
NASA Technical Reports Server (NTRS)
Conkin, Johnny; Gernhardt, Michael L.; Powell, Michael R.; Pollock, Neal
2004-01-01
Exercise PB can reduce the risk of decompression sickness on ascent to 4.3 psia when performed at the proper intensity and duration. Data are from seven tests. PB times ranged from 90 to 150 min. High intensity, short duration dual-cycle ergometry was done during the PB. This was done alone, or combined with intermittent low intensity exercise or periods of rest for the remaining PB. Nonambulating men and women performed light exercise from a semi-recumbent position at 4.3 psia for four hrs. The Research Model, which includes age, tested the hypothesis that the probability of DCS increases with advancing age. The NASA Model, which includes gender, hypothesized that the probability of DCS is higher for women. Accounting for exercise and rest during PB with a variable half-time compartment for computed tissue N2 pressure advances our probability modeling of hypobaric DCS. Both models show that a small increase in exercise intensity during PB reduces the risk of DCS, and a larger increase in exercise intensity dramatically reduces risk. These models support the hypothesis that aerobic fitness is an important consideration for the risk of hypobaric DCS when exercise is performed during the PB.
Gray, David R
2010-12-01
As global trade increases so too does the probability of introduction of alien species to new locations. Estimating the probability of an alien species introduction and establishment following introduction is a necessary step in risk estimation (probability of an event times the consequences, in the currency of choice, of the event should it occur); risk estimation is a valuable tool for reducing the risk of biological invasion with limited resources. The Asian gypsy moth, Lymantria dispar (L.), is a pest species whose consequence of introduction and establishment in North America and New Zealand warrants over US$2 million per year in surveillance expenditure. This work describes the development of a two-dimensional phenology model (GLS-2d) that simulates insect development from source to destination and estimates: (1) the probability of introduction from the proportion of the source population that would achieve the next developmental stage at the destination and (2) the probability of establishment from the proportion of the introduced population that survives until a stable life cycle is reached at the destination. The effect of shipping schedule on the probabilities of introduction and establishment was examined by varying the departure date from 1 January to 25 December by weekly increments. The effect of port efficiency was examined by varying the length of time that invasion vectors (shipping containers and ship) were available for infection. The application of GLS-2d is demonstrated using three common marine trade routes (to Auckland, New Zealand, from Kobe, Japan, and to Vancouver, Canada, from Kobe and from Vladivostok, Russia).
Robust rate-control for wavelet-based image coding via conditional probability models.
Gaubatz, Matthew D; Hemami, Sheila S
2007-03-01
Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented using two invocations of the coder that estimates the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution, as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease down to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.
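The central object in the abstract above is the R-Q curve: rate (entropy of the quantized coefficients) as a function of the dead-zone step size. A toy sketch computing an empirical R-Q curve for a Laplacian source, used here as a stand-in for wavelet coefficients; this is not the paper's probability-model estimator.

```python
import math
import random
from collections import Counter

random.seed(7)

scale = 4.0   # Laplacian scale of the stand-in "wavelet coefficients"
# A Laplace(0, b) sample is the difference of two Exp(rate = 1/b) samples.
coeffs = [random.expovariate(1 / scale) - random.expovariate(1 / scale)
          for _ in range(50_000)]

def rate_bits(step):
    """Empirical first-order entropy (bits/coefficient) of the indices
    produced by dead-zone scalar quantization with step size `step`
    (int() truncates toward zero, giving a double-width zero bin)."""
    bins = Counter(int(c / step) for c in coeffs)
    n = len(coeffs)
    return -sum((k / n) * math.log2(k / n) for k in bins.values())

# Points on the R-Q curve: rate falls monotonically as the quantizer coarsens.
for q in (0.5, 1.0, 2.0, 4.0):
    print(q, round(rate_bits(q), 3))
```

Rate-control then amounts to inverting this monotone curve: given a target rate, find the step size that achieves it, which the paper does from the estimated slope of each R-Q curve.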
NASA Astrophysics Data System (ADS)
Denzler, Stefan M.; Dacorogna, Michel M.; Muller, Ulrich A.; McNeil, Alexander J.
2005-05-01
Credit risk models like Moody's KMV are now well established in the market and give bond managers reliable default probabilities for individual firms. Until now it has been hard to relate those probabilities to the actual credit spreads observed on the market for corporate bonds. Inspired by the scaling laws in financial markets reported by Dacorogna et al. (2001) and Di Matteo et al. (2005), which deviate from Gaussian behavior, we develop a model that quantitatively links those default probabilities to credit spreads (market prices). The main input quantities to this study are merely industry yield data of different times to maturity and expected default frequencies (EDFs) of Moody's KMV. The empirical results of this paper clearly indicate that the model can be used to calculate approximate credit spreads (market prices) from EDFs, independent of the time to maturity and the industry sector under consideration. Moreover, the model is effective in an out-of-sample setting: it produces consistent results on the European bond market where data are scarce and can be adequately used to approximate credit spreads on the corporate level.
Bonilla, L.L.
1987-02-01
A nonlinear Fokker-Planck equation is derived to describe the cooperative behavior of general stochastic systems interacting via mean-field couplings, in the limit of an infinite number of such systems. Disordered systems are also considered. In the weak-noise limit, a general result yields the possibility of having bifurcations from stationary solutions of the nonlinear Fokker-Planck equation into stable time-dependent solutions. The latter are interpreted as nonequilibrium probability distributions (states), and the bifurcations to them as nonequilibrium phase transitions. In the thermodynamic limit, results for three models are given for illustrative purposes. A model of self-synchronization of nonlinear oscillators presents a Hopf bifurcation to a time-periodic probability density, which can be analyzed for any value of the noise. The effects of disorder are illustrated by a simplified version of the Sompolinsky-Zippelius model of spin-glasses. Finally, results for the Fukuyama-Lee-Fisher model of charge-density waves are given. A singular perturbation analysis shows that the depinning transition is a bifurcation problem modified by the disorder noise due to impurities. Far from the bifurcation point, the CDW is either pinned or free, obeying (to leading order) the Gruener-Zawadowski-Chaikin equation. Near the bifurcation, the disorder noise drastically modifies the pattern, giving a quenched average of the CDW current which is constant. Critical exponents are found to depend on the noise, and they are larger than Fisher's values for the two probability distributions considered.
Syntactic error modeling and scoring normalization in speech recognition
NASA Astrophysics Data System (ADS)
Olorenshaw, Lex
1991-03-01
The objective was to develop the speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Research was performed in the following areas: (1) syntactic error modeling; (2) score normalization; and (3) phoneme error modeling. The study into the types of errors that a reader makes will provide the basis for creating tests which will approximate the use of the system in the real world. NASA-Johnson will develop this technology into a 'Literacy Tutor' in order to bring innovative concepts to the task of teaching adults to read.
Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change
NASA Astrophysics Data System (ADS)
Field, R.; Constantine, P.; Boslough, M.
2011-12-01
We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. A SSM is different than a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, the realization of a large number of independent model outputs from a SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, to assess these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for years 1990-1999 based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and made various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model. We
NASA Astrophysics Data System (ADS)
Smith, Leonard A.
2010-05-01
This contribution concerns "deep" or "second-order" uncertainty, such as the uncertainty in our probability forecasts themselves. It asks the question: "Is it rational to take (or offer) bets using model-based probabilities as if they were objective probabilities?" If not, what alternative approaches for determining odds, perhaps non-probabilistic odds, might prove useful in practice, given the fact we know our models are imperfect? We consider the case where the aim is to provide sustainable odds: not to produce a profit but merely to rationally expect to break even in the long run. In other words, to run a quantified risk of ruin that is relatively small. Thus the cooperative insurance schemes of coastal villages provide a more appropriate parallel than a casino. A "better" probability forecast would lead to lower premiums charged and less volatile fluctuations in the cash reserves of the village. Note that the Bayesian paradigm does not constrain one to interpret model distributions as subjective probabilities, unless one believes the model to be empirically adequate for the task at hand. In geophysics, this is rarely the case. When a probability forecast is interpreted as the objective probability of an event, the odds on that event can be easily computed as one divided by the probability of the event, and one need not favour taking either side of the wager. (Here we are using "odds-for" not "odds-to", the difference being whether or not the stake is returned; odds of one to one are equivalent to odds of two for one.) The critical question is how to compute sustainable odds based on information from imperfect models. We suggest that this breaks the symmetry between the odds-on an event and the odds-against it. While a probability distribution can always be translated into odds, interpreting the odds on a set of events might result in "implied-probabilities" that sum to more than one. And/or the set of odds may be incomplete, not covering all events. We ask
How to model a negligible probability under the WTO sanitary and phytosanitary agreement?
Powell, Mark R
2013-06-01
Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. PMID:22985254
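The Panel's arithmetic can be checked directly with a short Monte Carlo using only the Python standard library; the sample size is arbitrary.

```python
import random

random.seed(0)
N = 200_000
cap = 1e-6   # the maximum "negligible" probability in both distributions

# Uniform(0, 1e-6): the distribution used in the 2006 Australian IRA.
uni = sum(random.uniform(0.0, cap) for _ in range(N)) / N
# Triangular(low=0, high=1e-6, mode=0): the Panel's suggested correction.
tri = sum(random.triangular(0.0, cap, 0.0) for _ in range(N)) / N

print(f"uniform mean    ~ {uni:.2e}")   # analytic value: 5.0e-07
print(f"triangular mean ~ {tri:.2e}")   # analytic value: 1e-6 / 3 = 3.33e-07
```

The triangular mean (a + b + c) / 3 = 10⁻⁶ / 3 is only modestly below the uniform midpoint of 5 × 10⁻⁷, which is the abstract's point about the size of the bias.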
A Markov chain probability model to describe wet and dry patterns of weather at Colombo
NASA Astrophysics Data System (ADS)
Sonnadara, D. U. J.; Jayewardene, D. R.
2015-01-01
The hypothesis that the wet and dry patterns of daily precipitation observed in Colombo can be modeled by a first order Markov chain model was tested using daily rainfall data for a 60-year period (1941-2000). The probability of a day being wet or dry was defined with respect to the status of the previous day. Probabilities were assumed to be stationary within a given month. Except for isolated single events, the model is shown to describe the observed sequence of wet and dry spells satisfactorily depending on the season. The accuracy of modeling wet spells is high compared to dry spells. When the model-predicted mean length of wet spells for each month was compared with the estimated values from the data set, a reasonable agreement between model predictions and estimates was seen (within ±0.1). In general, the data show a higher disagreement for the months having longer dry spells. The mean annual duration of wet spells is 2.6 days while the mean annual duration of dry spells is 3.8 days. It is shown that the model can be used to explore the return periods of long wet and dry spells. We conclude from the study that the Markov chain of order 1 is adequate to describe wet and dry patterns of weather in Colombo.
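The first-order chain and the spell-length arithmetic can be sketched directly: estimate p(wet|wet) and p(wet|dry) from a daily sequence, after which spell lengths are geometric. The sequence below is invented, not the Colombo record.

```python
# 1 = wet day, 0 = dry day; a short illustrative sequence, not real data.
days = [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0]

pairs = list(zip(days, days[1:]))
p_ww = sum(1 for a, b in pairs if a == 1 and b == 1) / \
       sum(1 for a, _ in pairs if a == 1)          # P(wet tomorrow | wet today)
p_dw = sum(1 for a, b in pairs if a == 0 and b == 1) / \
       sum(1 for a, _ in pairs if a == 0)          # P(wet tomorrow | dry today)

# Spell lengths are geometric under a first-order chain.
mean_wet_spell = 1.0 / (1.0 - p_ww)
mean_dry_spell = 1.0 / p_dw

print(round(p_ww, 3), round(mean_wet_spell, 2), round(mean_dry_spell, 2))
```

In the study these transition probabilities are estimated per month (stationarity within a month), and return periods of long spells follow from the geometric tail P(length > k) = p_ww^k.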
Kazui, Hiroaki; Kanemoto, Hideki; Yoshiyama, Kenji; Kishima, Haruhiko; Suzuki, Yukiko; Sato, Shunsuke; Suehiro, Takashi; Azuma, Shingo; Yoshimine, Toshiki; Tanaka, Toshihisa
2016-10-15
We examined the effect of the pathology of Alzheimer's disease (AD) on improvement of clinical symptoms after shunt surgery in patients with idiopathic normal pressure hydrocephalus (iNPH). Forty-four iNPH patients were classified into 18 patients with (iNPH/AD+) and 26 patients without (iNPH/AD-) the combination of low amyloid β42 and high total tau in cerebrospinal fluid (CSF). We compared improvements after lumbo-peritoneal shunt surgery (LPS) between the two groups in Timed Up & Go Test, 10-m reciprocating walking test, Digit Symbol Substitution Test, attention test, delayed recall test, Mini-Mental State Examination, iNPH grading scale, Neuropsychiatric Inventory, Zarit Burden Interview, and other evaluations. Three months after LPS, gait, urination, overall cognition, psychomotor speed, attention, and neuropsychiatric symptoms significantly improved in both groups, but the improvement in delayed recall and reduction of caregiver burden were significantly greater in iNPH/AD- than iNPH/AD+. In addition, improvement in delayed recall score after LPS was significantly and negatively correlated with the probability of AD as judged by amyloid β42 and total tau levels in CSF. Three months after LPS, almost all of the triad symptoms decreased in iNPH patients with and without AD pathology but memory improved only in iNPH patients without AD pathology. PMID:27653897
Kurugol, Sila; Freiman, Moti; Afacan, Onur; Perez-Rossello, Jeannette M; Callahan, Michael J; Warfield, Simon K
2016-08-01
Quantitative diffusion-weighted MR imaging (DW-MRI) of the body enables characterization of the tissue microenvironment by measuring variations in the mobility of water molecules. The diffusion signal decay model parameters are increasingly used to evaluate various diseases of abdominal organs such as the liver and spleen. However, previous signal decay models (i.e., mono-exponential, bi-exponential intra-voxel incoherent motion (IVIM) and stretched exponential models) only provide insight into the average of the distribution of the signal decay rather than explicitly describe the entire range of diffusion scales. In this work, we propose a probability distribution model of incoherent motion that uses a mixture of Gamma distributions to fully characterize the multi-scale nature of diffusion within a voxel. Further, we improve the robustness of the distribution parameter estimates by integrating spatial homogeneity prior into the probability distribution model of incoherent motion (SPIM) and by using the fusion bootstrap solver (FBM) to estimate the model parameters. We evaluated the improvement in quantitative DW-MRI analysis achieved with the SPIM model in terms of accuracy, precision and reproducibility of parameter estimation in both simulated data and in 68 abdominal in-vivo DW-MRIs. Our results show that the SPIM model not only substantially reduced parameter estimation errors by up to 26%; it also significantly improved the robustness of the parameter estimates (paired Student's t-test, p < 0.0001) by reducing the coefficient of variation (CV) of estimated parameters compared to those produced by previous models. In addition, the SPIM model improves the parameter estimates reproducibility for both intra- (up to 47%) and inter-session (up to 30%) estimates compared to those generated by previous models. Thus, the SPIM model has the potential to improve accuracy, precision and robustness of quantitative abdominal DW-MRI analysis for clinical applications. PMID
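For Gamma-distributed diffusivities the signal decay has a closed form, the Laplace transform of the Gamma distribution: S(b)/S0 = (1 + b·theta)^(-k). A sketch comparing it with the mono-exponential decay at the same mean diffusivity; the parameters are invented, and the SPIM model's Gamma mixture and spatial prior are not reproduced here.

```python
import math

k, theta = 2.0, 1.0e-3          # shape, scale (mm^2/s); mean D = k * theta
mean_D = k * theta

def gamma_mixture_signal(b):
    """Closed-form decay for Gamma-distributed diffusivities:
    E[exp(-b D)], the Laplace transform of Gamma(k, theta) at b."""
    return (1.0 + b * theta) ** (-k)

def mono_exponential_signal(b):
    """Single diffusivity fixed at the Gamma mean."""
    return math.exp(-b * mean_D)

for b in (0, 200, 500, 1000):   # b-values in s/mm^2
    print(b, round(gamma_mixture_signal(b), 4),
             round(mono_exponential_signal(b), 4))
```

By Jensen's inequality E[exp(-bD)] ≥ exp(-b E[D]), so the mixture signal lies above the mono-exponential at every b > 0, which is the multi-scale tail behavior a single decay rate cannot capture.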
Construction of Coarse-Grained Models by Reproducing Equilibrium Probability Density Function
NASA Astrophysics Data System (ADS)
Lu, Shi-Jing; Zhou, Xin
2015-01-01
The present work proposes a novel methodology for constructing coarse-grained (CG) models that aims at minimizing the difference between the CG model and the corresponding original system. The difference is defined as a functional of their equilibrium conformational probability densities and is estimated from equilibrium averages of many independent physical quantities, denoted as basis functions. An orthonormalization strategy is adopted to obtain independent basis functions from a sufficiently large set of preselected physical quantities of interest. The method is therefore named the probability density matching coarse-graining (PMCG) scheme; it effectively takes the overall characteristics of the original system into account when constructing the CG model, and it is a natural improvement of the usual CG schemes in which some physical quantities are chosen intuitively without considering their correlations. We verify the general PMCG framework by constructing a one-site CG water model from the TIP3P model. Both the liquid structure and the pressure of the TIP3P water system are found to be well reproduced at the same time in the constructed CG model.
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results.
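For intuition, a stabilized weight for a single time point is the ratio of a marginal to a conditional treatment probability, sw = P(A = a) / P(A = a | L = l). A toy sketch with both probabilities estimated nonparametrically from cell counts, as a hypothetical stand-in for the logistic-regression, SL, or EL estimators discussed above:

```python
from collections import Counter

def stabilized_weights(treatment, covariate):
    """Stabilized inverse probability weights sw_i = P(A=a_i) / P(A=a_i | L=l_i),
    with both probabilities estimated from empirical cell counts."""
    n = len(treatment)
    marg = Counter(treatment)                   # counts for P(A=a)
    joint = Counter(zip(covariate, treatment))  # counts of (L=l, A=a) cells
    strat = Counter(covariate)                  # counts for each stratum L=l
    weights = []
    for a, l in zip(treatment, covariate):
        p_marg = marg[a] / n
        p_cond = joint[(l, a)] / strat[l]
        weights.append(p_marg / p_cond)
    return weights

# Hypothetical binary treatment A and binary covariate L for 8 subjects
A = [1, 1, 0, 0, 1, 0, 1, 0]
L = [0, 0, 0, 1, 1, 1, 0, 0]
w = stabilized_weights(A, L)
```

A useful property of stabilized weights is that their empirical mean is 1 whenever every (L, A) cell is observed, which makes gross misspecification easy to spot.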
A spatial model of bird abundance as adjusted for detection probability
Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.
2009-01-01
Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
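The second step of the approach, adjusting predicted abundance for imperfect detection and the area effectively sampled, amounts to a simple division. A sketch with hypothetical values:

```python
def adjusted_abundance(raw_count, detection_prob, effective_area):
    """Adjust a predicted count for imperfect detection and effective
    sample area, yielding a density estimate. All inputs are hypothetical
    illustrations of the two-step adjustment described in the text."""
    if not 0.0 < detection_prob <= 1.0:
        raise ValueError("detection probability must be in (0, 1]")
    return raw_count / detection_prob / effective_area

# 12 birds counted, 60% detection probability, 2-ha effective sample area
density = adjusted_abundance(12, 0.6, 2.0)  # birds per hectare
```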
Normalized Texture Motifs and Their Application to Statistical Object Modeling
Newsam, S D
2004-03-09
A fundamental challenge in applying texture features to statistical object modeling is recognizing differently oriented spatial patterns. Rows of moored boats in remotely sensed images of harbors should be consistently labeled regardless of the orientation of the harbors, or of the boats within the harbors. This is not straightforward to do, however, when using anisotropic texture features to characterize the spatial patterns. Here we propose an elegant solution, termed normalized texture motifs, that uses a parametric statistical model to characterize the patterns regardless of their orientation. The models are learned in an unsupervised fashion from arbitrarily oriented training samples. The proposed approach is general enough to be used with a large category of orientation-selective texture features.
Neurophysiological model of the normal and abnormal human pupil
NASA Technical Reports Server (NTRS)
Krenz, W.; Robin, M.; Barez, S.; Stark, L.
1985-01-01
Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select the pupil condition (normal or an abnormality), the specific site along the neurological pathway (retina, hypothalamus, etc.), the drug class input (barbiturate, narcotic, etc.), the stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.
Physical models for the normal YORP and diurnal Yarkovsky effects
NASA Astrophysics Data System (ADS)
Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.
2016-06-01
We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.
NASA Astrophysics Data System (ADS)
England, J. F.
2006-12-01
Estimates of extreme floods and probabilities are needed in dam safety risk analysis. A multidisciplinary approach was developed to estimate extreme floods that integrated four main elements: radar hydrometeorology, stochastic storm transposition, paleoflood data, and 2D distributed rainfall-runoff modeling. The research focused on developing and applying a two-dimensional, distributed model to simulate extreme floods on the 12,000 km² Arkansas River above Pueblo, Colorado, with return periods up to 10,000 years. The four objectives were to: (1) develop a two-dimensional model suitable for large watersheds (area greater than 2,500 km²); (2) calibrate and validate the model to the June 1921 and May 1894 floods on the Arkansas River; (3) develop a flood frequency curve with the model using the stochastic storm transposition technique; and (4) conduct a sensitivity analysis for initial soil saturation, storm duration and area, and compare the flood frequency curve with gage and paleoflood data. The Two-dimensional Runoff, Erosion and EXport (TREX) model was developed as part of this research. Basin-average rainfall depths and probabilities were estimated using depth-area-duration (DAD) data and stochastic storm transposition with elliptical storms for input to TREX. From these extreme rainstorms, the TREX model was used to estimate a flood frequency curve for this large watershed. Model-generated peak flows were as large as 90,000 to 282,000 ft³/s at Pueblo for 100- to 10,000-year return periods, respectively. Model-generated frequency curves were generally comparable to peak flow and paleoflood data-based frequency curves after radar-based storm location and area limits were applied. The model provides a unique physically-based method for determining flood frequency curves under varied scenarios of antecedent moisture conditions, space and time variability of rainfall and watershed characteristics, and storm center locations.
NASA Astrophysics Data System (ADS)
Li, Zhanling; Li, Zhanjie; Li, Chengcheng
2014-05-01
Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most such studies in China have concerned basins in the humid and semi-humid south and east of the country, while for the inland river basins, which occupy about 35% of the country's area, studies are scarce, partly owing to limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peak over threshold (POT) method and the Generalized Pareto Distribution (GPD); the selection of the threshold and the inherent assumptions for POT series are elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma distributions, are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, the stability of model parameters, the return level plot and the inherent independence assumption of POT series, an optimum threshold of 340 m³/s is finally determined for high flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high flow extremes in the study area. The estimated high flows for long return periods demonstrate that the return level estimates become more uncertain as the return period increases. The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to
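The POT/GPD workflow above can be sketched compactly. Here the GPD is fitted to threshold excesses by the method of moments rather than the maximum likelihood estimation used in the study, and the synthetic flow series is purely illustrative:

```python
import random
import statistics

def pot_excesses(series, threshold):
    """Peak-over-threshold excesses y = x - u for all values above threshold u."""
    return [x - threshold for x in series if x > threshold]

def gpd_fit_moments(excesses):
    """Method-of-moments GPD fit (a simple stand-in for MLE):
    returns (shape xi, scale sigma) from the excess mean and variance."""
    m = statistics.mean(excesses)
    v = statistics.variance(excesses)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

random.seed(1)
# Synthetic "daily flows" with an exponential tail (mean 80), so the
# true GPD shape parameter of the excesses is 0 by memorylessness.
flows = [random.expovariate(1 / 80.0) for _ in range(200000)]
xi, sigma = gpd_fit_moments(pot_excesses(flows, 340.0))
```

For exponential-tailed data the fitted shape should come out near zero and the scale near the tail mean, which is a quick sanity check before applying the fit to real flow records.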
Modeling secondary microseismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, Lucia; Stutzmann, Eleonore; Capdeville, Yann; Ardhuin, Fabrice; Schimmel, Martin; Mangeney, Anne; Morelli, Andrea
2013-04-01
Seismic noise is the continuous oscillation of the ground recorded by seismic stations in the period band 5-20 s. In particular, secondary microseisms occur in the period band 5-12 s and are generated in the ocean by the interaction of ocean gravity waves. We present the theory for modeling secondary microseismic noise by normal mode summation. We show that the noise sources can be modeled by vertical forces and how to derive them from a realistic ocean wave model. The computation takes the bathymetry into account. We show how to compute the bathymetry excitation effect in a realistic Earth model using normal modes, with a comparison to the Longuet-Higgins (1950) approach. The strongest excitation areas in the oceans depend on the bathymetry and period and are different for each seismic mode. We derive an attenuation model that fits the vertical-component spectra well regardless of station location. We show that the fundamental mode of the Rayleigh wave is the dominant signal in seismic noise and is sufficient to reproduce the main features of the noise spectral amplitude. We also model the horizontal components. There is a discrepancy between real and synthetic spectra on the horizontal components that allows us to estimate the amount of Love waves, for which a different source mechanism is needed. Finally, we investigate noise generated in all the oceans around Africa and show that most of the noise recorded in Algeria (TAM station) is generated in the Northern Atlantic, and that there is a seasonal variability in the contribution of each ocean and sea. Moreover, we show that the Mediterranean Sea contributes significantly to the short period noise in winter.
Greis, Tillman; Helmholz, Kathrin; Schöniger, Hans Matthias; Haarstrick, Andreas
2012-06-01
In this study, a 3D urban groundwater model is presented that serves to calculate multispecies contaminant transport in the subsurface on the regional scale. The total model consists of two submodels, a groundwater flow model and a reactive transport model, and is validated against field data. The model equations are solved with the finite element method. A sensitivity analysis is carried out to identify the parameters of the flow, transport and reaction processes. Building on the latter, stochastic variation of the flow, transport and reaction input parameters and Monte Carlo simulation are used to calculate probabilities of pollutant occurrence in the domain. These probabilities can help determine future contamination hotspots and the associated extent of damage. Application and validation are demonstrated for a contaminated site in Braunschweig (Germany), where a vast plume of chlorinated ethenes pollutes the groundwater. With respect to field application, the modelling methods prove to be feasible and helpful tools for assessing monitored natural attenuation (MNA) and the risk that might be reduced by remediation actions.
Normality Index of Ventricular Contraction Based on a Statistical Model from FADS
Jiménez-Ángeles, Luis; Valdés-Cristerna, Raquel; Vallejo, Enrique; Bialostozky, David; Medina-Bañuelos, Verónica
2013-01-01
Radionuclide-based imaging is an alternative to evaluate ventricular function and synchrony and may be used as a tool for the identification of patients that could benefit from cardiac resynchronization therapy (CRT). In a previous work, we used Factor Analysis of Dynamic Structures (FADS) to analyze the contribution and spatial distribution of the 3 most significant factors (3-MSF) present in a dynamic series of equilibrium radionuclide angiography images. In this work, a probability density function model of the 3-MSF extracted from FADS for a control group is presented; an index based on the likelihood between the control group's contraction model and a sample of normal subjects is also proposed. This normality index was compared with those computed for two cardiopathic populations satisfying the clinical criteria to be considered as candidates for CRT. The proposed normality index provides a measure, consistent with the phase analysis currently used in the clinical environment, that is sensitive enough to show contraction differences between normal and abnormal groups. This suggests that the index can be related to the degree of severity of ventricular contraction dyssynchrony, and it therefore shows promise as a follow-up procedure for patients under CRT. PMID:23634177
Modeling and estimation of stage-specific daily survival probabilities of nests
Stanley, T.R.
2000-01-01
In studies of avian nesting success, it is often of interest to estimate stage-specific daily survival probabilities of nests. When data can be partitioned by nesting stage (e.g., incubation stage, nestling stage), piecewise application of the Mayfield method or Johnson's method is appropriate. However, when the data contain nests where the transition from one stage to the next occurred during the interval between visits, piecewise approaches are inappropriate. In this paper, I present a model that allows joint estimation of stage-specific daily survival probabilities even when the time of transition between stages is unknown. The model allows interval lengths between visits to nests to vary, and the exact time of failure of nests does not need to be known. The performance of the model at various sample sizes and interval lengths between visits was investigated using Monte Carlo simulations, and it was found that the model performed quite well: bias was small and confidence-interval coverage was at the nominal 95% rate. A SAS program for obtaining maximum likelihood estimates of parameters, and their standard errors, is provided in the Appendix.
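When the data can be partitioned cleanly by stage, the piecewise Mayfield estimator mentioned above is straightforward: the daily survival rate (DSR) per stage is one minus failures per exposure-day. A sketch with hypothetical exposure data:

```python
def mayfield_dsr(exposure_days, failures):
    """Mayfield estimator of the daily survival rate (DSR) for one stage:
    DSR = 1 - failures / exposure-days. Valid piecewise only when every
    visit interval can be assigned to a single stage, as noted in the text."""
    return 1.0 - failures / exposure_days

# Hypothetical field data: (exposure-days, failures) for each stage
incubation = mayfield_dsr(410.0, 12)   # DSR during incubation
nestling = mayfield_dsr(295.0, 18)     # DSR during nestling stage

# Probability a nest survives a 14-day incubation plus a 10-day nestling period
overall = incubation ** 14 * nestling ** 10
```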
A formalism to generate probability distributions for performance-assessment modeling
Kaplan, P.G.
1990-12-31
A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs.
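The entropy-maximization step has a simple discrete analogue: subject to a mean constraint, the maximum-entropy pmf is exponentially tilted, p_i ∝ exp(λx_i), with λ fixed by the constraint. A sketch, with the support and target mean chosen for illustration rather than taken from the beta-distribution case in the abstract:

```python
import math

def maxent_mean_constrained(xs, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy pmf over support xs subject to a fixed mean:
    p_i proportional to exp(lam * x_i), with lam found by bisection
    (the mean is monotonically increasing in lam)."""
    def mean_for(lam):
        ws = [math.exp(lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

# Support {0, 0.25, 0.5, 0.75, 1} with required mean 0.3: without the
# constraint the answer would be uniform; the constraint tilts it exponentially.
support = [0.0, 0.25, 0.5, 0.75, 1.0]
p = maxent_mean_constrained(support, 0.3)
```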
Modeling the probability of arsenic in groundwater in New England as a tool for exposure assessment.
Ayotte, Joseph D; Nolan, Bernard T; Nuckols, John R; Cantor, Kenneth P; Robinson, Gilpin R; Baris, Dalsu; Hayes, Laura; Karagas, Margaret; Bress, William; Silverman, Debra T; Lubin, Jay H
2006-06-01
We developed a process-based model to predict the probability of arsenic exceeding 5 microg/L in drinking water wells in New England bedrock aquifers. The model is being used for exposure assessment in an epidemiologic study of bladder cancer. One important study hypothesis that may explain increased bladder cancer risk is elevated concentrations of inorganic arsenic in drinking water. In eastern New England, 20-30% of private wells exceed the arsenic drinking water standard of 10 micrograms per liter. Our predictive model significantly improves the understanding of factors associated with arsenic contamination in New England. Specific rock types, high arsenic concentrations in stream sediments, geochemical factors related to areas of Pleistocene marine inundation and proximity to intrusive granitic plutons, and hydrologic and landscape variables relating to groundwater residence time increase the probability of arsenic occurrence in groundwater. Previous studies suggest that arsenic in bedrock groundwater may be partly from past arsenical pesticide use. Variables representing historic agricultural inputs do not improve the model, indicating that this source does not significantly contribute to current arsenic concentrations. Due to the complexity of the fractured bedrock aquifers in the region, well depth and related variables also are not significant predictors.
Empirical probability model of cold plasma environment in the Jovian magnetosphere
NASA Astrophysics Data System (ADS)
Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete
2015-04-01
We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although many sophisticated radiation models exist that treat energetic plasma (e.g. JOSE, GIRE, or Salammbo), only a limited number of simple models have been used for the cold plasma environment. By extending the existing cold plasma models toward the probability domain, we can predict the extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced by the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) and Bagenal and Delamere, 2011 (BD11). These models are scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model toward the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the desired percentile (e.g. 0.5 for the mean model). All the parameters in the model are described in mathematical form; therefore the computational resources needed are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission, namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions toward the latitudinal direction are included in the model to support the high latitude orbit of the JUICE spacecraft.
NASA Astrophysics Data System (ADS)
Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan
2016-05-01
Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, including the associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating method, a novel approach to obtain the coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which can provide lower and upper bounds for the purpose of quantifying and transmitting the uncertainty of structural parameters. The structural health monitoring data of Guanhe Bridge, a composite cable-stayed bridge with a large span, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy: the overlap ratio index of each modal frequency is over 89%, with small average absolute relative errors, and the CDF of the normal distribution coincides well with the measured frequencies of Guanhe Bridge. The validated multi-scale FE model may be further used in structural damage prognosis and safety prognosis.
Modelling the probability of ionospheric irregularity occurrence over African low latitude region
NASA Astrophysics Data System (ADS)
Mungufeni, Patrick; Jurua, Edward; Bosco Habarulema, John; Anguma Katrini, Simon
2015-06-01
This study presents models of the geomagnetically quiet time probability of occurrence of ionospheric irregularities over the African low latitude region. GNSS-derived ionospheric total electron content data from Mbarara, Uganda (0.60°S, 30.74°E, geographic, 10.22°S, magnetic) and Libreville, Gabon (0.35°N, 9.68°E, geographic, 8.05°S, magnetic) during the period 2001-2012 were used. First, we established the rate of change of total electron content index (ROTI) value associated with background ionospheric irregularity over the region. This was done by analysing GNSS carrier phases at the L-band frequencies L1 and L2 with the aim of identifying cycle slip events associated with ionospheric irregularities. We identified a total of 699 cycle slip events at the two stations. The corresponding median ROTI value at the epochs of the cycle slip events was 0.54 TECU/min. The probability of occurrence of ionospheric irregularities associated with ROTI ≥ 0.5 TECU/min was then modelled by fitting cubic B-splines to the data. The model captured the diurnal, seasonal, and solar-flux dependence patterns of the probability of occurrence of ionospheric irregularities. The model developed over Mbarara was validated with data over Mt. Baker, Uganda (0.35°N, 29.90°E, geographic, 9.25°S, magnetic), Kigali, Rwanda (1.94°S, 30.09°E, geographic, 11.62°S, magnetic), and Kampala, Uganda (0.34°N, 32.60°E, geographic, 9.29°S, magnetic). For the validation periods at Mt. Baker (approximately 137.64 km to the north-west), Kigali (approximately 162.42 km to the south-west), and Kampala (approximately 237.61 km to the north-east), the percentages of errors (the difference between the observed and the modelled probability of occurrence of ionospheric irregularity) smaller than 0.05 were 97.3%, 89.4%, and 81.3%, respectively.
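ROTI itself is simple to compute from a TEC time series: take the rate of TEC change (ROT, in TECU/min) between consecutive samples and its standard deviation over a sliding window. A sketch; the sampling interval and window length below are illustrative assumptions, not values from the study:

```python
import statistics

def roti(tec, dt_minutes=0.5, window_minutes=5.0):
    """Rate-of-TEC index: standard deviation of the rate of TEC change
    (ROT, TECU/min) over sliding windows of fixed length."""
    rot = [(b - a) / dt_minutes for a, b in zip(tec, tec[1:])]
    w = max(2, int(window_minutes / dt_minutes))
    return [statistics.pstdev(rot[i:i + w]) for i in range(len(rot) - w + 1)]

# A smooth (quiet-time) TEC ramp: constant ROT, hence ROTI of essentially zero.
quiet = [10.0 + 0.01 * i for i in range(40)]
vals = roti(quiet)
```

An irregularity-disturbed series would instead show rapid TEC fluctuations, a large ROT spread within the window, and thus ROTI values of the order of the 0.5 TECU/min threshold used in the study.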
The basic reproduction number and the probability of extinction for a dynamic epidemic model.
Neal, Peter
2012-03-01
We consider the spread of an epidemic through a population divided into n sub-populations, in which individuals move between populations according to a Markov transition matrix Σ and infectives can only make infectious contacts with members of their current population. Expressions for the basic reproduction number, R₀, and the probability of extinction of the epidemic are derived. It is shown that, in contrast to contact distribution models, the distribution of the infectious period affects both the basic reproduction number and the probability of extinction of the epidemic in the limit as the total population size N→∞. The interactions between the infectious period distribution and the transition matrix Σ mean that it is not possible to draw general conclusions about the effects on R₀ and the probability of extinction. However, it is shown that for n=2, the basic reproduction number, R₀, is maximised by a constant-length infectious period and is decreasing in ς, the speed of movement between the two populations.
NASA Astrophysics Data System (ADS)
Zhang, L.; Xu, M.; Huang, M.; Yu, G.
2009-11-01
Modeling the ecosystem carbon cycle at regional and global scales is crucial to predicting future global atmospheric CO2 concentrations and thus global temperature, and it features large uncertainties due mainly to limitations in our knowledge and in the climate and ecosystem models. There is a growing body of research on parameter estimation against available carbon measurements to reduce model prediction uncertainty at regional and global scales. However, the systematic errors in the observation data have rarely been investigated in the optimization procedures of previous studies. In this study, we examined the feasibility of reducing the impact of systematic errors on parameter estimation using normalization methods, and evaluated the effectiveness of three such methods (maximum normalization, min-max normalization, and z-score normalization) for inverting key parameters, for example the maximum carboxylation rate (Vcmax,25) at a reference temperature of 25°C, in a process-based ecosystem model for deciduous needle-leaf forests in northern China constrained by leaf area index (LAI) data. The LAI data used for parameter estimation were composed of the model output LAI (truth) plus various designated systematic and random errors. We found that the estimation of Vcmax,25 could be severely biased with the composite LAI if no normalization was applied. Compared with the maximum normalization and min-max normalization methods, the z-score normalization method was the most robust in reducing the impact of systematic errors on parameter estimation. The most probable values of Vcmax,25 inverted from the z-score normalized LAI data were consistent with the true parameter values in the model inputs, though the estimation uncertainty increased with the magnitude of the random errors in the observations. We concluded that the z-score normalization method should be applied to the observed or measured data to improve model parameter
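The robustness of z-score normalization to systematic errors follows from its invariance: any positive linear distortion a·x + b of the observations leaves the z-scores unchanged, so additive and multiplicative biases cancel out before the data reach the optimizer. A minimal demonstration with hypothetical LAI values:

```python
import statistics

def z_score(xs):
    """z-score normalization: (x - mean) / standard deviation."""
    mu = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

lai_true = [0.5, 1.2, 2.8, 4.1, 5.0, 3.3]
# Systematically biased observations: multiplicative (1.15x) plus additive (0.4)
lai_biased = [1.15 * x + 0.4 for x in lai_true]

# z-scores of the true and biased series are identical, which is why this
# normalization shields parameter estimation from such systematic errors.
same = all(abs(a - b) < 1e-9
           for a, b in zip(z_score(lai_true), z_score(lai_biased)))
```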
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
2015-07-01
We have been developing a quantitative diagnostic method for liver fibrosis using an ultrasound image. In our previous study, we proposed a multi-Rayleigh model to express the probability density function of the echo amplitude from fibrotic liver and a probability imaging method of tissue characteristics based on the multi-Rayleigh model. In an evaluation using the multi-Rayleigh model, we found that the modeling error of the multi-Rayleigh model was increased by the effect of nonspeckle signals. In this paper, we propose a method of removing nonspeckle signals using the modeling error of the multi-Rayleigh model and evaluate the resulting probability image of tissue characteristics. Removing the nonspeckle signals decreased the modeling error of the multi-Rayleigh model and yielded a correct probability image of tissue characteristics. We conclude that the removal of nonspeckle signals is important for the quantitative evaluation of liver fibrosis.
Insight into Vent Opening Probability in Volcanic Calderas in the Light of a Sill Intrusion Model
NASA Astrophysics Data System (ADS)
Giudicepietro, Flora; Macedonio, G.; D'Auria, L.; Martini, M.
2016-05-01
The aim of this paper is to discuss a novel approach to providing insight into the probability of vent opening in calderas, using a dynamic model of sill intrusion. The evolution of the stress field is the main factor that controls vent opening processes in volcanic calderas. On the basis of previous studies, we consider the intrusion of sills to be one of the most common mechanisms governing caldera unrest. We have therefore investigated the spatial and temporal evolution of the stress field due to the emplacement of a sill at shallow depth to provide insight into vent opening probability. We carried out several numerical experiments with a physical model to assess the role of magma properties (viscosity), host rock characteristics (Young's modulus and thickness), and the dynamics of the intrusion process (mass flow rate) in controlling the stress field. Our experiments highlight that high magma viscosity produces larger stress values, while low magma viscosity leads to lower stresses and favors the radial spreading of the sill. A high Young's modulus of the host rock likewise gives high stress intensity, whereas low values of Young's modulus produce a dramatic reduction of the stress associated with the intrusive process. The maximum intensity of tensile stress is concentrated at the front of the sill and propagates radially with it over time. In our simulations, the maximum values of tensile stress occur in ring-shaped areas with radii ranging between 350 m and 2500 m from the injection point, depending on the model parameters. The probability of vent opening is higher in these areas.
Universal image compression using multiscale recurrent patterns with adaptive probability model.
de Lima Filho, E B; da Silva, E B; de Carvalho, M B; Pinage, F S
2008-04-01
In this work, we further develop the multidimensional multiscale parser (MMP) algorithm, a recently proposed universal lossy compression method which has been successfully applied to images as well as other types of data, such as video and ECG signals. The MMP is based on approximate multiscale pattern matching, encoding blocks of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using expanded and contracted versions of concatenations of previously encoded blocks. This means that MMP builds its own dictionary while the input data are being encoded, using segments of the input itself, which lends it a universal flavor. Its flexible structure allows data-specific extensions to be easily added to the base algorithm. Often, the signals to be encoded belong to a narrow class, such as that of smooth images. In these cases, one expects that some improvement can be achieved by introducing some knowledge about the source to be encoded. In this paper, we use the assumption about the smoothness of the source in order to create good context models for the probability of blocks in the dictionary. Such probability models are estimated by considering smoothness constraints around causal block boundaries. In addition, we refine the obtained probability models by also exploiting the existing knowledge about the original scale of the included blocks during the dictionary updating process. Simulation results have shown that these developments allow significant improvements over the original MMP for smooth images, while keeping its state-of-the-art performance for more complex, less smooth ones, thus improving MMP's universal character.
Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
NASA Astrophysics Data System (ADS)
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open-source software package R, and select the best models using the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits, with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (by maximum likelihood; a computational problem), yet CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, this may be impossible in the other direction. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
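The fit-and-test workflow described in this abstract can be sketched as follows. The abstract uses R; this is an analogous, illustrative sketch in Python with SciPy's `norminvgauss`, using synthetic data in place of the real FX closing prices.

```python
# Hedged sketch: fit a normal inverse Gaussian (NIG) distribution by
# maximum likelihood and check the fit with a Kolmogorov-Smirnov test.
# The "prices" below are synthetic and illustrative, not real FX data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
prices = stats.norminvgauss(a=2.0, b=0.5, loc=100.0, scale=5.0).rvs(1000, random_state=rng)

# Maximum-likelihood fit of the four NIG parameters (a, b, loc, scale).
params = stats.norminvgauss.fit(prices)

# KS test against the fitted distribution; a large p-value means the NIG
# hypothesis cannot be rejected at the usual significance levels.
ks_stat, p_value = stats.kstest(prices, 'norminvgauss', args=params)
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.3f}")
```

Fitting can fail to converge for some series (as the abstract reports for JPY/CHF), in which case `fit` may raise or return poor parameter estimates.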
Velocity-gradient probability distribution functions in a lagrangian model of turbulence
NASA Astrophysics Data System (ADS)
Moriconi, L.; Pereira, R. M.; Grigorio, L. S.
2014-10-01
The Recent Fluid Deformation Closure (RFDC) model of lagrangian turbulence is recast in path-integral language within the framework of the Martin-Siggia-Rose functional formalism. In order to derive analytical expressions for the velocity-gradient probability distribution functions (vgPDFs), we carry out noise renormalization in the low-frequency regime and find approximate extrema for the Martin-Siggia-Rose effective action. We verify, with the help of Monte Carlo simulations, that the vgPDFs so obtained yield a close description of the single-point statistical features implied by the original RFDC stochastic differential equations.
Duffy, Stephen
2013-09-09
This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.
NASA Astrophysics Data System (ADS)
Takeda, Katsunori; Hattori, Tetsuo; Kawano, Hiromichi
In real-time analysis and forecasting of time series data, it is important to detect structural changes as immediately, correctly, and simply as possible, and to rebuild the prediction model as soon as possible after each change point. For this kind of time series analysis, multiple linear regression models are generally used. In this paper, we present two methods, the Sequential Probability Ratio Test (SPRT) and the Chow test, which is well known in economics, and report experimental evaluations of their effectiveness in change detection using multiple regression models. Moreover, we extend the definition of the detected change point in the SPRT method and show the resulting improvement in change detection accuracy.
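A minimal sketch of Wald's SPRT applied to the residuals of a fitted regression model, in the spirit of the change-detection method above. The Gaussian mean-shift hypotheses, the thresholds, and the reset-after-accepting-H0 rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: sequential probability ratio test (SPRT) for detecting a
# sustained mean shift in regression residuals. All parameters are
# illustrative; the paper's extended change-point definition is not used.
import math

def sprt_change_point(residuals, sigma=1.0, delta=2.0, alpha=0.05, beta=0.05):
    """Return the index at which H1 (mean shifted by delta) is accepted,
    or None if the cumulative log-likelihood ratio never crosses it."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 (change detected)
    lower = math.log(beta / (1.0 - alpha))   # accept H0 (no change), reset
    llr = 0.0
    for i, r in enumerate(residuals):
        # Gaussian log-likelihood ratio for N(delta, sigma^2) vs N(0, sigma^2)
        llr += (delta / sigma**2) * (r - delta / 2.0)
        if llr >= upper:
            return i
        if llr <= lower:
            llr = 0.0  # restart the test after accepting H0
    return None

# Residuals: in control for 20 steps, then a sustained shift of about +2.
residuals = [0.1, -0.2, 0.05, 0.0, -0.1] * 4 + [2.1, 1.9, 2.2, 2.0, 1.8]
print(sprt_change_point(residuals))  # → 21 (detected shortly after the shift)
```

The delay between the true change (index 20) and detection (index 21) is the usual SPRT trade-off between detection speed and the error rates alpha and beta.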
NASA Astrophysics Data System (ADS)
Li, Qi-Lang; Wong, S. C.; Min, Jie; Tian, Shuo; Wang, Bing-Hong
2016-08-01
This study examines the cellular automata traffic flow model, which considers the heterogeneity of vehicle acceleration and the delay probability of vehicles. Computer simulations are used to identify three typical phases in the model: free-flow, synchronized flow, and wide moving traffic jam. In the synchronized flow region of the fundamental diagram, the low and high velocity vehicles compete with each other and play an important role in the evolution of the system. The analysis shows that there are two types of bistable phases. However, in the original Nagel and Schreckenberg cellular automata traffic model, there are only two kinds of traffic conditions, namely, free-flow and traffic jams. The synchronized flow phase and bistable phase have not been found.
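The original Nagel-Schreckenberg model referenced above can be sketched in a few lines. This illustrative version uses a single delay probability p and homogeneous acceleration, so it reproduces only the basic free-flow/jam phenomenology, not the heterogeneous extension studied in the paper.

```python
# Hedged sketch of the Nagel-Schreckenberg (NaSch) cellular automaton on a
# periodic road. Parameters (v_max, p, density) are illustrative.
import random

def nasch_step(positions, velocities, road_length, v_max=5, p=0.3, rng=random):
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_v = list(velocities)
    for k, i in enumerate(order):
        j = order[(k + 1) % n]                        # car ahead (periodic road)
        gap = (positions[j] - positions[i] - 1) % road_length
        new_v[i] = min(new_v[i] + 1, v_max)           # 1. accelerate
        new_v[i] = min(new_v[i], gap)                 # 2. brake to avoid collision
        if new_v[i] > 0 and rng.random() < p:         # 3. random delay
            new_v[i] -= 1
    new_pos = [(x + v) % road_length for x, v in zip(positions, new_v)]  # 4. move
    return new_pos, new_v

random.seed(1)
pos, vel = list(range(0, 100, 10)), [0] * 10          # 10 cars on a 100-cell ring
for _ in range(50):
    pos, vel = nasch_step(pos, vel, road_length=100)
print(sorted(pos), vel)
```

The heterogeneity studied in the abstract would replace the uniform `v_max + 1` acceleration and single `p` with per-vehicle values.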
A probabilistic model for predicting the probability of no-show in hospital appointments.
Alaeddini, Adel; Yang, Kai; Reddy, Chandan; Yu, Susan
2011-06-01
The number of no-shows has a significant impact on the revenue, cost and resource utilization for almost all healthcare systems. In this study we develop a hybrid probabilistic model based on logistic regression and empirical Bayesian inference to predict the probability of no-shows in real time using both general patient social and demographic information and individual clinical appointments attendance records. The model also considers the effect of appointment date and clinic type. The effectiveness of the proposed approach is validated based on a patient dataset from a VA medical center. Such an accurate prediction model can be used to enable a precise selective overbooking strategy to reduce the negative effect of no-shows and to fill appointment slots while maintaining short wait times.
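One way to sketch the hybrid population-plus-individual idea is a logistic prior combined with a Beta-Binomial update on the patient's own attendance history. The coefficients, feature choice, and pseudo-count strength below are illustrative assumptions, not the paper's fitted model.

```python
# Hedged sketch: combine a population-level logistic regression prior with
# an empirical-Bayes (Beta-Binomial) update from the individual patient's
# attendance record. All numeric values are illustrative.
import math

def population_no_show_prob(age, distance_km):
    # Illustrative logistic model on two hypothetical demographic features.
    z = 0.8 - 0.02 * age + 0.05 * distance_km
    return 1.0 / (1.0 + math.exp(-z))

def personalized_no_show_prob(prior_p, past_no_shows, past_appointments, strength=10.0):
    # Beta-Binomial update: the prior acts like `strength` pseudo-appointments,
    # so patients with long histories are driven mostly by their own record.
    alpha = prior_p * strength + past_no_shows
    beta = (1.0 - prior_p) * strength + (past_appointments - past_no_shows)
    return alpha / (alpha + beta)

prior = population_no_show_prob(age=40, distance_km=12)
p = personalized_no_show_prob(prior, past_no_shows=1, past_appointments=20)
print(f"prior={prior:.3f}, personalized={p:.3f}")
```

Here a patient with only 1 no-show in 20 visits ends up well below the population prior, which is the behaviour a selective overbooking policy would exploit.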
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell for the large scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
Classical signal model reproducing quantum probabilities for single and coincidence detections
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei; Nilsson, Börje; Nordebo, Sven
2012-05-01
We present a simple classical (random) signal model reproducing Born's rule. The crucial point of our approach is that the detector's threshold and the calibration procedure have to be treated not as mere experimental technicalities, but as basic counterparts of the theoretical model. We call this approach the threshold signal detection model (TSD). The coincidence detection experiment performed by Grangier in 1986 [22] played a crucial role in the rejection of (semi-)classical field models in favour of quantum mechanics (QM): the impossibility of resolving the wave-particle duality in favour of a purely wave model. QM predicts that the relative probability of coincidence detection, the coefficient g(2)(0), is zero (for one-photon states), but in (semi-)classical models g(2)(0) >= 1. In TSD the coefficient g(2)(0) decreases as 1/ɛd², where ɛd > 0 is the detection threshold. Hence, by increasing this threshold an experimenter can make the coefficient g(2)(0) essentially less than 1. The TSD prediction can be tested experimentally in new Grangier-type experiments with detailed monitoring of the dependence of the coefficient g(2)(0) on the detection threshold. Structurally, our model has some similarity with the prequantum model of Grossing et al. Subquantum stochasticity is composed of two counterparts: a stationary process in the space of internal degrees of freedom and a random-walk-type motion describing the temporal dynamics.
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Hughes, J. D.; Chen, J.; Dutta, D.; Vaze, J.
2014-12-01
Achieving predictive success is a major challenge in hydrological modelling. Predictive metrics indicate whether models and parameters are appropriate for impact assessment, design, planning and management, forecasting and underpinning policy. It is often found that very different parameter sets and model structures are equally acceptable system representations (commonly described as equifinality). Furthermore, parameters that produce the best goodness of fit during a calibration period may often yield poor results outside of that period. A calibration method is presented that uses a recursive Bayesian filter to estimate the probability of consistent performance of parameter sets in different sub-periods. The result is a probability distribution for each specified performance interval. This generic method utilises more information within time-series data than is typically used for calibration, and could be adopted for different types of time-series modelling applications. Where conventional calibration methods implicitly identify the best performing parameterisations on average, the new method looks at the consistency of performance during sub-periods. The proposed calibration method, therefore, can be used to avoid heavy weighting toward rare periods of good agreement. The method is trialled in a conceptual river system model called the Australian Water Resources Assessments River (AWRA-R) model in the Murray-Darling Basin, Australia. The new method is tested via cross-validation and results are compared to a traditional split-sample calibration/validation to evaluate the new technique's ability to predict daily streamflow. The results showed that the new calibration method could produce parameterisations that performed better in validation periods than optimum calibration parameter sets. The method shows the ability to improve predictive performance and provide more realistic flux terms compared with traditional split-sample calibration methods.
NASA Astrophysics Data System (ADS)
James, P.
2011-12-01
With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment, and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well-known methods rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance
Predicting Mortality in Low-Income Country ICUs: The Rwanda Mortality Probability Model (R-MPM)
Kiviri, Willy; Fowler, Robert A.; Mueller, Ariel; Novack, Victor; Banner-Goodspeed, Valerie M.; Weinkauf, Julia L.; Talmor, Daniel S.; Twagirumugabe, Theogene
2016-01-01
Introduction Intensive Care Unit (ICU) risk prediction models are used to compare outcomes for quality improvement initiatives, benchmarking, and research. While such models provide robust tools in high-income countries, an ICU risk prediction model has not been validated in a low-income country where ICU population characteristics are different from those in high-income countries, and where laboratory-based patient data are often unavailable. We sought to validate the Mortality Probability Admission Model, version III (MPM0-III) in two public ICUs in Rwanda and to develop a new Rwanda Mortality Probability Model (R-MPM) for use in low-income countries. Methods We prospectively collected data on all adult patients admitted to Rwanda’s two public ICUs between August 19, 2013 and October 6, 2014. We described demographic and presenting characteristics and outcomes. We assessed the discrimination and calibration of the MPM0-III model. Using stepwise selection, we developed a new logistic model for risk prediction, the R-MPM, and used bootstrapping techniques to test for optimism in the model. Results Among 427 consecutive adults, the median age was 34 (IQR 25–47) years and mortality was 48.7%. Mechanical ventilation was initiated for 85.3%, and 41.9% received vasopressors. The MPM0-III predicted mortality with area under the receiver operating characteristic curve of 0.72 and Hosmer-Lemeshow chi-square statistic p = 0.024. We developed a new model using five variables: age, suspected or confirmed infection within 24 hours of ICU admission, hypotension or shock as a reason for ICU admission, Glasgow Coma Scale score at ICU admission, and heart rate at ICU admission. Using these five variables, the R-MPM predicted outcomes with area under the ROC curve of 0.81 with 95% confidence interval of (0.77, 0.86), and Hosmer-Lemeshow chi-square statistic p = 0.154. Conclusions The MPM0-III has modest ability to predict mortality in a population of Rwandan ICU patients. The R
SAR amplitude probability density function estimation based on a generalized Gaussian model.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B
2006-06-01
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient R package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities.
Serfling, Robert; Ogola, Gerald
2016-02-10
Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
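A first-order sketch of the core idea: if each biopsy core independently samples the gland, the per-core hit probability can be approximated by the tumor-to-prostate volume ratio, making the positive-core count Binomial. This is a deliberate simplification of the paper's model, which treats nodule geometry and core placement more carefully.

```python
# Hedged sketch: Binomial model for the number of positive biopsy cores,
# with per-core hit probability p = tumor volume / prostate volume.
# Volumes (in mL) are illustrative.
from math import comb

def positive_core_pmf(n_cores, tumor_vol, prostate_vol):
    p = tumor_vol / prostate_vol
    return [comb(n_cores, k) * p**k * (1 - p)**(n_cores - k)
            for k in range(n_cores + 1)]

pmf = positive_core_pmf(n_cores=12, tumor_vol=2.0, prostate_vol=40.0)
print(f"P(0 positives) = {pmf[0]:.3f}, P(>=2 positives) = {1 - pmf[0] - pmf[1]:.3f}")
```

Even this crude version shows the paper's central point: for a fixed tumor size, a larger prostate volume lowers p, so the same count of positive cores carries different clinical significance in large versus small glands.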
Sato, Tatsuhiko; Hamada, Nobuyuki
2014-01-01
We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.
Modeling Normal Shock Velocity Curvature Relation for Heterogeneous Explosives
NASA Astrophysics Data System (ADS)
Yoo, Sunhee; Crochet, Michael; Pemberton, Steve
2015-06-01
The normal shock velocity and curvature, Dn(κ) , relation on a detonation shock surface is an important functional quantity for understanding the shock strength exerted against the material interface between a main explosive charge and the case of an explosive munition. The Dn(κ) relation is considered an intrinsic property of an explosive, and can be experimentally deduced by rate stick tests at various charge diameters. However, experimental measurements of the Dn(κ) relation for heterogeneous explosives such as PBXN-111 are challenging due to the non-smoothness and asymmetry usually observed in the experimental streak records of explosion fronts. Out of the many possibilities, the asymmetric character may be attributed to the heterogeneity of the explosives, a hypothesis which raises two questions: (1) is there any simple hydrodynamic model that can explain such an asymmetric shock evolution, and (2) what statistics can be derived for the asymmetry using simulations with defined structural heterogeneity in the unreacted explosive? Saenz, Taylor and Stewart studied constitutive models for derivation of the Dn(κ) relation on porous `homogeneous' explosives and carried out simulations in a spherical coordinate frame. In this paper, we extend their model to account for `heterogeneity' and present shock evolutions in heterogeneous explosives using 2-D hydrodynamic simulations with some statistical examination. (96TW-2015-0004)
A radiation damage repair model for normal tissues
NASA Astrophysics Data System (ADS)
Partridge, Mike
2008-07-01
A cellular Monte Carlo model describing radiation damage and repair in normal epithelial tissues is presented. The deliberately simplified model includes cell cycling, cell motility and radiation damage response (cell cycle arrest and cell death) only. Results demonstrate that the model produces a stable equilibrium system for mean cell cycle times in the range 24-96 h. Simulated irradiation of these stable equilibrium systems produced a range of responses that are shown to be consistent with experimental and clinical observation, including (i) re-epithelialization of radiation-induced lesions by a mixture of cell migration into the wound and repopulation at the periphery; (ii) observed radiosensitivity that is quantitatively consistent with both rate of induction of irreparable DNA lesions and, independently, with the observed acute oral and pharyngeal mucosal reactions to radiotherapy; (iii) an observed time between irradiation and maximum toxicity that is consistent with experimental data for skin; (iv) quantitatively accurate predictions of low-dose hyper-radiosensitivity; (v) Gompertzian repopulation for very small lesions (~2000 cells) and (vi) a linear rate of re-epithelialization of 5-10 µm h-1 for large lesions (>15 000 cells).
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.
Stein, Richard R; Marks, Debora S; Sander, Chris
2015-07-01
Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866
Ellis, Andrew M.; Yang Shengfu
2007-09-15
A theoretical model has been developed to describe the probability of charge transfer from helium cations to dopant molecules inside helium nanodroplets following electron-impact ionization. The location of the initial charge site inside helium nanodroplets subject to electron impact has been investigated and is found to play an important role in understanding the ionization of dopants inside helium droplets. The model is consistent with a charge migration process in small helium droplets that is strongly directed by intermolecular forces originating from the dopant, whereas for large droplets (tens of thousands of helium atoms and larger) the charge migration increasingly takes on the character of a random walk. This suggests a clear droplet size limit for the use of electron-impact mass spectrometry for detecting molecules in helium droplets.
Li, Xin; Li, Ye
2015-01-01
Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or predict the breathing motion to allow beam delivery under free breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, which, however, is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. The defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the sense of least squares, and the evaluation result shows that, our presented model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM. PMID:26736208
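A sketch of the idea above (not the authors' exact formulation): a single respiratory cycle can be modeled as a scaled Rayleigh probability density, whose slow right-hand decay mimics a slowly descending end-of-exhale phase, and fit by least squares. The cycle shape, amplitude, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_cycle(t, a, sigma):
    """One respiratory cycle shaped like a scaled Rayleigh pdf:
    a fast rise followed by a slowly descending end-of-exhale tail."""
    return a * (t / sigma**2) * np.exp(-t**2 / (2 * sigma**2))

# Synthetic "measured" cycle with a little noise
t = np.linspace(0.01, 5.0, 200)
rng = np.random.default_rng(0)
y = rayleigh_cycle(t, 2.0, 1.2) + rng.normal(0, 0.01, t.size)

# Least-squares fit, as in the paper's evaluation
(a_hat, sigma_hat), _ = curve_fit(rayleigh_cycle, t, y, p0=(1.0, 1.0))
print(a_hat, sigma_hat)  # close to (2.0, 1.2)
```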
NASA Technical Reports Server (NTRS)
Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.
1996-01-01
Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean sub-model in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing sub-models were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
An investigation of a quantum probability model for the constructive effect of affective evaluation.
White, Lee C; Barqué-Duran, Albert; Pothos, Emmanuel M
2016-01-13
The idea that choices can have a constructive effect has received a great deal of empirical support. The act of choosing appears to influence subsequent preferences for the options available. Recent research has proposed a cognitive model based on quantum probability (QP), which suggests that whether or not a participant provides an affective evaluation for a positively or negatively valenced stimulus can also be constructive and so, for example, influence the affective evaluation of a second oppositely valenced stimulus. However, there are some outstanding methodological questions in relation to this previous research. This paper reports the results of three experiments designed to resolve these questions. Experiment 1, using a binary response format, provides partial support for the interaction predicted by the QP model; and Experiment 2, which controls for the length of time participants have to respond, fully supports the QP model. Finally, Experiment 3 sought to determine whether the key effect can generalize beyond affective judgements about visual stimuli. Using judgements about the trustworthiness of well-known people, the predictions of the QP model were confirmed. Together, these three experiments provide further support for the QP model of the constructive effect of simple evaluations. PMID:26621993
Probability models for theater nuclear warfare. Final report, June 1988-September 1989
Youngren, M.A.
1989-09-01
This paper proposes specific probabilistic approaches to address several major problems associated with the representation of tactical nuclear warfare at the theater level. The first problem is identifying the locations of small units (potential nuclear targets), such as companies or battalions, within theater-level conventional scenarios or model outputs. Current approaches to identifying these small-unit locations fail to take into account the variability that might be realized in any specific battle. A two-dimensional multivariate model is proposed to describe uncertainty about the precise location of the potential targets. The second major problem lies in the interface between theater-level nuclear analyses and expected-value simulations of conventional battle. An expected-value model demands a single input to represent the effect of a nuclear exchange. However, a theater-level nuclear exchange may generate many different outcomes with significantly different effects. The probability models described in this paper may be used as a research tool to estimate the sensitivity of exchange outcomes to various data and assumptions; as a surrogate for detailed, complex simulation models; or as an estimator of the sample space of all possible outcomes of a theater nuclear exchange.
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since lithium-ion batteries are assembled into packs in large numbers and are complex electrochemical devices, their monitoring and safety are key issues for applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is employed in combination with an electrochemical model to obtain more accurate voltage predictions. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
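The equivalent-circuit side of such an estimator can be sketched with a first-order RC model (one ohmic resistance R0 plus a single RC pair) stepped in discrete time. The `simulate` helper, parameter values, and linear OCV curve below are toy assumptions for illustration, not the paper's nth-order model or adaptive estimator.

```python
import numpy as np

def simulate(i_load, dt, q_ah, r0, r1, c1, soc0, ocv):
    """First-order RC equivalent circuit model: terminal voltage is
    OCV(SOC) minus the ohmic drop I*R0 and the RC polarization voltage."""
    soc, v_rc, v_out = soc0, 0.0, []
    for i in i_load:
        soc -= i * dt / (q_ah * 3600.0)            # coulomb counting
        v_rc += dt * (-v_rc / (r1 * c1) + i / c1)  # RC pair dynamics
        v_out.append(ocv(soc) - i * r0 - v_rc)
    return np.array(v_out), soc

ocv = lambda s: 3.0 + 1.2 * s      # toy linear OCV-SOC curve
# 600 s constant 2 A discharge of a 2 Ah cell
v, soc = simulate(np.full(600, 2.0), 1.0, 2.0, 0.05, 0.02, 800.0, 1.0, ocv)
print(round(soc, 3), round(v[-1], 3))
```

In a full estimator this voltage prediction would be compared against the measured terminal voltage to correct the SOC state; here the model is only run forward.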
Modeling accident frequencies as zero-altered probability processes: an empirical inquiry.
Shankar, V; Milton, J; Mannering, F
1997-11-01
This paper presents an empirical inquiry into the applicability of zero-altered counting processes to roadway section accident frequencies. The intent of such a counting process is to distinguish sections of roadway that are truly safe (near zero-accident likelihood) from those that are unsafe but happen to have zero accidents observed during the period of observation (e.g. one year). Traditional applications of Poisson and negative binomial accident frequency models do not account for this distinction and thus can produce biased coefficient estimates because of the preponderance of zero-accident observations. Zero-altered probability processes such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) distributions are examined and proposed for accident frequencies by roadway functional class and geographic location. The findings show that the ZIP structure models are promising and have great flexibility in uncovering processes affecting accident frequencies on roadway sections observed with zero accidents and those with observed accident occurrences. This flexibility allows highway engineers to better isolate design factors that contribute to accident occurrence and also provides additional insight into variables that determine the relative accident likelihoods of safe versus unsafe roadways. The generic nature of the models and the relatively good power of the Vuong specification test used in the non-nested hypotheses of model specifications offers roadway designers the potential to develop a global family of models for accident frequency prediction that can be embedded in a larger safety management system. PMID:9370019
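The zero-inflation idea above can be written down directly: the observed count is zero either because the section is "truly safe" or because a Poisson process happened to produce zero. A minimal sketch of the ZIP probability mass function, with illustrative parameter values:

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(y, lam, p_zero):
    """Zero-inflated Poisson: with probability p_zero a roadway section
    is 'truly safe' (always zero accidents); otherwise its accident
    count is Poisson(lam)."""
    base = (1 - p_zero) * poisson.pmf(y, lam)
    return np.where(y == 0, p_zero + base, base)

lam, p_zero = 2.0, 0.3
# The zero-count probability is inflated relative to a plain Poisson:
print(zip_pmf(np.array([0]), lam, p_zero)[0], poisson.pmf(0, lam))
```

Fitting `lam` and `p_zero` (typically as functions of roadway covariates) by maximum likelihood gives the ZIP regression the paper examines; the ZINB variant replaces the Poisson with a negative binomial.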
NASA Astrophysics Data System (ADS)
Adeloye, Adebayo J.; Soundharajan, Bankaru-Swamy; Musto, Jagarkhin N.; Chiamsathit, Chuthamat
2015-10-01
This study carried out an assessment of the Phien generalised storage-yield-probability (S-Y-P) models using recorded runoff data of six global rivers that were carefully selected to satisfy the criteria specified for the models. Using stochastic hydrology, 2000 replicates of the historic records were generated and used to drive the sequent peak algorithm (SPA) for estimating the capacity of hypothetical reservoirs at the respective sites. The resulting ensembles of reservoir capacity estimates were then analysed to determine the mean, standard deviation and quantiles, which were compared with the corresponding estimates produced by the Phien models. The results showed that the Phien models produced a mix of significant under- and over-predictions of the mean and standard deviation of capacity, with under-prediction occurring as the level of development decreases. On the other hand, consistent over-prediction was obtained at full regulation for all the rivers analysed. The biases in the reservoir capacity quantiles were equally high, implying that the limitations of the Phien models affect the entire distribution function of reservoir capacity. Because these errors are so large, it is recommended that the Phien relationships be avoided for reservoir planning.
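The sequent peak algorithm driven by each synthetic record reduces to a simple running-deficit recursion. A single-pass sketch with hypothetical inflow volumes (the full SPA conventionally repeats the record once to handle carry-over between cycles):

```python
def spa_capacity(inflows, draft):
    """Sequent peak algorithm (single pass): the required reservoir
    capacity is the largest running deficit
    K_t = max(0, K_{t-1} + draft - inflow_t)."""
    k, k_max = 0.0, 0.0
    for q in inflows:
        k = max(0.0, k + draft - q)
        k_max = max(k_max, k)
    return k_max

inflow = [5, 7, 2, 1, 6, 8, 3, 2]   # hypothetical annual runoff volumes
print(spa_capacity(inflow, draft=4.0))  # → 5.0
```

Running this over each of the 2000 stochastic replicates yields the ensemble of capacity estimates whose mean, standard deviation, and quantiles the study compares against the Phien models.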
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of Finnish high-level nuclear waste. The tests were conducted in a block of crystalline bedrock of about 0.03 km3 that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured-rock facies. Four facies are defined, ranging from sparsely fractured to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration, using either hydraulic heads alone or both hydraulic heads and PFL flow rates as calibration targets. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models for simulating fluid flow in fractured geological media.
McClure, Meredith L; Burdett, Christopher L; Farnsworth, Matthew L; Lutman, Mark W; Theobald, David M; Riggs, Philip D; Grear, Daniel A; Miller, Ryan S
2015-01-01
Wild pigs (Sus scrofa), also known as wild swine, feral pigs, or feral hogs, are one of the most widespread and successful invasive species around the world. Wild pigs have been linked to extensive and costly agricultural damage and present a serious threat to plant and animal communities due to their rooting behavior and omnivorous diet. We modeled the current distribution of wild pigs in the United States to better understand the physiological and ecological factors that may determine their invasive potential and to guide future study and eradication efforts. Using national-scale wild pig occurrence data reported between 1982 and 2012 by wildlife management professionals, we estimated the probability of wild pig occurrence across the United States using a logistic discrimination function and environmental covariates hypothesized to influence the distribution of the species. Our results suggest the distribution of wild pigs in the U.S. was most strongly limited by cold temperatures and availability of water, and that they were most likely to occur where potential home ranges had higher habitat heterogeneity, providing access to multiple key resources including water, forage, and cover. High probability of occurrence was also associated with frequent high temperatures, up to a high threshold. However, this pattern is driven by pigs' historic distribution in warm climates of the southern U.S. Further study of pigs' ability to persist in cold northern climates is needed to better understand whether low temperatures actually limit their distribution. Our model highlights areas at risk of invasion as those with habitat conditions similar to those found in pigs' current range that are also near current populations. This study provides a macro-scale approach to generalist species distribution modeling that is applicable to other generalist and invasive species. PMID:26267266
Photometric redshifts and quasar probabilities from a single, data-driven generative model
Bovy, Jo; Myers, Adam D.; Hennawi, Joseph F.; Hogg, David W.; McMahon, Richard G.; Schiminovich, David; Sheldon, Erin S.; Brinkmann, Jon; Schneider, Donald P.; Weaver, Benjamin A.
2012-03-20
We describe a technique for simultaneously classifying and estimating the redshift of quasars. It can separate quasars from stars in arbitrary redshift ranges, estimate full posterior distribution functions for the redshift, and naturally incorporate flux uncertainties, missing data, and multi-wavelength photometry. We build models of quasars in flux-redshift space by applying the extreme deconvolution technique to estimate the underlying density. By integrating this density over redshift, one can obtain quasar flux densities in different redshift ranges. This approach allows for efficient, consistent, and fast classification and photometric redshift estimation. This is achieved by combining the speed obtained by choosing simple analytical forms as the basis of our density model with the flexibility of non-parametric models through the use of many simple components with many parameters. We show that this technique is competitive with the best photometric quasar classification techniques (which are limited to fixed, broad redshift ranges and high signal-to-noise ratio data) and with the best photometric redshift techniques when applied to broadband optical data. We demonstrate that the inclusion of UV and NIR data significantly improves photometric quasar-star separation and essentially resolves all of the redshift degeneracies for quasars inherent to the ugriz filter system, even when included data have a low signal-to-noise ratio. For quasars spectroscopically confirmed by the SDSS, 84% and 97% of the objects with Galaxy Evolution Explorer UV and UKIDSS NIR data have photometric redshifts within 0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about a factor of three improvement over ugriz-only photometric redshifts. Our code to calculate quasar probabilities and redshift probability distributions is publicly available.
Compound nucleus formation probability PCN defined within the dynamical cluster-decay model
NASA Astrophysics Data System (ADS)
Chopra, Sahila; Kaur, Arshdeep; Gupta, Raj K.
2015-01-01
Within the dynamical cluster-decay model (DCM), the compound nucleus fusion/formation probability PCN is defined for the first time, and its variation with CN excitation energy E* and fissility parameter χ is studied. In the DCM, the (total) fusion cross section σfusion is the sum of the compound nucleus (CN) and noncompound nucleus (nCN) decay processes, each calculated as a dynamical fragmentation process. The CN cross section σCN comprises the evaporation residues (ER) and fusion-fission (ff) contributions, including the intermediate mass fragments (IMFs), each calculated for all contributing decay fragments (A1, A2) in terms of their formation and barrier penetration probabilities P0 and P. The nCN cross section σnCN is determined as the quasi-fission (qf) process, where P0=1 and P is calculated for the entrance-channel nuclei. Calculations are presented for six different target-projectile combinations, from CN mass A~100 to superheavy, at various center-of-mass energies, with the effects of deformations and orientations of the nuclei included. An interesting result is that PCN=1 for complete fusion, but PCN<1 or ≪1 when the nCN contribution is present, depending strongly on both E* and χ.
Fixation probability and the crossing time in the Wright-Fisher multiple alleles model
NASA Astrophysics Data System (ADS)
Gill, Wonpyong
2009-08-01
The fixation probability and crossing time in the Wright-Fisher multiple alleles model, which describes a finite haploid population, were calculated by switching on an asymmetric sharply-peaked landscape with a positive asymmetric parameter, r, such that the reversal allele of the optimal allele has higher fitness than the optimal allele. The fixation probability, which was evaluated as the ratio of the first arrival time at the reversal allele to the origination time, was double the selective advantage of the reversal allele compared with the optimal allele in the strong selection region, where the fitness parameter, k, is much larger than the critical fitness parameter, kc. The crossing time in a finite population for r>0 and k
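A hedged sketch of the population-genetic setting (a plain constant selective advantage, not the paper's asymmetric sharply-peaked landscape): in a haploid Wright-Fisher population, a single mutant with selective advantage s fixes with probability close to 2s when N·s ≫ 1, which is the "double the selective advantage" behaviour quoted above. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, s, trials = 1000, 0.05, 2000
fixed = 0
for _ in range(trials):
    i = 1                                           # one mutant copy
    while 0 < i < N:
        p = i * (1 + s) / (i * (1 + s) + (N - i))   # selection
        i = rng.binomial(N, p)                      # drift (resampling)
    fixed += (i == N)
print(fixed / trials)  # ≈ 2*s
```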
Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv
2012-01-01
This paper presents the probability-density-based gradient projection (GP) of the null space of the Jacobian for a 25-degree-of-freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and maximize the similarity between predicted inverse-kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace, with the number of increments in each direction (x, y, and z) varied from 1 to 20. Performance of the method was evaluated by finding the root mean squared (RMS) error between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets of subjects included in and excluded from the density function. The performance of the GP method for subjects included in and excluded from the density function was evaluated to test the robustness of the method. Accuracy of the GP method varied with the incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error of included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.
LaBudde, Robert A; Harnly, James M
2012-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. This report describes the development and validation studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those for the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts across botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single-collaborator and multicollaborative study examples are given.
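Since the POI is simply a binomial proportion, its point estimate and an interval can be computed directly. A minimal sketch using the Wilson score interval (one common choice for binomial rates; the report's exact interval procedure may differ):

```python
from math import sqrt

def poi(successes, n, z=1.96):
    """Probability of identification: observed proportion of replicates
    identified, with a Wilson score interval for the binomial rate."""
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return p, centre - half, centre + half

print(poi(18, 20))  # POI = 0.9 with its 95% interval
```

Plotting POI against analyte concentration or nontarget fraction gives the response curves the report describes for qualitative methods.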
NASA Astrophysics Data System (ADS)
Musho, Matthew K.; Kozak, John J.
1984-10-01
A method is presented for calculating exactly the relative width (σ²)^{1/2}/
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point-process model can be described by the steady rise of a state variable from the ground state to a failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
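The BPT distribution is the inverse Gaussian, so it can be sketched with SciPy's `invgauss`: a BPT with mean μ and aperiodicity α corresponds to shape `alpha**2` and scale `mean / alpha**2` in SciPy's parameterization (an assumption worth checking against your SciPy version's docs). The μ = 25 yr value below is illustrative, not from the paper.

```python
from scipy.stats import invgauss

def bpt(mean, alpha):
    """Brownian passage time distribution with mean `mean` and
    aperiodicity `alpha` (coefficient of variation), via the inverse
    Gaussian: shape = alpha**2, scale = mean / alpha**2."""
    return invgauss(alpha**2, scale=mean / alpha**2)

def hazard(dist, t):
    """Instantaneous failure rate of survivors."""
    return dist.pdf(t) / dist.sf(t)

d = bpt(mean=25.0, alpha=0.5)
print(d.mean(), d.std() / d.mean())   # 25.0 and 0.5
print(hazard(d, 50.0))                # near ~2/mean, as quoted above
```

For α = 0.5 the long-time hazard asymptote 1/(2α²μ) equals 2/μ, matching the abstract's statement.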
Kukla, G.; Gavin, J.
1994-05-01
This report was prepared at the Lamont-Doherty Geological Observatory of Columbia University at Palisades, New York, under subcontract to Pacific Northwest Laboratory (PNL). It is part of a larger project of global climate studies that supports the site characterization work required for the selection of a potential high-level nuclear waste repository, and forms part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work under the PASS Program is currently focusing on the proposed site at Yucca Mountain, Nevada, and is under the overall direction of the Yucca Mountain Project Office, US Department of Energy, Las Vegas, Nevada. The final results of the PNL project will provide input to global atmospheric models designed to test specific climate scenarios, which will be used in the site-specific modeling work of others. The primary purpose of the data bases compiled and of the astronomic predictive models is to aid in the estimation of the probabilities of future climate states. The results will be used by two other teams working on the global climate study under contract to PNL, located at the University of Maine in Orono, Maine, and the Applied Research Corporation in College Station, Texas. This report presents the results of the third year's work on the global climate change models and the data bases describing past climates.
Ando, Tomohiro; Imoto, Seiya; Miyano, Satoru
2004-01-01
One important application of microarray gene expression data is to study the relationship between the clinical phenotype of cancer patients and gene expression profiles on the whole-genome scale. The clinical phenotype includes several different types of cancers, survival times, relapse times, drug responses, and so on. For situations in which subtypes of cancer have not been previously identified or known to exist, we develop a new kernel mixture modeling method that simultaneously identifies the subtype of cancer, predicts the probabilities of both cancer type and patient survival, and detects a set of marker genes on which to base a diagnosis. The proposed method is successfully applied to real data analyses and simulation studies.
A generative probability model of joint label fusion for multi-atlas based brain segmentation
Wu, Guorong; Wang, Qian; Zhang, Daoqiang; Nie, Feiping; Huang, Heng; Shen, Dinggang
2013-01-01
Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on simple patch similarity, and thus do not necessarily provide an optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, with the goal of labeling each point in the target image by the most representative atlas patches that also show the largest unanimity in labeling the underlying point correctly. Specifically, a sparsity constraint is imposed upon the label fusion weights in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risk of including misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and the labeling consensus among atlases. The patch dependencies are further recursively updated based on the latest labeling results to correct possible labeling errors, which falls within the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on whole-brain parcellation and hippocampus segmentation. Promising labeling results have been achieved in comparison with conventional patch-based labeling methods.
I show that a conditional probability analysis using a stressor-response model based on a logistic regression provides a useful approach for developing candidate water quality criteria from empirical data, such as the Maryland Biological Streams Survey (MBSS) data.
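The stressor-response approach this record describes can be sketched as a logistic regression whose fitted curve gives the conditional probability of impairment at each stressor level. The data values, the 0.5 cut-point for the candidate criterion, and the optimizer are illustrative assumptions, not MBSS values.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stressor levels and binary biological responses
# (1 = impaired); illustrative values only, not MBSS data.
x = np.array([0.2, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0, 2.3, 2.6, 2.9])
y = np.array([0,   0,   0,   0,   1,   0,   1,   1,   1,   1])

def neg_log_lik(beta):
    """Negative log-likelihood of the logistic stressor-response model."""
    p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))
    p = np.clip(p, 1e-9, 1.0 - 1e-9)      # guard the logarithms
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1.0 - p))

b0, b1 = minimize(neg_log_lik, x0=[0.0, 0.0]).x

def p_impaired(stressor):
    """Conditional probability of impairment given a stressor level."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * stressor)))

# A candidate criterion: the stressor level where P(impairment) crosses 0.5
criterion = -b0 / b1
```

The candidate criterion could equally be set at a different risk level, e.g. the stressor level where the fitted probability reaches 0.2.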
Modeling Longitudinal Data Containing Non-Normal Within Subject Errors
NASA Technical Reports Server (NTRS)
Feiveson, Alan; Glenn, Nancy L.
2013-01-01
The mission of the National Aeronautics and Space Administration's (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and come from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes, and possibly treatments, are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) by Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, and then presents statistical evidence that verifies which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including applications in the sciences, technology, engineering, and mathematics fields.
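For skewed within-subject errors of the kind described above, a conditional-median (quantile) regression can be fit directly by minimizing the check (pinball) loss. The sketch below uses simulated data with right-skewed errors and, for brevity, ignores the random-effects structure that lqmm adds.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# 12 simulated subjects with 5 follow-up times each; errors are right-skewed
# (exponential), so normal-theory mean regression fits poorly.
t = np.tile(np.arange(5.0), 12)
err = rng.exponential(1.0, size=t.size) - 1.0
y = 2.0 + 0.5 * t + err                  # true slope of the median line: 0.5

def pinball(beta, q=0.5):
    """Check (pinball) loss; its minimizer estimates the q-th conditional quantile."""
    r = y - (beta[0] + beta[1] * t)
    return np.sum(np.maximum(q * r, (q - 1.0) * r))

# q = 0.5 gives median regression; other q trace out the response distribution
b0, b1 = minimize(pinball, x0=[0.0, 0.0], method="Nelder-Mead").x
```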
Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2
MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.
1999-11-01
This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model (exponential or Weibull) is fit.
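The threshold-and-shift tail fit described above can be sketched with standard tools: fix a lower bound, keep only the exceedances, and fit a shifted exponential or Weibull above it. The synthetic data and the 70th-percentile threshold are assumptions for illustration, not FITS internals.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = 3.0 * rng.weibull(1.5, size=2000)   # synthetic response peaks

x_low = np.quantile(data, 0.7)             # user-supplied lower-bound threshold
tail = data[data > x_low]

# Shifted exponential above the threshold: (X - x_low) ~ Exp(scale)
scale_exp = np.mean(tail - x_low)

# Shifted Weibull with its location pinned at the threshold
shape_w, loc_w, scale_w = stats.weibull_min.fit(tail, floc=x_low)

def p_exceed(z):
    """Exceedance probability of level z under the shifted exponential model."""
    return np.exp(-(z - x_low) / scale_exp)
```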
Søvik, Aste; Malinen, Eirik; Bruland, Øyvind S; Bentzen, Søren M; Olsen, Dag Rune
2007-01-21
Tumour hypoxia is a known cause of clinical resistance to radiation therapy. The purpose of this work was to model the effects on tumour control probability (TCP) of selectively boosting the dose to hypoxic regions in a tumour, while keeping the mean tumour dose constant. A tumour model with a continuous oxygen distribution, incorporating pO(2) histograms published for head and neck patients, was developed. Temporal and spatial variations in the oxygen distribution, non-uniform cell density and cell proliferation during treatment were included in the tumour modelling. Non-uniform dose prescriptions were made based on a segmentation of the tumours into four compartments. The main findings were: (1) Dose redistribution considerably improved TCP for all tumours. (2) The effect on TCP depended on the degree of reoxygenation during treatment, with a maximum relative increase in TCP for tumours with poor or no reoxygenation. (3) Acute hypoxia reduced TCP moderately, while underdosing chronic hypoxic cells gave large reductions in TCP. (4) Restricted dose redistribution still gave a substantial increase in TCP as compared to uniform dose boosts. In conclusion, redistributing dose according to tumour oxygenation status might increase TCP when the tumour response to radiotherapy is limited by chronic hypoxia. This could potentially improve treatment outcome in a subpopulation of patients who respond poorly to conventional radiotherapy. PMID:17202629
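The paper's tumour model is far richer (continuous pO2 histograms, reoxygenation, proliferation), but the core effect of underdosing hypoxic cells can be sketched with a minimal linear-quadratic Poisson TCP in which hypoxia divides the effective dose by an oxygen enhancement ratio (OER). All parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Illustrative radiobiological parameters (not from the paper)
alpha, beta = 0.3, 0.03        # LQ parameters, Gy^-1 and Gy^-2
n_frac = 30                    # number of fractions
n_cells = 1e7                  # initial clonogen number

def tcp(dose_per_frac, oer):
    """Poisson TCP for cells irradiated at a given oxygen enhancement ratio."""
    d_eff = dose_per_frac / oer                       # hypoxia reduces effective dose
    sf = np.exp(-alpha * d_eff - beta * d_eff ** 2)   # LQ survival per fraction
    survivors = n_cells * sf ** n_frac
    return np.exp(-survivors)                         # P(zero surviving clonogens)

tcp_oxic = tcp(2.0, 1.0)       # well-oxygenated cells, 2 Gy per fraction
tcp_hypoxic = tcp(2.0, 2.0)    # chronically hypoxic cells, same dose
tcp_boosted = tcp(3.2, 2.0)    # hypoxic cells with a 60% dose boost
```

Even this toy model reproduces the qualitative finding: underdosed chronically hypoxic cells collapse TCP, and selectively boosting their dose recovers much of it.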
Multivariate Models for Normal and Binary Responses in Intervention Studies
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen
2016-01-01
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
Stacey, W.M.
1992-12-01
A new computational model for neutral particle transport in the outer regions of a diverted tokamak plasma chamber is presented. The model is based on the calculation of transmission and escape probabilities using first-flight integral transport theory and the balancing of fluxes across the surfaces bounding the various regions. The geometrical complexity of the problem is included in precomputed probabilities which depend only on the mean free path of the region.
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-07-01
The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton using five-bit demons near the infinite-lattice critical temperature, with linear dimensions L = 4, 6, 8, 10. The order parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.
NASA Astrophysics Data System (ADS)
Lee, T. S.; Yoon, S.; Jeong, C.
2012-12-01
The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and parameter estimation techniques. A number of PDMs have been developed to describe the probability distribution of the hydrological variables. For each of the developed PDMs, estimated parameters are provided based on alternative estimation techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear function of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than the other methods. However, the ML technique is more laborious than the other methods because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. In the meantime, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear and stochastic, dynamic, nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu searches, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search so that intricate derivative information is unnecessary. Therefore, the meta-heuristic approaches have been shown to be a useful strategy to solve optimization problems in hydrology. A number of studies focus on using meta-heuristic approaches for estimation of hydrological variables with parameter estimation of PDMs. Applied meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as a harmony search (HS). The performance of the HS is compared to the
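As a baseline for the comparison the abstract sets up, the method of moments and an iterative maximum-likelihood fit for a Gumbel (EV1) annual-maximum model can be contrasted directly. The synthetic record below stands in for observed hydrological data; the true parameters are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
loc_true, scale_true = 50.0, 10.0
amax = stats.gumbel_r.rvs(loc_true, scale_true, size=500, random_state=rng)

# Method of moments for the Gumbel: scale = sqrt(6) s / pi, loc = mean - gamma * scale
euler_gamma = 0.5772156649
scale_mom = np.sqrt(6.0) * np.std(amax, ddof=1) / np.pi
loc_mom = np.mean(amax) - euler_gamma * scale_mom

# Maximum likelihood (an iterative numerical solution inside scipy)
loc_ml, scale_ml = stats.gumbel_r.fit(amax)

# T-year design quantile: x_T = loc - scale * ln(-ln(1 - 1/T))
T = 100.0
x_100 = loc_ml - scale_ml * np.log(-np.log(1.0 - 1.0 / T))
```

A harmony search, or any other meta-heuristic, would instead minimize the same negative log-likelihood by stochastic search, which is attractive when no closed form or gradient is available.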
Kausar, A S M Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan
2014-01-01
Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving optimum coverage, which is a key part of designing a wireless network. In this paper, an accelerated technique of three-dimensional ray tracing is presented, in which rough surface scattering is included to make the ray tracing more accurate. Here, the rough surface scattering is represented by microfacets, which make it possible to compute the scattered field in all possible directions. New optimization techniques, such as dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and to make the ray tracing technique more efficient. In conjunction with the ray tracing technique, a probability based coverage optimization algorithm is incorporated to form a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects using the DQS technique and by decreasing the ray-object intersection time using the COF technique. The coverage optimization algorithm, in turn, is based on probability theory and finds the minimum number of transmitters and their corresponding positions required to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve upon those of existing algorithms. For verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by experimental results. PMID:25202733
NASA Astrophysics Data System (ADS)
Xu, L.; Schull, M. A.; Samanta, A.; Myneni, R. B.; Knyazikhin, Y.
2010-12-01
The concept of canopy spectral invariants expresses the observation that simple algebraic combinations of leaf and canopy spectral reflectance become wavelength independent and determine two canopy-structure-specific variables: the recollision and escape probabilities. These variables specify an accurate relationship between the spectral response of a vegetation canopy to incident solar radiation at the leaf and the canopy scale. They are sensitive to important structural features of the canopy such as forest cover, tree density, leaf area index, crown geometry, forest type, and stand age. The canopy spectral invariant behavior is a very strong effect clearly seen in optical remote sensing data. The relative simplicity of retrieving the spectral invariants, however, is accompanied by considerable difficulties in their interpretation due to the lack of models for these parameters. We use the stochastic radiative transfer equation to relate the spectral invariants to the 3D canopy structure. The stochastic radiative transfer model treats the vegetation canopy as a stochastic medium. It expresses the 3D spatial correlation with the use of the pair correlation function, which plays a key role in measuring the spatial correlation of the 3D canopy structure over a wide range of scales. Analyses of data ranging from a simulated single bush to comprehensive forest canopies are presented for both the passive and active (lidar) remote sensing domains.
Yu Meiling; Xu Mingmei; Liu Lianshou; Liu Zhengyou
2009-12-15
The quantitative dependence of the quark-gluon plasma (QGP) formation probability (P_QGP) on the centrality of Au-Au collisions is studied using a bond percolation model. P_QGP versus the maximum distance S_max for a bond to form is calculated from the model for various nuclei, and the P_QGP at different centralities of Au-Au collisions for a given S_max are obtained therefrom. The experimental data of the nuclear modification factor R_AA(p_T) for the most central Au-Au collisions at √(s_NN) = 200 and 130 GeV are utilized to transform S_max to √(s_NN). The P_QGP for different centralities of Au-Au collisions at these two energies are thus obtained, which is useful for correctly understanding the centrality dependence of the experimental data.
Model assisted probability of detection for a guided waves based SHM technique
NASA Astrophysics Data System (ADS)
Memmolo, V.; Ricci, F.; Maio, L.; Boffa, N. D.; Monaco, E.
2016-04-01
Guided wave (GW) Structural Health Monitoring (SHM) makes it possible to assess the health of aerostructures thanks to its great sensitivity to the appearance of delaminations and/or debondings. Due to the several complexities affecting wave propagation in composites, an efficient GW SHM system requires effective quantification combined with a rigorous statistical evaluation procedure. The Probability of Detection (POD) approach is a commonly accepted measurement method to quantify NDI results, and it can be effectively extended to an SHM context. However, it requires a very complex setup arrangement and many coupons. When a rigorous correlation with measurements is adopted, Model Assisted POD (MAPOD) is an efficient alternative to classic methods. This paper is concerned with the identification of small emerging delaminations in composite structural components. An ultrasonic GW tomography method focused on impact damage detection in composite plate-like structures, recently developed by the authors, is investigated, establishing the basis for a more complex MAPOD analysis. Experimental tests carried out on a typical wing composite structure demonstrated the effectiveness of the modeling approach in detecting damage with the tomographic algorithm. Environmental disturbances, which affect signal waveforms and consequently damage detection, are considered by simulating mathematical noise in the modeling stage. A statistical method is used for an effective decision-making procedure. A Damage Index approach is implemented as the metric to interpret the signals collected from a distributed sensor network, and a subsequent graphic interpolation is carried out to reconstruct the damage appearance. A model validation and first reliability assessment results are provided, in view of system performance quantification and its optimization.
A Mechanistic Beta-Binomial Probability Model for mRNA Sequencing Data
Smith, Gregory R.; Birtwistle, Marc R.
2016-01-01
A main application for mRNA sequencing (mRNAseq) is determining lists of differentially-expressed genes (DEGs) between two or more conditions. Several software packages exist to produce DEGs from mRNAseq data, but they typically yield different DEGs, sometimes markedly so. The underlying probability model used to describe mRNAseq data is central to deriving DEGs, and not surprisingly most softwares use different models and assumptions to analyze mRNAseq data. Here, we propose a mechanistic justification to model mRNAseq as a binomial process, with data from technical replicates given by a binomial distribution, and data from biological replicates well-described by a beta-binomial distribution. We demonstrate good agreement of this model with two large datasets. We show that an emergent feature of the beta-binomial distribution, given parameter regimes typical for mRNAseq experiments, is the well-known quadratic polynomial scaling of variance with the mean. The so-called dispersion parameter controls this scaling, and our analysis suggests that the dispersion parameter is a continually decreasing function of the mean, as opposed to current approaches that impose an asymptotic value to the dispersion parameter at moderate mean read counts. We show how this leads to current approaches overestimating variance for moderately to highly expressed genes, which inflates false negative rates. Describing mRNAseq data with a beta-binomial distribution thus may be preferred since its parameters are relatable to the mechanistic underpinnings of the technique and may improve the consistency of DEG analysis across softwares, particularly for moderately to highly expressed genes. PMID:27326762
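The moment structure that drives the argument above can be checked numerically: a beta-binomial with the same mean as a binomial is strictly overdispersed, with the closed-form variance given below. The read count n and the beta parameters are illustrative assumptions, not values from the paper's datasets.

```python
from scipy import stats

n, a, b = 1000, 2.0, 98.0             # illustrative parameters
bb = stats.betabinom(n, a, b)         # biological-replicate model
bn = stats.binom(n, a / (a + b))      # technical-replicate model with the same mean

# Closed-form beta-binomial moments
mean_bb = n * a / (a + b)
var_bb = n * a * b * (a + b + n) / ((a + b) ** 2 * (a + b + 1))
```

Here bb.var() exceeds bn.var() by an order of magnitude, the extra spread that a purely binomial model of biological replicates would miss.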
A generic probability based model to derive regional patterns of crops in time and space
NASA Astrophysics Data System (ADS)
Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan
2015-04-01
Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influencing soil erosion, and substantially contributing to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to site conditions, economic boundary settings, and the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and a given year depends on (a) the suitability of the land for the cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and (b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g., a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated on the basis of the statistics reported by the joint EU/CAPRI database. The assessment is given at NUTS2 level using per cent bias as a measure, with a threshold of 15% as minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error prone as crops that occupy only minor parts of the unit. However, roughly 40% still show an absolute per cent bias above the 15% threshold. This
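The per cent bias measure used for the NUTS2-level assessment is a one-liner. The function below follows the usual PBIAS definition, which is an assumption here since the abstract does not spell out the formula.

```python
import numpy as np

def percent_bias(simulated, observed):
    """Per cent bias: 100 * sum(sim - obs) / sum(obs)."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * np.sum(simulated - observed) / np.sum(observed)

# A crop passes the stated quality threshold if |PBIAS| <= 15%
passes = abs(percent_bias([120.0, 110.0], [100.0, 100.0])) <= 15.0
```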
Comparing normal modes across different models and scales: Hessian reduction versus coarse-graining.
Ghysels, An; Miller, Benjamin T; Pickard, Frank C; Brooks, Bernard R
2012-10-30
Dimension reduction is often necessary when attempting to reach longer length and time scales in molecular simulations. It is realized by constraining degrees of freedom or by coarse-graining the system. When evaluating the accuracy of a dimensional reduction, there is a practical challenge: the models yield vectors with different lengths, making a comparison by calculating their dot product impossible. This article investigates mapping procedures for normal mode analysis. We first review a horizontal mapping procedure for the reduced Hessian techniques, which projects out degrees of freedom. We then design a vertical mapping procedure for the "implosion" of the all-atom (AA) Hessian to a coarse-grained scale that is based upon vibrational subsystem analysis. This latter method derives both effective force constants and an effective kinetic tensor. Next, a series of metrics is presented for comparison across different scales, where special attention is given to proper mass-weighting. The dimension-dependent metrics, which require prior mapping for proper evaluation, are frequencies, overlap of normal mode vectors, probability similarity, Hessian similarity, collectivity of modes, and thermal fluctuations. The dimension-independent metrics are shape derivatives, elastic modulus, vibrational free energy differences, heat capacity, and projection on a predefined basis set. The power of these metrics to distinguish between reasonable and unreasonable models is tested on a toy alpha helix system and a globular protein; both are represented at several scales: the AA scale, a Gō-like model, a canonical elastic network model, and a network model with intentionally unphysical force constants.
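One of the metrics discussed above, the overlap of normal mode vectors, only makes sense after proper mass-weighting, because the eigenvectors of the mass-weighted Hessian are orthonormal under the plain dot product. A minimal sketch on a toy three-atom chain; the masses and force constant are arbitrary assumptions.

```python
import numpy as np

m = np.array([12.0, 1.0, 12.0])              # toy masses
k = 100.0                                    # spring constant
# Hessian of a linear chain with two identical springs
H = k * np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Mass-weighted Hessian F = M^(-1/2) H M^(-1/2)
Minv_sqrt = np.diag(1.0 / np.sqrt(m))
F = Minv_sqrt @ H @ Minv_sqrt

w2, modes = np.linalg.eigh(F)                # squared frequencies, mode vectors
overlap = modes[:, 1] @ modes[:, 2]          # overlap of two mass-weighted modes
```

The lowest eigenvalue is the zero-frequency translation mode; comparing modes across models of different size additionally requires the horizontal or vertical mapping the article develops, since the vectors then have different lengths.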
ERIC Educational Resources Information Center
Neel, John H.
Induced probabilities have been largely ignored by educational researchers. Simply stated, if a new random variable is defined in terms of a first random variable, then the induced probability is the probability or density of the new random variable, which can be found by summation or integration over the appropriate domains of the original random…
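The definition can be made concrete with a one-line transformation: if X ~ U(0,1) and Y = X², the induced density is f_Y(y) = 1/(2√y), which a Monte Carlo estimate reproduces. The window width and grid points below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, size=200_000)   # original random variable X ~ U(0, 1)
y = x ** 2                                # new random variable Y = g(X)

# Induced density by change of variables: f_Y(y) = f_X(sqrt(y)) * |d sqrt(y)/dy|
#                                               = 1 / (2 sqrt(y))  on (0, 1)
grid = np.array([0.1, 0.25, 0.5])
analytic = 1.0 / (2.0 * np.sqrt(grid))

# Monte Carlo density estimate with a narrow window around each grid point
h = 0.01
empirical = np.array([np.mean(np.abs(y - g) < h) / (2.0 * h) for g in grid])
```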
NASA Astrophysics Data System (ADS)
Peng, Guanghan; Liu, Changqing; Tuo, Manxian
2015-10-01
In this paper, a new lattice model is proposed that incorporates a traffic interruption probability term in a two-lane traffic system. The linear stability condition and the mKdV equation are derived from linear stability analysis and nonlinear analysis, respectively, by introducing the traffic interruption probability of the optimal current for a two-lane traffic freeway. Numerical simulation shows that the traffic interruption probability with a high reaction coefficient can efficiently improve the stability of two-lane traffic flow when traffic interruption occurs with lane changing.
A Tool for Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.; Guzzetti, Fausto
2014-05-01
Triggers such as earthquakes or heavy rainfall can result in hundreds to thousands of landslides occurring across a region within a short space of time. These landslides can in turn result in blockages across the road network, impacting how people move about a region. Here, we show the development and application of a semi-stochastic model to simulate how landslides intersect with road networks during a triggered landslide event. This was performed by creating 'synthetic' triggered landslide inventory maps and overlaying these with a road network map to identify where road blockages occur. Our landslide-road model has been applied to two regions: (i) the Collazzone basin (79 km²) in Central Italy where 422 landslides were triggered by rapid snowmelt in January 1997, (ii) the Oat Mountain quadrangle (155 km²) in California, USA, where 1,350 landslides were triggered by the Northridge Earthquake (M = 6.7) in January 1994. For both regions, detailed landslide inventory maps for the triggered events were available, in addition to maps of landslide susceptibility and road networks of primary, secondary and tertiary roads. To create 'synthetic' landslide inventory maps, landslide areas (A_L) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power law decay of about -2.4 for medium and large values of A_L and an exponential rollover for small values of A_L. The number of landslide areas selected was based on the observed density of landslides (number of landslides km⁻²) in the triggered event inventories. Landslide shapes were approximated as ellipses, where the ratio of the major and minor axes varies with A_L. Landslides were then dropped over the region semi-stochastically, conditioned by a landslide susceptibility map, resulting in a synthetic landslide inventory map. The originally available landslide susceptibility maps did not take into account susceptibility changes in the immediate vicinity of roads, therefore
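The landslide-area sampling step can be sketched with a three-parameter inverse-gamma distribution: shape ρ = 1.4 gives the ≈ -2.4 power-law decay of the density for medium and large areas, while the scale a and shift s (placeholder values below, not the published fitted ones) control the exponential rollover for small areas.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Three-parameter inverse-gamma for landslide areas (km^2); rho = 1.4 yields
# a pdf decaying as area^-(rho + 1) = area^-2.4 in the tail.  The values of
# a and s are illustrative placeholders, not the fitted parameters.
rho, a, s = 1.4, 1.3e-3, -1.3e-4

areas = stats.invgamma.rvs(rho, loc=s, scale=a, size=5000, random_state=rng)
```

Each sampled area would then be turned into an ellipse and dropped semi-stochastically over the region according to the susceptibility map.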
Gomberg, J.; Felzer, K.
2008-01-01
We have used observations from Felzer and Brodsky (2006) of the variation of linear aftershock densities (i.e., aftershocks per unit length) with the magnitude of and distance from the main shock fault to derive constraints on how the probability of a main shock triggering a single aftershock at a point, P(r, D), varies as a function of distance, r, and main shock rupture dimension, D. We find that P(r, D) becomes independent of D as the triggering fault is approached. When r ≫ D, P(r, D) scales as D^m where m ≈ 2 and decays with distance approximately as r^(-n) with n = 2, with a possible change to r^(-(n-1)) at r > h, where h is the closest distance between the fault and the boundaries of the seismogenic zone. These constraints may be used to test hypotheses about the types of deformations and mechanisms that trigger aftershocks. We illustrate this using dynamic deformations (i.e., radiated seismic waves) and a posited proportionality with P(r, D). Deformation characteristics examined include peak displacements, peak accelerations and velocities (proportional to strain rates and strains, respectively), and two measures that account for cumulative deformations. Our model indicates that either peak strains alone or strain rates averaged over the duration of rupture may be responsible for aftershock triggering.
Terrestrial Food-Chain Model for Normal Operations.
1991-10-01
Version 00 TERFOC-N calculates radiation doses to the public due to atmospheric releases of radionuclides in normal operations of nuclear facilities. The code estimates the highest individual dose and the collective dose from four exposure pathways: internal doses from ingestion and inhalation, and external doses from cloudshine and groundshine.
Bayesian modeling of censored partial linear models using scale-mixtures of normal distributions
NASA Astrophysics Data System (ADS)
Castro, Luis M.; Lachos, Victor H.; Ferreira, Guillermo P.; Arellano-Valle, Reinaldo B.
2012-10-01
Regression models where the dependent variable is censored (limited) are commonly considered in statistical analysis. In particular, the case of truncation to the left of zero under a normality assumption for the error terms is studied in detail by [1] in the well-known Tobit model. In the present article, this typical censored regression model is extended by considering a partial linear model with errors belonging to the class of scale mixtures of normal distributions. We achieve fully Bayesian inference by adopting a Metropolis algorithm within a Gibbs sampler. The likelihood function is utilized to compute not only some Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measures. We evaluate the performance of the proposed methods with simulated data. In addition, we present an application aimed at identifying which types of variables affect the income of housewives.
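Before the partial-linear, scale-mixture extension, the baseline Tobit model is simply maximum likelihood with a censored-normal likelihood. A minimal sketch on simulated data follows; the true parameters and the direct optimization (rather than the article's Bayesian Gibbs/Metropolis scheme) are assumptions for illustration.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
y_star = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=n)  # latent response
y = np.maximum(y_star, 0.0)                             # observed, censored at zero

def neg_log_lik(theta):
    """Tobit negative log-likelihood with normal errors, censoring at zero."""
    b0, b1, log_s = theta
    s = np.exp(log_s)                     # keep the scale positive
    mu = b0 + b1 * x
    ll = np.where(y <= 0.0,
                  stats.norm.logcdf(-mu / s),                   # P(y* <= 0)
                  stats.norm.logpdf((y - mu) / s) - np.log(s))  # density of y*
    return -np.sum(ll)

fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0, 0.0])
b0_hat, b1_hat, s_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```

Replacing the normal by a scale mixture (Student-t, slash, contaminated normal) changes only the two likelihood terms above.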
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Gotwols, B. L.
1994-05-01
Data ranging from L to Ka band were collected from radars mounted on the Forschungsplatform Nordsee during the Synthetic Aperture Radar and X Band Ocean Nonlinearities experiment in November 1990. In this paper we examine, for each of these radars, the total amplitude probability density function (pdf) of the field backscattered from the ocean surface. These pdfs are compared with predictions from a simulation based on our time-dependent scattering model. We find that for lower incidence angles (˜20°), the agreement between the measured and computed pdfs is generally quite good. At these small incidence angles the behavior of the pdfs is determined by the local tilting of the long-wave surface. No modulation of the shortwave spectral density over the long-wave phase is needed to obtain good agreement. For larger incidence angles (˜45°) the agreement between the measured and predicted pdfs is not so good; the major discrepancy is that the tails of the predicted pdfs are somewhat too short. In this study we have attempted to account for the hydrodynamic modulation of the short-scale waves using an approximate procedure based on the assumption that the hydrodynamic modulation is due to the interaction of the short-scale waves with the orbital velocity of the long waves. With this procedure we are able to obtain agreement between the measured and computed pdfs at 45° incidence, although the strength of the hydrodynamic modulation needs to be adjusted. Our simulation procedure will be discussed in some detail. Also, we will show how our results are related to more conventional measurements of so-called modulation transfer functions and give some arguments as to why in many cases the correlation between the backscattered power and the long-wave surface velocity can be rather low.
Normalization and Implementation of Three Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.
2016-01-01
Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
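The normalization parameters discussed here follow a standard pattern; as an illustration, the fully normalized convention commonly used with geopotential coefficients multiplies each ALF by the factor below. This is the conventional definition, not a reproduction of the specific Pines, Lear, or Gottlieb formulations.

```python
import math

def alf_norm_factor(n, m):
    """Full normalization factor for the degree-n, order-m associated
    Legendre function, as conventionally paired with normalized
    geopotential coefficients:
        N(n, m) = sqrt((2 - delta_0m) * (2n + 1) * (n - m)! / (n + m)!)
    Multiplying P_{n,m} by N(n, m) keeps magnitudes near unity even at
    high degree and order, avoiding floating-point overflow."""
    delta = 1 if m == 0 else 0
    return math.sqrt((2 - delta) * (2 * n + 1)
                     * math.factorial(n - m) / math.factorial(n + m))
```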
A Probabilistic Model of Student Nurses' Knowledge of Normal Nutrition.
ERIC Educational Resources Information Center
Passmore, David Lynn
1983-01-01
Vocational and technical education researchers need to be aware of the uses and limits of various statistical models. The author reviews the Rasch Model and applies it to results from a nutrition test given to student nurses. (Author)
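For readers unfamiliar with it, the Rasch model assigns each test response a probability driven only by the difference between person ability and item difficulty; a minimal sketch:

```python
import math

def rasch_probability(theta, b):
    """Rasch model: probability that a person with ability theta answers
    an item of difficulty b correctly, P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When ability equals difficulty the probability is exactly 0.5, which is the anchor point used when fitting the model to test results.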
NASA Technical Reports Server (NTRS)
1979-01-01
The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.
A Collision Probability Model of Portal Vein Tumor Thrombus Formation in Hepatocellular Carcinoma
Xiong, Fei
2015-01-01
Hepatocellular carcinoma is one of the most common malignancies worldwide, with a high risk of portal vein tumor thrombus (PVTT). Some promising results have been achieved for venous metastases of hepatocellular carcinoma; however, the etiology of PVTT is largely unknown, and it is unclear why the incidence of PVTT is not proportional to its distance from the carcinoma. We attempted to address this issue using physical concepts and mathematical tools. Finally, we discuss the relationship between the probability of a collision event and the microenvironment of the PVTT. Our formulae suggest that the collision probability can alter the tumor microenvironment by increasing the number of tumor cells. PMID:26131562
ERIC Educational Resources Information Center
Edwards, William F.; Shiflett, Ray C.; Shultz, Harris
2008-01-01
The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…
SELECTION BIAS MODELING USING OBSERVED DATA AUGMENTED WITH IMPUTED RECORD-LEVEL PROBABILITIES
Thompson, Caroline A.; Arah, Onyebuchi A.
2014-01-01
PURPOSE Selection bias is a form of systematic error that can be severe in compromised study designs such as case-control studies with inappropriate selection mechanisms or follow-up studies that suffer from extensive attrition. External adjustment for selection bias is commonly undertaken when such bias is suspected, but the methods used can be overly simplistic, if not unrealistic, and fail to allow for simultaneous adjustment of associations of the exposure and covariates with the outcome, when of interest. Internal adjustment for selection bias via inverse-probability-weighting allows bias parameters to vary with levels of covariates but has only been formalized for longitudinal studies with covariate data on patients up until loss-to-follow-up. METHODS We demonstrate the use of inverse-probability-weighting and externally obtained bias parameters to perform internal adjustment of selection bias in studies lacking covariate data on unobserved participants. RESULTS The ‘true’ or selection-adjusted odds ratio for the association between exposure and outcome was successfully obtained by analyzing only data on those in the selected stratum (i.e. responders), weighted by the inverse probability of their being selected as a function of their observed covariate data. CONCLUSIONS This internal adjustment technique using user-supplied bias parameters and inverse-probability-weighting for selection bias can be applied to any type of observational study. PMID:25175700
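The weighting step described here can be sketched on a 2x2 exposure-outcome table; the function below is a simplified illustration (the record format and names are ours), not the authors' full procedure.

```python
def ipw_odds_ratio(records):
    """Inverse-probability-weighted odds ratio from responder data only.
    records: iterable of (exposed, outcome, p_select) tuples, where
    p_select is the (externally supplied) probability that a person with
    these characteristics was selected/responded. Weighting each
    responder by 1 / p_select rebuilds the pseudo-population that would
    have been observed without selection."""
    cells = {(1, 1): 0.0, (1, 0): 0.0, (0, 1): 0.0, (0, 0): 0.0}
    for exposed, outcome, p_select in records:
        cells[(exposed, outcome)] += 1.0 / p_select
    return (cells[(1, 1)] * cells[(0, 0)]) / (cells[(1, 0)] * cells[(0, 1)])
```

With uniform selection probabilities the weighted odds ratio reduces to the crude one; differential selection shifts it toward the selection-adjusted value.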
Spatial prediction models for the probable biological condition of streams and rivers in the USA
The National Rivers and Streams Assessment (NRSA) is a probability-based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...
Delay or probability discounting in a model of impulsive behavior: effect of alcohol.
Richards, J B; Zhang, L; Mitchell, S H; de Wit, H
1999-03-01
Little is known about the acute effects of drugs of abuse on impulsivity and self-control. In this study, impulsivity was assessed in humans using a computer task that measured delay and probability discounting. Discounting describes how much the value of a reward (or punisher) is decreased when its occurrence is either delayed or uncertain. Twenty-four healthy adult volunteers ingested a moderate dose of ethanol (0.5 or 0.8 g/kg ethanol: n = 12 at each dose) or placebo before completing the discounting task. In the task the participants were given a series of choices between a small, immediate, certain amount of money and $10 that was either delayed (0, 2, 30, 180, or 365 days) or probabilistic (i.e., certainty of receipt was 1.0, .9, .75, .5, or .25). The point at which each individual was indifferent between the smaller immediate or certain reward and the $10 delayed or probabilistic reward was identified using an adjusting-amount procedure. The results indicated that (a) delay and probability discounting were well described by a hyperbolic function; (b) delay and probability discounting were positively correlated within subjects; (c) delay and probability discounting were moderately correlated with personality measures of impulsivity; and (d) alcohol had no effect on discounting. PMID:10220927
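The hyperbolic functions referred to in result (a) have a simple closed form; a minimal sketch, with probability discounting expressed through the odds against receipt, theta = (1 - p) / p:

```python
def hyperbolic_value(amount, delay, k):
    """Hyperbolic delay discounting: V = A / (1 + k * D),
    where k indexes how steeply the reward loses value with delay D."""
    return amount / (1.0 + k * delay)

def probability_value(amount, p, h):
    """Probability discounting with the same hyperbolic form:
    V = A / (1 + h * theta), theta = (1 - p) / p (odds against receipt)."""
    theta = (1.0 - p) / p
    return amount / (1.0 + h * theta)
```

Fitting k (or h) to each participant's indifference points yields the individual discounting rates that the study correlated with impulsivity measures.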
ERIC Educational Resources Information Center
Rasanen, Okko
2011-01-01
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this…
Random forest models for the probable biological condition of streams and rivers in the USA
The National Rivers and Streams Assessment (NRSA) is a probability based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...
A simple derivation of risk-neutral probability in the binomial option pricing model
NASA Astrophysics Data System (ADS)
Orosi, Greg
2015-01-01
The traditional derivation of risk-neutral probability in the binomial option pricing framework used in introductory mathematical finance courses is straightforward, but employs several different concepts and is not algebraically simple. In order to overcome this drawback of the standard approach, we provide an alternative derivation.
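For reference, the standard result that any such derivation arrives at is the one-period risk-neutral probability; a minimal sketch with the usual up-factor u, down-factor d, risk-free rate r, and period length dt:

```python
import math

def risk_neutral_probability(u, d, r, dt):
    """One-period binomial model: the probability p that makes the
    discounted stock price a martingale,
        p = (exp(r * dt) - d) / (u - d).
    No-arbitrage requires d < exp(r * dt) < u, so that 0 < p < 1."""
    return (math.exp(r * dt) - d) / (u - d)
```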
Latent Partially Ordered Classification Models and Normal Mixtures
ERIC Educational Resources Information Center
Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith
2013-01-01
Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…
Ding, Tian; Wang, Jun; Park, Myoung-Su; Hwang, Cheng-An; Oh, Deog-Hwan
2013-02-01
Bacillus cereus is frequently isolated from a variety of foods, including vegetables, dairy products, meats, and other raw and processed foods. The bacterium is capable of producing an enterotoxin and emetic toxin that can cause severe nausea, vomiting, and diarrhea. The objectives of this study were to assess and model the probability of enterotoxin production of B. cereus in a broth model as affected by the broth pH and storage temperature. A three-strain mixture of B. cereus was inoculated in tryptic soy broth adjusted to pH 5.0, 6.0, 7.2, 8.0, and 8.5, and the samples were stored at 15, 20, 25, 30, and 35°C for 24 h. A total of 25 combinations of pH and temperature, each with 10 samples, were tested. The presence of enterotoxin in broth was assayed using a commercial test kit. The probabilities of positive enterotoxin production in 25 treatments were fitted with a logistic regression to develop a probability model to describe the probability of toxin production as a function of pH and temperature. The resulting model showed that the probabilities of enterotoxin production of B. cereus in broth increased as the temperature increased and/or as the broth pH approached 7.0. The model described the experimental data satisfactorily and identified the boundary of pH and temperature for the production of enterotoxin. The model could provide information for assessing the food poisoning risk associated with enterotoxins of B. cereus and for the selection of product pH and storage temperature for foods to reduce the hazards associated with B. cereus.
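A logistic probability model of this general shape can be sketched as follows; the coefficients are illustrative placeholders, not the fitted values from the study, with a quadratic (pH - 7)^2 term standing in for the reported peak near pH 7.0.

```python
import math

def enterotoxin_probability(temp_c, ph, b0=-2.0, b_temp=0.15, b_ph=-1.0):
    """Logistic probability of toxin production as a function of storage
    temperature (degrees C) and broth pH. All coefficient values here are
    hypothetical illustrations: probability rises with temperature
    (b_temp > 0) and falls as pH moves away from 7.0 (b_ph < 0)."""
    z = b0 + b_temp * temp_c + b_ph * (ph - 7.0) ** 2
    return 1.0 / (1.0 + math.exp(-z))
```

With coefficients of this sign pattern, the model reproduces the qualitative finding: warm, near-neutral conditions carry the highest toxin-production probability.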
Technology Transfer Automated Retrieval System (TEKTRAN)
Staphylococcus aureus is a foodborne pathogen widespread in the environment and found in various food products. This pathogen can produce enterotoxins that cause illnesses in humans. The objectives of this study were to develop a probability model of S. aureus enterotoxin production as affected by w...
We show that a conditional probability analysis that utilizes a stressor-response model based on a logistic regression provides a useful approach for developing candidate water quality criterai from empirical data. The critical step in this approach is transforming the response ...
Regional Permafrost Probability Modelling in the northwestern Cordillera, 59°N - 61°N, Canada
NASA Astrophysics Data System (ADS)
Bonnaventure, P. P.; Lewkowicz, A. G.
2010-12-01
High resolution (30 x 30 m) permafrost probability models were created for eight mountainous areas in the Yukon and northernmost British Columbia. Empirical-statistical modelling based on the Basal Temperature of Snow (BTS) method was used to develop spatial relationships. Model inputs include equivalent elevation (a variable that incorporates non-uniform temperature change with elevation), potential incoming solar radiation and slope. Probability relationships between predicted BTS and permafrost presence were developed for each area using late-summer physical observations in pits, or by using year-round ground temperature measurements. A high-resolution spatial model for the region has now been generated based on seven of the area models. Each was applied to the entire region, and their predictions were then blended based on a distance decay function from the model source area. The regional model is challenging to validate independently because there are few boreholes in the region. However, a comparison of results to a recently established inventory of rock glaciers for the Yukon suggests its validity because predicted permafrost probabilities were 0.8 or greater for almost 90% of these landforms. Furthermore, the regional model results have a similar spatial pattern to those modelled independently in the eighth area, although predicted probabilities using the regional model are generally higher. The regional model predicts that permafrost underlies about half of the non-glaciated terrain in the region, with probabilities increasing regionally from south to north and from east to west. Elevation is significant, but not always linked in a straightforward fashion because of weak or inverted trends in permafrost probability below treeline. Above treeline, however, permafrost probabilities increase and approach 1.0 in very high elevation areas throughout the study region. The regional model shows many similarities to previous Canadian permafrost maps (Heginbottom
A Statistical Model for Determining the Probability of Observing Exoplanetary Radio Emissions
NASA Astrophysics Data System (ADS)
Garcia, R.; Knapp, M.; Winterhalter, D.; Majid, W.
2015-12-01
The idea that extrasolar planets should emit radiation in the low-frequency radio regime is a generalization of the observation of decametric and kilometric radio emissions from magnetic planets in our own solar system, yet none of these emissions have been observed. Such radio emissions are a result of the interactions between the host star's magnetized wind and the planet's magnetosphere that accelerate electrons along the field lines, which leads to radio emissions at the electron gyrofrequency. To understand why these emissions had not yet been observed, and to guide in target selection for future detection efforts, we took a statistical approach to determine what the ideal location in parameter space was for these hypothesized exoplanetary radio emissions to be detected. We derived probability distribution functions from current datasets for the observably constrained parameters (such as the radius of the host star), and conducted a review of the literature to construct reasonable probability distribution functions to obtain the unconstrained parameters (such as the magnetic field strength of the exoplanet). We then used Monte Carlo sampling to develop a synthetic population of exoplanetary systems and calculated whether the radio emissions from the systems were detectable depending on the angle of beaming, frequency (above the ionospheric cutoff rate of 10 MHz) and flux density (above 5 mJy) of the emission. From millions of simulations we derived a probability distribution function in parameter space as a function of host star type, orbital radius and planetary or host star radius. The probability distribution function illustrated the optimal parameter values of an exoplanetary system that may make the system's radio emissions detectable to current and currently under development instruments such as the SKA. We found that detection of exoplanetary radio emissions favor planets larger than 5 Earth radii and within 1 AU of their M dwarf host.
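The Monte Carlo filtering step (sample a synthetic system, keep it if the emission clears the 10 MHz ionospheric cutoff and the 5 mJy flux floor) can be sketched as below; the lognormal sampling distributions are illustrative stand-ins, not the distributions derived in the study.

```python
import random

def detection_fraction(n_trials=20000, seed=1):
    """Monte Carlo sketch: fraction of synthetic exoplanetary systems
    whose radio emission clears both the ionospheric cutoff (10 MHz)
    and the assumed sensitivity floor (5 mJy). The lognormal parameters
    are placeholders for the parameter PDFs built in the study."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        freq_mhz = rng.lognormvariate(2.0, 1.0)  # emission frequency, MHz
        flux_mjy = rng.lognormvariate(0.0, 2.0)  # flux density, mJy
        if freq_mhz > 10.0 and flux_mjy > 5.0:
            hits += 1
    return hits / n_trials
```

Binning the accepted samples by host star type, orbital radius, and planetary radius would then recover a detectability map over parameter space.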
2012-01-01
Background Osteoporotic hip fractures represent major cause of disability, loss of quality of life and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture, from BMD measurements. The combination of biomechanical models with clinical studies could better estimate bone strength and supporting the specialists in their decision. Methods A model to assess the probability of fracture, based on the Damage and Fracture Mechanics has been developed, evaluating the mechanical magnitudes involved in the fracture process from clinical BMD measurements. The model is intended for simulating the degenerative process in the skeleton, with the consequent lost of bone mass and hence the decrease of its mechanical resistance which enables the fracture due to different traumatisms. Clinical studies were chosen, both in non-treatment conditions and receiving drug therapy, and fitted to specific patients according their actual BMD measures. The predictive model is applied in a FE simulation of the proximal femur. The fracture zone would be determined according loading scenario (sideway fall, impact, accidental loads, etc.), using the mechanical properties of bone obtained from the evolutionary model corresponding to the considered time. Results BMD evolution in untreated patients and in those under different treatments was analyzed. Evolutionary curves of fracture probability were obtained from the evolution of mechanical damage. The evolutionary curve of the untreated group of patients presented a marked increase of the fracture probability, while the curves of patients under drug treatment showed variable decreased risks, depending on the therapy type. Conclusion The FE model allowed to obtain detailed maps of damage and fracture probability, identifying high-risk local zones at femoral neck and intertrochanteric and subtrochanteric areas, which are the typical locations of osteoporotic hip fractures. The
Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A
2015-05-01
As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care against ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
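In a test-negative design, VE is typically estimated as one minus the odds ratio comparing vaccination odds among test-positive (influenza) and test-negative (non-influenza ARI) patients; a minimal sketch:

```python
def ve_test_negative(vax_pos, unvax_pos, vax_neg, unvax_neg):
    """Vaccine effectiveness from test-negative design counts:
    VE = 1 - OR, where OR compares the vaccination odds of
    influenza-positive patients to those of test-negative controls."""
    odds_ratio = (vax_pos / unvax_pos) / (vax_neg / unvax_neg)
    return 1.0 - odds_ratio
```

Per the authors' result, this estimator is unbiased when vaccination does not affect the probability of non-influenza ARI and care-seeking ratios are equal across illness types.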
Valve, explosive actuated, normally open, pyronetics model 1399
NASA Technical Reports Server (NTRS)
Avalos, E.
1971-01-01
Results of the tests to evaluate the normally open valve, Model 1399, are reported for the following tests: proof pressure, leakage, actuation, disassembly, and burst pressure. It is concluded that the tests demonstrate the structural integrity of the valve.
NASA Astrophysics Data System (ADS)
Tan, Elcin
A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the
Presenting Thin Media Models Affects Women's Choice of Diet or Normal Snacks
ERIC Educational Resources Information Center
Krahe, Barbara; Krause, Christina
2010-01-01
Our study explored the influence of thin- versus normal-size media models and of self-reported restrained eating behavior on women's observed snacking behavior. Fifty female undergraduates saw a set of advertisements for beauty products showing either thin or computer-altered normal-size female models, allegedly as part of a study on effective…
2-D Model for Normal and Sickle Cell Blood Microcirculation
NASA Astrophysics Data System (ADS)
Tekleab, Yonatan; Harris, Wesley
2011-11-01
Sickle cell disease (SCD) is a genetic disorder that alters the red blood cell (RBC) structure and function such that hemoglobin (Hb) cannot effectively bind and release oxygen. Previous computational models have been designed to study the microcirculation for insight into blood disorders such as SCD. Our novel 2-D computational model represents a fast, time efficient method developed to analyze flow dynamics, O2 diffusion, and cell deformation in the microcirculation. The model uses a finite difference, Crank-Nicolson scheme to compute the flow and O2 concentration, and the level set computational method to advect the RBC membrane on a staggered grid. Several sets of initial and boundary conditions were tested. Simulation data indicate a few parameters to be significant in the perturbation of the blood flow and O2 concentration profiles. Specifically, the Hill coefficient, arterial O2 partial pressure, O2 partial pressure at 50% Hb saturation, and cell membrane stiffness are significant factors. Results were found to be consistent with those of Le Floch [2010] and Secomb [2006].
Adamovich, Igor V.
2014-04-15
A three-dimensional, nonperturbative, semiclassical analytic model of vibrational energy transfer in collisions between a rotating diatomic molecule and an atom, and between two rotating diatomic molecules (Forced Harmonic Oscillator–Free Rotation model) has been extended to incorporate rotational relaxation and coupling between vibrational, translational, and rotational energy transfer. The model is based on analysis of semiclassical trajectories of rotating molecules interacting by a repulsive exponential atom-to-atom potential. The model predictions are compared with the results of three-dimensional close-coupled semiclassical trajectory calculations using the same potential energy surface. The comparison demonstrates good agreement between analytic and numerical probabilities of rotational and vibrational energy transfer processes, over a wide range of total collision energies, rotational energies, and impact parameter. The model predicts probabilities of single-quantum and multi-quantum vibrational-rotational transitions and is applicable up to very high collision energies and quantum numbers. Closed-form analytic expressions for these transition probabilities lend themselves to straightforward incorporation into DSMC nonequilibrium flow codes.
A two-stage approach in solving the state probabilities of the multi-queue M/G/1 model
NASA Astrophysics Data System (ADS)
Chen, Mu-Song; Yen, Hao-Wei
2016-04-01
The M/G/1 model is the fundamental basis of the queueing behavior in many network systems. Usually, study of the M/G/1 model is limited by the assumptions of a single queue and infinite capacity. In practice, however, these postulates may not hold, particularly when dealing with many real-world problems. In this paper, a two-stage state-space approach is developed to solve for the state probabilities of the multi-queue, finite-capacity M/G/1 model, i.e. q-M/G/1/Ki with Ki buffers in the ith queue. The state probabilities at departure instants are determined by solving a set of state transition equations. An embedded Markov chain analysis is then applied to derive the state probabilities at arbitrary time instants from another set of state balance equations. Closed forms of the state probabilities are also presented, with theorems for reference. Little's theorem is then applied to obtain the corresponding queue lengths and average waiting times. Simulation experiments demonstrate the correctness of the proposed approaches.
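The multi-queue finite-capacity analysis is involved, but the flavor of closed-form state probabilities can be seen in the Markovian single-queue special case M/M/1/K, sketched below. This special case is our illustration, not the paper's q-M/G/1/Ki solution.

```python
def mm1k_state_probs(lam, mu, K):
    """Steady-state probabilities p_0..p_K for an M/M/1/K queue:
        p_n = (1 - rho) * rho**n / (1 - rho**(K + 1)),  rho = lam / mu,
    with the uniform limit p_n = 1 / (K + 1) when rho == 1."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return [1.0 / (K + 1)] * (K + 1)
    norm = (1.0 - rho ** (K + 1)) / (1.0 - rho)
    return [rho ** n / norm for n in range(K + 1)]
```

From these probabilities, Little's theorem recovers the mean queue length and average waiting time, mirroring the paper's final step.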
Mesh-Based Entry Vehicle and Explosive Debris Re-Contact Probability Modeling
NASA Technical Reports Server (NTRS)
McPherson, Mark A.; Mendeck, Gavin F.
2011-01-01
Quantifying the risk to a crewed vehicle arising from potential re-contact with fragments from an explosive breakup of any jettisoned spacecraft segments during entry has long been a goal. However, great difficulty lies in efficiently capturing the potential locations of each fragment and their collective threat to the vehicle. The method presented in this paper addresses this problem by using a stochastic approach that discretizes simulated debris pieces into volumetric cells, and then assesses strike probabilities accordingly. Combining spatial debris density and relative velocity between the debris and the entry vehicle, the strike probability can be calculated from the integral of the debris flux inside each cell over time. Using this technique it is possible to assess the risk to an entry vehicle along an entire trajectory as it separates from the jettisoned segment. By decoupling the fragment trajectories from that of the entry vehicle, multiple potential separation maneuvers can then be evaluated rapidly to provide an assessment of the best strategy to mitigate the re-contact risk.
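Under a Poisson encounter assumption, the strike probability from the integrated debris flux can be sketched as below; the trapezoidal time integration and the names are our own illustration of the quantity described, not the paper's mesh-based implementation.

```python
import math

def strike_probability(times, density, rel_speed, area):
    """Poisson-encounter strike probability: integrate the debris flux
    n(t) * v_rel(t) * A over the trajectory (trapezoidal rule), then
        P_strike = 1 - exp(-integral).
    density: debris number per unit volume at each time sample,
    rel_speed: debris-vehicle relative speed, area: presented cross-section."""
    flux = [n * v * area for n, v in zip(density, rel_speed)]
    integral = 0.0
    for i in range(1, len(times)):
        integral += 0.5 * (flux[i] + flux[i - 1]) * (times[i] - times[i - 1])
    return 1.0 - math.exp(-integral)
```

Evaluating this per volumetric cell and summing over the trajectory gives the along-trajectory risk the paper describes for comparing separation maneuvers.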
DotKnot: pseudoknot prediction using the probability dot plot under a refined energy model
Sperschneider, Jana; Datta, Amitava
2010-01-01
RNA pseudoknots are functional structure elements with key roles in viral and cellular processes. Prediction of a pseudoknotted minimum free energy structure is an NP-complete problem. Practical algorithms for RNA structure prediction including restricted classes of pseudoknots suffer from high runtime and poor accuracy for longer sequences. A heuristic approach is to search for promising pseudoknot candidates in a sequence and verify those. Afterwards, the detected pseudoknots can be further analysed using bioinformatics or laboratory techniques. We present a novel pseudoknot detection method called DotKnot that extracts stem regions from the secondary structure probability dot plot and assembles pseudoknot candidates in a constructive fashion. We evaluate pseudoknot free energies using novel parameters, which have recently become available. We show that the conventional probability dot plot makes a wide class of pseudoknots including those with bulged stems manageable in an explicit fashion. The energy parameters now become the limiting factor in pseudoknot prediction. DotKnot is an efficient method for long sequences, which finds pseudoknots with higher accuracy compared to other known prediction algorithms. DotKnot is accessible as a web server at http://dotknot.csse.uwa.edu.au. PMID:20123730
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
Echoes from anharmonic normal modes in model glasses.
Burton, Justin C; Nagel, Sidney R
2016-03-01
Glasses display a wide array of nonlinear acoustic phenomena at temperatures T ≲ 1 K. This behavior has traditionally been explained by an ensemble of weakly coupled, two-level tunneling states, a theory that is also used to describe the thermodynamic properties of glasses at low temperatures. One of the most striking acoustic signatures in this regime is the existence of phonon echoes, a feature that has been associated with two-level systems with the same formalism as spin echoes in NMR. Here we report the existence of a distinctly different type of acoustic echo in classical models of glassy materials. Our simulations consist of finite-ranged, repulsive spheres and also particles with attractive forces using Lennard-Jones interactions. We show that these echoes are due to anharmonic, weakly coupled vibrational modes and perhaps provide an alternative explanation for the phonon echoes observed in glasses at low temperatures. PMID:27078434
NASA Astrophysics Data System (ADS)
Wang, Zhengzi
2015-08-01
The influence of ambient temperature poses a major challenge to robust infrared face recognition. This paper proposes a new ambient temperature normalization algorithm to improve the performance of infrared face recognition under variable ambient temperatures. Based on statistical regression theory, a second-order polynomial model is learned to describe the impact of ambient temperature on the infrared face image. The infrared image is then normalized to a reference ambient temperature using this second-order polynomial model. Finally, the normalization method is applied to infrared face recognition to verify its efficiency. The experiments demonstrate that the proposed temperature normalization method is feasible and can significantly improve the robustness of infrared face recognition.
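The normalization step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the training data, polynomial coefficients, and the 25 °C reference temperature are all assumptions.

```python
import numpy as np

# Sketch of a second-order polynomial ambient-temperature model.
# Data, coefficients and the reference temperature are illustrative assumptions.
rng = np.random.default_rng(0)
ambient_t = rng.uniform(5.0, 35.0, size=200)      # ambient temperatures (deg C)
true_coeffs = [0.02, 0.5, 10.0]                   # quadratic ground truth
pixel_mean = np.polyval(true_coeffs, ambient_t) + rng.normal(0.0, 0.1, 200)

# Learn the quadratic relation between ambient temperature and IR intensity.
coeffs = np.polyfit(ambient_t, pixel_mean, deg=2)

def normalize(image, t_obs, t_ref=25.0):
    """Shift an IR image taken at t_obs to the reference ambient temperature."""
    offset = np.polyval(coeffs, t_obs) - np.polyval(coeffs, t_ref)
    return image - offset

img_norm = normalize(np.full((4, 4), 20.0), t_obs=30.0)
```

Once fitted, the same `coeffs` can normalize any probe image before it is passed to the recognition stage.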
NASA Astrophysics Data System (ADS)
Fan, Niannian; Singh, Arvind; Guala, Michele; Foufoula-Georgiou, Efi; Wu, Baosheng
2016-04-01
Bed load transport is a highly stochastic, multiscale process, where particle advection and diffusion regimes are governed by the dynamics of each sediment grain during its motion and resting states. Having a quantitative understanding of the macroscale behavior emerging from the microscale interactions is important for proper model selection in the absence of individual grain-scale observations. Here we develop a semimechanistic sediment transport model based on individual particle dynamics, which incorporates the episodic movement (steps separated by rests) of sediment particles, and study their macroscale behavior. By incorporating different types of probability distribution functions (PDFs) of particle resting times Tr, under the assumption of a thin-tailed PDF of particle velocities, we study the emergent behavior of particle advection and diffusion regimes across a wide range of spatial and temporal scales. For exponential PDFs of resting times Tr, we observe normal advection and diffusion at long time scales. For a power-law PDF of resting times (i.e., f(Tr) ~ Tr^(-ν)), the tail thickness parameter ν is observed to affect the advection regimes (both subadvective and normal advective), and the diffusion regimes (both subdiffusive and superdiffusive). By comparing our semimechanistic model with two random walk models in the literature, we further suggest that in order to reproduce accurately the emerging diffusive regimes, the resting time model has to be coupled with a particle motion model able to produce finite particle velocities during steps, as in the episodic model discussed here.
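The exponential-resting-time case described above can be simulated with a very small episodic step-rest model. The parameter values (mean rest, mean step duration, particle speed) are illustrative assumptions, not values from the study; the point is only that thin-tailed resting times give a mean displacement that grows linearly in time (normal advection).

```python
import numpy as np

rng = np.random.default_rng(1)

# Episodic motion: exponentially distributed rests alternating with
# exponentially distributed motion periods at constant speed (all illustrative).
def particle_position(t_total, mean_rest=1.0, mean_step=0.2, speed=1.0):
    t, x = 0.0, 0.0
    while t < t_total:
        t += rng.exponential(mean_rest)              # resting period: no motion
        step = min(rng.exponential(mean_step),       # motion period, clipped
                   max(t_total - t, 0.0))            # at the simulation horizon
        x += speed * step
        t += step
    return x

# Ensemble mean displacement; for these parameters the particle moves a
# fraction mean_step / (mean_rest + mean_step) = 1/6 of the time.
positions = np.array([particle_position(200.0) for _ in range(500)])
mean_disp = positions.mean()
```

Swapping the exponential resting-time draw for a heavy-tailed (power-law) one is the modification that produces the anomalous regimes discussed in the abstract.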
Nixon, Zachary; Michel, Jacqueline
2015-04-01
To better understand the distribution of remaining lingering subsurface oil residues from the Exxon Valdez oil spill (EVOS) along the shorelines of Prince William Sound (PWS), AK, we revised previous modeling efforts to allow spatially explicit predictions of the distribution of subsurface oil. We used a set of pooled field data and predictor variables stored as Geographic Information Systems (GIS) data to generate calibrated boosted tree models predicting the encounter probability of different categories of subsurface oil. The models demonstrated excellent predictive performance as evaluated by cross-validated performance statistics. While the average encounter probabilities at most shoreline locations are low across western PWS, clusters of shoreline locations with elevated encounter probabilities remain in the northern parts of the PWS, as well as more isolated locations. These results can be applied to estimate the location and amount of remaining oil, evaluate potential ongoing impacts, and guide remediation. This is the first application of quantitative machine-learning based modeling techniques in estimating the likelihood of ongoing, long-term shoreline oil persistence after a major oil spill. PMID:25719970
Three-Dimensional Tissue Models of Normal and Diseased Skin
Carlson, Mark W.; Alt-Holland, Addy; Egles, Christophe; Garlick, Jonathan A.
2010-01-01
Over the last decade, the development of in vitro, human, three-dimensional (3D) tissue models, known as human skin equivalents (HSEs), has furthered understanding of epidermal cell biology and provided novel experimental systems. Signaling pathways that mediate the linkage between growth and differentiation function optimally when cells are spatially organized to display the architectural features seen in vivo, but are uncoupled and lost in two-dimensional culture systems. HSEs consist of a stratified squamous epithelium grown at an air-liquid interface on a collagen matrix populated with dermal fibroblasts. These 3D tissues demonstrate in vivo–like epithelial differentiation and morphology, and rates of cell division, similar to those found in human skin. This unit describes fabrication of HSEs, allowing the generation of human tissues that mimic the morphology, differentiation, and growth of human skin, as well as disease processes of cancer and wound re-epithelialization, providing powerful new tools for the study of diseases in humans. PMID:19085986
NASA Astrophysics Data System (ADS)
Akbari, Hamed; Fei, Baowei
2012-02-01
Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially, when serial MR imaging is performed to evaluate the kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistical matching of geometrical shape of the kidney. A set of Wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of the kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are trained to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI data. The model is initially localized based on the intensity profiles in three directions. The weight functions are defined for each labeled voxel for each Wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the Wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.
NASA Astrophysics Data System (ADS)
Salis, Michele; Arca, Bachisio; Bacciu, Valentina; Spano, Donatella; Duce, Pierpaolo; Santoni, Paul; Ager, Alan; Finney, Mark
2010-05-01
Characterizing the spatial pattern of large fire occurrence and severity is an important feature of fire management planning in the Mediterranean region. The spatial characterization of fire probabilities, fire behavior distributions and value changes are key components for quantitative risk assessment and for prioritizing fire suppression resources, fuel treatments and law enforcement. Because of the growing wildfire severity and frequency in recent years (e.g.: Portugal, 2003 and 2005; Italy and Greece, 2007 and 2009), there is an increasing demand for models and tools that can aid in wildfire prediction and prevention. Newer wildfire simulation systems offer promise in this regard, and allow for fine scale modeling of wildfire severity and probability. Several new applications have resulted from the development of a minimum travel time (MTT) fire spread algorithm (Finney, 2002), which models fire growth by searching for the minimum time for fire to travel among nodes in a 2D network. The MTT approach makes it computationally feasible to simulate thousands of fires and generate burn probability and fire severity maps over large areas. The MTT algorithm is embedded in a number of research and fire modeling applications. High performance computers are typically used for MTT simulations, although the algorithm is also implemented in the FlamMap program (www.fire.org). In this work, we describe the application of the MTT algorithm to estimate spatial patterns of burn probability and to analyze wildfire severity in three fire-prone areas of the Mediterranean Basin, specifically the islands of Sardinia (Italy), Sicily (Italy) and Corsica (France). We assembled fuels and topographic data for the simulations in 500 x 500 m grids for the study areas. The simulations were run using 100,000 ignitions under weather conditions that replicated severe and moderate weather conditions (97th and 70th percentile, July and August weather, 1995-2007). We used both random ignition locations
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
Normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR) with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
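VaR and CVaR under a two-component normal mixture can be computed by inverting the mixture CDF and integrating its lower tail. The mixture weights, means and standard deviations below are illustrative placeholders, not values fitted to FBMKLCI returns.

```python
from scipy.stats import norm
from scipy.optimize import brentq
from scipy.integrate import quad

# Two-component normal mixture for returns (illustrative parameters):
# a frequent "calm" component and a rarer, fatter "crisis" component.
w, mu1, s1, mu2, s2 = 0.8, 0.01, 0.03, -0.02, 0.08

def mix_cdf(x):
    return w * norm.cdf(x, mu1, s1) + (1 - w) * norm.cdf(x, mu2, s2)

def mix_pdf(x):
    return w * norm.pdf(x, mu1, s1) + (1 - w) * norm.pdf(x, mu2, s2)

def var(alpha):
    """VaR at level alpha: minus the alpha-quantile of the return mixture."""
    return -brentq(lambda x: mix_cdf(x) - alpha, -2.0, 2.0)

def cvar(alpha):
    """CVaR: expected loss conditional on the loss exceeding VaR."""
    tail_mean = quad(lambda x: x * mix_pdf(x), -2.0, -var(alpha))[0] / alpha
    return -tail_mean

var95, cvar95 = var(0.05), cvar(0.05)
```

Because the second component is wider and shifted left, the mixture quantile sits further in the tail than a single-normal fit would place it, which is exactly the leptokurtosis effect the abstract refers to.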
NASA Technical Reports Server (NTRS)
Chase, Thomas D.; Splawn, Keith; Christiansen, Eric L.
2007-01-01
The NASA Extravehicular Mobility Unit (EMU) micrometeoroid and orbital debris protection ability has recently been assessed against an updated, higher threat space environment model. The new environment was analyzed in conjunction with a revised EMU solid model using a NASA computer code. Results showed that the EMU exceeds the required mathematical Probability of having No Penetrations (PNP) of any suit pressure bladder over the remaining life of the program (2,700 projected hours of 2-person spacewalks). The success probability was calculated to be 0.94, versus a requirement of >0.91, for the current spacesuit's outer protective garment. In parallel to the probability assessment, potential improvements to the current spacesuit's outer protective garment were built and impact tested. A NASA light gas gun was used to launch projectiles at test items, at speeds of approximately 7 km per second. Test results showed that substantial garment improvements could be made, with mild material enhancements and moderate assembly development. The spacesuit's PNP would improve marginally with the tested enhancements, if they were available for immediate incorporation. This paper discusses the results of the model assessment process and test program. These findings add confidence to the continued use of the existing NASA EMU during International Space Station (ISS) assembly and Shuttle Operations. They provide a viable avenue for improved hypervelocity impact protection for the EMU, or for future space suits.
Notes on power of normality tests of error terms in regression models
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
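The workflow the abstract describes, fit a regression, then test the residuals for normality, can be sketched with classical (non-robust) tests; the simulated data and coefficient values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated linear regression with normally distributed error terms; swap the
# error draw for e.g. rng.standard_t(2, n) to see the tests reject normality.
n = 200
x = rng.uniform(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

# OLS fit and residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Classical normality tests applied to the estimated error terms.
sw_stat, sw_p = stats.shapiro(resid)
jb = stats.jarque_bera(resid)
```

Note that testing estimated residuals is not identical to testing the true disturbances, which is one motivation for the robust RT-class tests the contribution introduces.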
A First Comparison of Multiple Probability Hazard Outputs from Three Global Flood Models
NASA Astrophysics Data System (ADS)
Trigg, M. A.; Bates, P. D.; Fewtrell, T. J.; Yamazaki, D.; Pappenberger, F.; Winsemius, H.
2014-12-01
With research advances in algorithms, remote sensing data sets and computing power, global flood models are now a practical reality. There are a number of different research models currently available or in development, and as these models mature and output becomes available for use, there is great interest in how these different models compare and how useful they may be at different scales. At the kick-off meeting of the Global Flood Partnership (GFP) in March 2014, the need to compare these new global flood models was identified as a research priority, both for developers of the models and users of the output. The Global Flood Partnership is an informal network of scientists and practitioners from public, private and international organisations providing or using global flood monitoring, modelling and forecasting (http://portal.gdacs.org/Global-Flood-Partnership). On behalf of the GFP, the Willis Research Network is undertaking this comparison research, and the work presented here is the result of the first phase of this comparison for three models: CaMa-Flood, GLOFRIS and ECMWF. The comparison analysis is undertaken for the entire African continent, identified by GFP members as the best location to facilitate data sharing by model teams and where there was the most interest from potential users of the model outputs. Initial analysis results include flooded area for a range of hazard return periods (25, 50, 100, 250, 500, 1000 years), which is also compared against catchment sizes and climatic zone. Results will be discussed in the context of the different model structures and input data used, while also addressing scale issues and practicalities of use. Finally, plans for the validation of the models against microwave and optical remote sensing data will be outlined.
Goldman, Saul
2007-08-01
Interconnected compartmental models have been used for decades in physiology and medicine to account for the observed multi-exponential washout kinetics of a variety of solutes (including inert gases) both from single tissues and from the body as a whole. They are used here as the basis for a new class of biophysical probabilistic decompression models. These models are characterized by a relatively well-perfused, risk-bearing, central compartment and one or two non-risk-bearing, relatively poorly perfused, peripheral compartment(s). The peripheral compartments affect risk indirectly by diffusive exchange of dissolved inert gas with the central compartment. On the basis of the accuracy of their respective predictions beyond the calibration regime, the three-compartment interconnected models were found to be significantly better than the two-compartment interconnected models. The former, on the basis of a number of criteria, was also better than a two-compartment parallel model used for comparative purposes. In these latter comparisons, the models all had the same number of fitted parameters (four), were based on linear kinetics, had the same risk function, and were calibrated against the same dataset. The interconnected models predict that inert gas washout during decompression is relatively fast, initially, but slows rapidly with time compared with the more uniform washout rate predicted by an independent parallel compartment model. If empirically verified, this may have important implications for diving practice.
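A linear interconnected compartmental model of the kind described above reduces to a system of first-order ODEs whose solution is a matrix exponential, which is where the multi-exponential washout comes from. The three-compartment sketch below uses hypothetical rate constants, not the calibrated parameters of the study.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-compartment interconnected model: a perfused central
# compartment (index 0) washes out inert gas to blood at rate k_out and
# exchanges diffusively with two poorly perfused peripheral compartments
# (indices 1 and 2). All rate constants are illustrative.
k_out, k01, k10, k02, k20 = 0.5, 0.1, 0.05, 0.02, 0.01
A = np.array([
    [-(k_out + k01 + k02), k10,  k20],
    [k01,                 -k10,  0.0],
    [k02,                  0.0, -k20],
])

def tensions(t, p0=(1.0, 1.0, 1.0)):
    """Inert-gas tensions at time t, starting from saturation p0."""
    return expm(A * t) @ np.asarray(p0)

# Central-compartment washout is fast at first, then slows as the peripheral
# compartments drain back through the centre (multi-exponential decay).
early_drop = 1.0 - tensions(1.0)[0]
late_drop = tensions(10.0)[0] - tensions(11.0)[0]
```

The contrast between `early_drop` and `late_drop` is the qualitative prediction the abstract highlights: fast initial washout that slows rapidly compared with an independent parallel-compartment model.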
Szczygieł, Bartłomiej; Dudyński, Marek; Kwiatkowski, Kamil; Lewenstein, Maciej; Lapeyre, Gerald John; Wehr, Jan
2016-02-01
We introduce a class of discrete-continuous percolation models and an efficient Monte Carlo algorithm for computing their properties. The class is general enough to include well-known discrete and continuous models as special cases. We focus on a particular example of such a model, a nanotube model of disintegration of activated carbon. We calculate its exact critical threshold in two dimensions and obtain a Monte Carlo estimate in three dimensions. Furthermore, we use this example to analyze and characterize the efficiency of our algorithm, by computing critical exponents and properties, finding that it compares favorably to well-known algorithms for simpler systems.
NASA Astrophysics Data System (ADS)
Morley, S. K.; Freeman, M. P.; Tanskanen, E. I.
2007-11-01
We compare the probability distributions of substorm magnetic bay magnitudes from observations and a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model which scale the contributions to AL from the directly-driven DP2 electrojet and loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different to the observed distribution. The ranges of the two parameters giving acceptable (95% confidence level) agreement are consistent with expectations using results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.
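The core statistical step, testing whether the modelled and observed bay-magnitude distributions are significantly different, can be sketched with a two-sample Kolmogorov-Smirnov test. The synthetic lognormal samples below are stand-ins, not IL or AL index data, and the test choice is an assumption for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)

# Stand-in samples of substorm bay magnitudes (nT); both drawn from the same
# lognormal distribution, mimicking a model that matches the observations.
observed = rng.lognormal(mean=5.0, sigma=0.6, size=1000)
modelled = rng.lognormal(mean=5.0, sigma=0.6, size=1000)

# Two-sample KS test: fail to reject at the 95% confidence level means the
# modelled distribution is "not significantly different" from the observed one.
stat, p_value = ks_2samp(observed, modelled)
same_distribution = p_value > 0.05
```

In the study's setting, this comparison would be repeated across the 2-D grid of the two free scaling parameters to map out the region of acceptable agreement.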
NASA Astrophysics Data System (ADS)
Mahanti, P.; Robinson, M. S.; Boyd, A. K.
2013-12-01
Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominate the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies) where the true nature of slope probability distribution was not discernible at baselines smaller than a kilometer[2,3,5]. Accordingly, historical modeling of lunar surface slopes probability distributions for applications such as in scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution[1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon[7,8], slopes in lunar craters can now be obtained at baselines as low as 6-meters allowing unprecedented multi-scale (multiple baselines) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines[9]. In this work, we extend the analysis from a probability distribution modeling point-of-view with NAC DEMs to characterize the slope statistics for the floors and walls for the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was
NASA Astrophysics Data System (ADS)
Tomas, A.; Menendez, M.; Mendez, F. J.; Coco, G.; Losada, I. J.
2012-04-01
In the last decades, freak or rogue waves have become an important topic in engineering and science. Forecasting the occurrence probability of freak waves is a challenge for oceanographers, engineers, physicists and statisticians. There are several mechanisms responsible for the formation of freak waves, and different theoretical formulations (primarily based on numerical models with simplifying assumptions) have been proposed to predict the occurrence probability of a freak wave in a sea state as a function of N (number of individual waves) and kurtosis (k). On the other hand, different attempts to parameterize k as a function of spectral parameters such as the Benjamin-Feir Index (BFI) and the directional spreading (Mori et al., 2011) have been proposed. The objective of this work is twofold: (1) develop a statistical model to describe the uncertainty of the maximum individual wave height, Hmax, considering N and k as covariates; (2) obtain a predictive formulation to estimate k as a function of aggregated sea state spectral parameters. For both purposes, we use free surface measurements (more than 300,000 20-minute sea states) from the Spanish deep water buoy network (Puertos del Estado, Spanish Ministry of Public Works). Non-stationary extreme value models are nowadays widely used to analyze the time-dependent or directional-dependent behavior of extreme values of geophysical variables such as significant wave height (Izaguirre et al., 2010). In this work, a Generalized Extreme Value (GEV) statistical model for the dimensionless maximum wave height (x=Hmax/Hs) in every sea state is used to assess the probability of freak waves. We allow the location, scale and shape parameters of the GEV distribution to vary as a function of k and N. The kurtosis-dependency is parameterized using third-order polynomials, and the model is fitted using standard log-likelihood theory, obtaining very good behavior in predicting the occurrence probability of freak waves (x>2). Regarding the
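A stationary version of the GEV step can be sketched as below. The paper makes the location, scale and shape parameters covariate-dependent (on k and N); here they are fixed, illustrative values, and the simulated sample stands in for the buoy-derived x = Hmax/Hs data.

```python
from scipy.stats import genextreme

# Synthetic dimensionless maximum wave heights x = Hmax/Hs for many sea
# states; the GEV parameters used to generate them are assumptions.
x = genextreme.rvs(c=0.1, loc=1.5, scale=0.15, size=5000, random_state=42)

# Fit a GEV by maximum likelihood and estimate the freak-wave occurrence
# probability P(x > 2) from the fitted survival function.
c_hat, loc_hat, scale_hat = genextreme.fit(x)
p_freak = genextreme.sf(2.0, c_hat, loc=loc_hat, scale=scale_hat)
```

Extending this to the paper's non-stationary model amounts to writing each GEV parameter as a polynomial in k (and a function of N) and maximizing the same log-likelihood over the polynomial coefficients.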
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.
2013-04-01
During a landslide triggering event, the tens to thousands of landslides resulting from the trigger (e.g., earthquake, heavy rainfall) may block a number of sections of the road network, posing a risk to rescue efforts, logistics and accessibility to a region. Here, we present initial results from a semi-stochastic model we are developing to evaluate the probability of landslides intersecting a road network and the network-accessibility implications of this across a region. This was performed in the open source GRASS GIS software, where we took 'model' landslides and dropped them on a 79 km2 test area region in Collazzone, Umbria, Central Italy, with a given road network (major and minor roads, 404 km in length) and already determined landslide susceptibilities. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. The number of landslide areas selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. 79 landslide areas chosen randomly for each iteration. Landslides were then 'dropped' over the region semi-stochastically: (i) random points were generated across the study region; (ii) based on the landslide susceptibility map, points were accepted/rejected based on the probability of a landslide occurring at that location. After a point was accepted, it was assigned a landslide area (AL) and length to width ratio. Landslide intersections with roads were then assessed and indices such as the location, number and size of road blockage recorded. The GRASS-GIS model was performed 1000 times in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event of 1 landslide km-2 over a 79 km2 region with 404 km of road, the number of road blockages
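The area sampling and the semi-stochastic drop can be sketched in a few lines. The inverse-gamma parameters below are chosen only to reproduce the two constraints stated in the abstract (tail decay of about -2.4 and rollover near 400 m2), and the 10x10 susceptibility grid is a toy stand-in for the real map.

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(3)

# Three-parameter inverse-gamma landslide-area distribution: power-law tail
# exponent -(rho + 1) = -2.4 and rollover (mode) at scale / (rho + 1) = 400 m2.
# Parameter values are assumptions matching those two constraints.
rho, a, s = 1.4, 960.0, 0.0
areas = invgamma.rvs(rho, loc=s, scale=a, size=79, random_state=rng)

# Semi-stochastic drop: accept a random cell with probability equal to its
# landslide susceptibility (toy 10x10 susceptibility map).
susceptibility = rng.uniform(0.0, 1.0, size=(10, 10))
placed = []
while len(placed) < len(areas):
    i, j = rng.integers(0, 10, size=2)
    if rng.random() < susceptibility[i, j]:
        placed.append((i, j))
```

Repeating this placement loop 1000 times and intersecting the placed landslide footprints with the road layer reproduces the Monte-Carlo structure of the GRASS GIS model.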
Per capita invasion probabilities: an empirical model to predict rates of invasion via ballast water
Reusser, Deborah A.; Lee, Henry; Frazier, Melanie; Ruiz, Gregory M.; Fofonoff, Paul W.; Minton, Mark S.; Miller, A. Whitman
2013-01-01
Ballast water discharges are a major source of species introductions into marine and estuarine ecosystems. To mitigate the introduction of new invaders into these ecosystems, many agencies are proposing standards that establish upper concentration limits for organisms in ballast discharge. Ideally, ballast discharge standards will be biologically defensible and adequately protective of the marine environment. We propose a new technique, the per capita invasion probability (PCIP), for managers to quantitatively evaluate the relative risk of different concentration-based ballast water discharge standards. PCIP represents the likelihood that a single discharged organism will become established as a new nonindigenous species. This value is calculated by dividing the total number of ballast water invaders per year by the total number of organisms discharged from ballast. Analysis was done at the coast-wide scale for the Atlantic, Gulf, and Pacific coasts, as well as the Great Lakes, to reduce uncertainty due to secondary invasions between estuaries on a single coast. The PCIP metric is then used to predict the rate of new ballast-associated invasions given various regulatory scenarios. Depending upon the assumptions used in the risk analysis, this approach predicts that approximately one new species will invade every 10–100 years with the International Maritime Organization (IMO) discharge standard of 50 μm per m3 of ballast. This approach resolves many of the limitations associated with other methods of establishing ecologically sound discharge standards, and it allows policy makers to use risk-based methodologies to establish biologically defensible discharge standards.
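The PCIP arithmetic is simple enough to show directly. All numbers below are illustrative placeholders, not the coastwise estimates from the study.

```python
# Per capita invasion probability: invaders per year divided by organisms
# discharged per year (both numbers are illustrative assumptions).
invaders_per_year = 0.5          # observed new ballast-linked invaders per year
organisms_per_year = 1.0e12      # total organisms discharged in ballast per year
pcip = invaders_per_year / organisms_per_year

# Predicted invasion rate under a concentration-based discharge standard.
concentration_limit = 10.0       # allowed organisms per m3 of ballast
discharge_m3_per_year = 5.0e7    # annual ballast discharge volume (m3)
predicted_rate = pcip * concentration_limit * discharge_m3_per_year
years_per_invasion = 1.0 / predicted_rate
```

Because the standard caps concentration rather than total organisms, the predicted rate scales linearly with both the permitted concentration and the discharge volume, which is what lets managers compare candidate standards quantitatively.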
Kase, Yuki; Kanai, Tatsuaki; Matsufuji, Naruhiro; Furusawa, Yoshiya; Elsässer, Thilo; Scholz, Michael
2008-01-01
Both the microdosimetric kinetic model (MKM) and the local effect model (LEM) can be used to calculate the surviving fraction of cells irradiated by high-energy ion beams. In this study, amorphous track structure models instead of the stochastic energy deposition are used for the MKM calculation, and it is found that the MKM calculation is useful for predicting the survival curves of mammalian cells in vitro for ³He-, ¹²C- and ²⁰Ne-ion beams. The survival curves are also calculated by two different implementations of the LEM, which inherently used an amorphous track structure model. The results calculated in this manner show good agreement with the experimental results, especially for the modified LEM. These results are compared to those calculated by the MKM. Comparison of the two models reveals that both models require three basic constituents: target geometry, photon survival curve and track structure, although the implementation of each model is significantly different. In the context of the amorphous track structure model, the difference between the MKM and LEM is primarily the result of different approaches to calculating the biological effects of the extremely high local dose in the center of the ion track. PMID:18182686
Boccio, J.L.; Usher, J.L.; Singhal, A.K.; Tam, L.T.
1985-08-01
A fire in a nuclear power plant (NPP) can damage equipment needed to safely operate the plant and thereby either directly cause an accident or else reduce the plant's margin of safety. The development of a field-model fire code to analyze the probable fire environments encountered within NPPs is discussed. A set of fire tests carried out under the aegis of the US Nuclear Regulatory Commission (NRC) is described. The results of these tests are then utilized to validate the field model.
NASA Astrophysics Data System (ADS)
Peruzzo, Paolo; Pietro Viero, Daniele; Defina, Andrea
2016-11-01
The seeds of many aquatic plants, as well as many propagules and larvae, are buoyant and transported at the water surface. These particles are therefore subject to surface tension, which may enhance their capture by emergent vegetation through capillary attraction. In this work, we develop a semi-empirical model that predicts the probability that a floating particle is retained by plant stems and branches piercing the water surface, due to capillarity, against the drag force exerted by the flowing water. Specific laboratory experiments are also performed to calibrate and validate the model.
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
NASA Technical Reports Server (NTRS)
Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.
2009-01-01
To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
Greene, Earl A.; LaMotte, Andrew E.; Cullinan, Kerri-Ann
2005-01-01
The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency's Regional Vulnerability Assessment Program, has developed a set of statistical tools to support regional-scale, ground-water quality and vulnerability assessments. The Regional Vulnerability Assessment Program's goals are to develop and demonstrate approaches to comprehensive, regional-scale assessments that effectively inform managers and decision-makers as to the magnitude, extent, distribution, and uncertainty of current and anticipated environmental risks. The U.S. Geological Survey is developing and exploring the use of statistical probability models to characterize the relation between ground-water quality and geographic factors in the Mid-Atlantic Region. Available water-quality data obtained from U.S. Geological Survey National Water-Quality Assessment Program studies conducted in the Mid-Atlantic Region were used in association with geographic data (land cover, geology, soils, and others) to develop logistic-regression equations that use explanatory variables to predict the presence of a selected water-quality parameter exceeding a specified management concentration threshold. The resulting logistic-regression equations were transformed to determine the probability, P(X), of a water-quality parameter exceeding a specified management threshold. Additional statistical procedures modified by the U.S. Geological Survey were used to compare the observed values to model-predicted values at each sample point. In addition, procedures to evaluate the confidence of the model predictions and estimate the uncertainty of the probability value were developed and applied. The resulting logistic-regression models were applied to the Mid-Atlantic Region to predict the spatial probability of nitrate concentrations exceeding specified management thresholds. These thresholds are usually set or established by regulators or managers at National or local levels. At management thresholds of
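The transformation from a fitted logistic-regression equation to an exceedance probability P(X) can be sketched as follows. The coefficient values and the two explanatory variables are hypothetical placeholders for illustration, not the study's fitted model.

```python
import math

def exceedance_probability(intercept, coefs, x):
    """Logistic-regression probability that a water-quality parameter
    exceeds a management threshold: P(X) = 1 / (1 + exp(-z)),
    where z is the linear predictor."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: intercept, then weights for
# [fraction agricultural land cover, fraction carbonate geology].
p = exceedance_probability(-2.0, [3.5, 1.2], [0.6, 0.25])  # ≈ 0.599
```

With no explanatory information (z = 0) the model returns 0.5, as expected for a logistic curve centered on its threshold.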
Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.
2014-01-01
Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
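The two detection components described above combine multiplicatively into an overall detection probability. The sketch below uses two standard ingredients from distance sampling, a Poisson cue-production model for availability and a half-normal detection function for perceptibility; the parameter values are illustrative only and not estimates from this study.

```python
import math

def availability(cue_rate, count_duration):
    """P(animal is available, i.e. gives at least one cue during the
    count), assuming cues follow a Poisson process (rate per minute)."""
    return 1.0 - math.exp(-cue_rate * count_duration)

def perceptibility(distance, sigma):
    """Half-normal detection function: P(observer detects a cue, given
    it occurs) at a given distance (m), with scale parameter sigma."""
    return math.exp(-distance ** 2 / (2.0 * sigma ** 2))

# Overall detection probability for a bird 80 m away that calls
# 0.5 times/min during a 5-min point count:
p_detect = availability(0.5, 5.0) * perceptibility(80.0, 100.0)
```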
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)
2012-01-01
This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
de Uña-Álvarez, Jacobo; Meira-Machado, Luís
2015-06-01
Multi-state models are often used for modeling complex event history data. In these models the estimation of the transition probabilities is of particular interest, since they allow for long-term predictions of the process. These quantities have been traditionally estimated by the Aalen-Johansen estimator, which is consistent if the process is Markov. Several non-Markov estimators have been proposed in the recent literature, and their superiority with respect to the Aalen-Johansen estimator has been proved in situations in which the Markov condition is strongly violated. However, the existing estimators have the drawback of requiring that the support of the censoring distribution contain the support of the lifetime distribution, which is often not the case. In this article, we propose two new methods for estimating the transition probabilities in the progressive illness-death model. Some asymptotic results are derived. The proposed estimators are consistent regardless of the Markov condition and of the aforementioned assumption about the censoring support. We explore the finite sample behavior of the estimators through simulations. The main conclusion of this piece of research is that the proposed estimators are much more efficient than the existing non-Markov estimators in most cases. An application to a clinical trial on colon cancer is included. Extensions to progressive processes beyond the three-state illness-death model are discussed.
Detection of the optic disc in fundus images by combining probability models.
Harangi, Balazs; Hajdu, Andras
2015-10-01
In this paper, we propose a combination method for the automatic detection of the optic disc (OD) in fundus images based on ensembles of individual algorithms. We have studied and adapted some of the state-of-the-art OD detectors and finally organized them into a complex framework in order to maximize the accuracy of the localization of the OD. The detection of the OD can be considered as a single-object detection problem. This object can be localized with high accuracy by several algorithms extracting single candidates for the center of the OD and the final location can be defined using a single majority voting rule. To include more information to support the final decision, we can use member algorithms providing more candidates which can be ranked based on the confidence ordered by the algorithms. In this case, a spatial weighted graph is defined where the candidates are considered as its nodes, and the final OD position is determined in terms of finding a maximum-weighted clique. Now, we examine how to apply in our ensemble-based framework all the accessible information supplied by the member algorithms by making them return confidence values for each image pixel. These confidence values inform us about the probability that a given pixel is the center point of the object. We apply axiomatic and Bayesian approaches, as in the case of aggregation of judgments of experts in decision and risk analysis, to combine these confidence values. According to our experimental study, the accuracy of the localization of OD increases further. Besides single localization, this approach can be adapted for the precise detection of the boundary of the OD. Comparative experimental results are also given for several publicly available datasets.
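The single-candidate combination step can be sketched with a simple support count: each member algorithm proposes a center, and the winner is the candidate agreed upon (within a radius) by the most detectors. This is a simplified stand-in for the paper's majority-voting and maximum-weighted-clique machinery; the coordinates and radius below are made up.

```python
def majority_vote_center(candidates, radius):
    """Pick the candidate (x, y) supported by the most candidates
    (including itself) lying within `radius` pixels of it; a crude
    stand-in for the maximum-weighted-clique combination step."""
    def support(c):
        return sum(1 for o in candidates
                   if (c[0] - o[0]) ** 2 + (c[1] - o[1]) ** 2 <= radius ** 2)
    return max(candidates, key=support)

# Three detectors agree near (100, 100); one outlier is outvoted.
centers = [(100, 98), (102, 101), (99, 100), (240, 60)]
best = majority_vote_center(centers, 10)
```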
Reusser, Deborah A; Lee, Henry; Frazier, Melanie; Ruiz, Gregory M; Fofonoff, Paul W; Minton, Mark S; Miller, A Whitman
2013-03-01
Ballast water discharges are a major source of species introductions into marine and estuarine ecosystems. To mitigate the introduction of new invaders into these ecosystems, many agencies are proposing standards that establish upper concentration limits for organisms in ballast discharge. Ideally, ballast discharge standards will be biologically defensible and adequately protective of the marine environment. We propose a new technique, the per capita invasion probability (PCIP), for managers to quantitatively evaluate the relative risk of different concentration-based ballast water discharge standards. PCIP represents the likelihood that a single discharged organism will become established as a new nonindigenous species. This value is calculated by dividing the total number of ballast water invaders per year by the total number of organisms discharged from ballast. Analysis was done at the coast-wide scale for the Atlantic, Gulf, and Pacific coasts, as well as the Great Lakes, to reduce uncertainty due to secondary invasions between estuaries on a single coast. The PCIP metric is then used to predict the rate of new ballast-associated invasions given various regulatory scenarios. Depending upon the assumptions used in the risk analysis, this approach predicts that approximately one new species will invade every 10-100 years with the International Maritime Organization (IMO) discharge standard of < 10 organisms with body size > 50 microm per m3 of ballast. This approach resolves many of the limitations associated with other methods of establishing ecologically sound discharge standards, and it allows policy makers to use risk-based methodologies to establish biologically defensible discharge standards.
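The PCIP arithmetic reduces to a division followed by a multiplication. A sketch with entirely hypothetical numbers (not the paper's coast-wide estimates):

```python
def per_capita_invasion_probability(invaders_per_year, organisms_per_year):
    """PCIP: likelihood that a single discharged organism establishes
    as a new nonindigenous species."""
    return invaders_per_year / organisms_per_year

def predicted_invasion_rate(pcip, standard_per_m3, discharge_m3_per_year):
    """Expected new invasions per year under a concentration-based
    discharge standard."""
    return pcip * standard_per_m3 * discharge_m3_per_year

# Hypothetical: 1 historical invader/year from 1e9 discharged organisms/year,
# evaluated against a 10 organisms/m^3 standard and 5e6 m^3/year of discharge.
pcip = per_capita_invasion_probability(1.0, 1e9)
rate = predicted_invasion_rate(pcip, 10.0, 5e6)  # invasions per year
```

With these placeholder numbers the predicted rate is about 0.05 invasions per year, i.e. roughly one new invasion every 20 years, which is the kind of quantity managers can weigh against a proposed standard.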
Evaluation of Geometrically Nonlinear Reduced Order Models with Nonlinear Normal Modes
Kuether, Robert J.; Deaner, Brandon J.; Hollkamp, Joseph J.; Allen, Matthew S.
2015-09-15
Several reduced-order modeling strategies have been developed to create low-order models of geometrically nonlinear structures from detailed finite element models, allowing one to compute the dynamic response of the structure at a dramatically reduced cost. But, the parameters of these reduced-order models are estimated by applying a series of static loads to the finite element model, and the quality of the reduced-order model can be highly sensitive to the amplitudes of the static load cases used and to the type/number of modes used in the basis. Our paper proposes to combine reduced-order modeling and numerical continuation to estimate the nonlinear normal modes of geometrically nonlinear finite element models. Not only does this make it possible to compute the nonlinear normal modes far more quickly than existing approaches, but the nonlinear normal modes are also shown to be an excellent metric by which the quality of the reduced-order model can be assessed. Hence, the second contribution of this work is to demonstrate how nonlinear normal modes can be used as a metric by which nonlinear reduced-order models can be compared. Moreover, various reduced-order models with hardening nonlinearities are compared for two different structures to demonstrate these concepts: a clamped–clamped beam model, and a more complicated finite element model of an exhaust panel cover.
Fitting the Normal-Ogive Factor Analytic Model to Scores on Tests.
ERIC Educational Resources Information Center
Ferrando, Pere J.; Lorenzo-Seva, Urbano
2001-01-01
Describes how the nonlinear factor analytic approach of R. McDonald to the normal ogive curve can be used to factor analyze test scores. Discusses the conditions in which this model is more appropriate than the linear model and illustrates the applicability of both models using an empirical example based on data from 1,769 adolescents who took the…
Drakos, Nicole E; Wahl, Lindi M
2015-12-01
Theoretical approaches are essential to our understanding of the complex dynamics of mobile genetic elements (MGEs) within genomes. Recently, the birth-death-diversification model was developed to describe the dynamics of mobile promoters (MPs), a particular class of MGEs in prokaryotes. A unique feature of this model is that genetic diversification of elements was included. To explore the implications of diversification on the longterm fate of MGE lineages, in this contribution we analyze the extinction probabilities, extinction times and equilibrium solutions of the birth-death-diversification model. We find that diversification increases both the survival and growth rate of MGE families, but the strength of this effect depends on the rate of horizontal gene transfer (HGT). We also find that the distribution of MGE families per genome is not necessarily monotonically decreasing, as observed for MPs, but may have a peak in the distribution that is related to the HGT rate. For MPs specifically, we find that new families have a high extinction probability, and predict that the number of MPs is increasing, albeit at a very slow rate. Additionally, we develop an extension of the birth-death-diversification model which allows MGEs in different regions of the genome, for example coding and non-coding, to be described by different rates. This extension may offer a potential explanation as to why the majority of MPs are located in non-promoter regions of the genome.
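For intuition about why new MGE families face a high extinction probability, the classic result for a linear birth-death process (without the diversification term that distinguishes this paper's model) can be sketched: starting from n0 elements, extinction is certain when the death rate matches or exceeds the birth rate, and otherwise occurs with probability (death/birth)**n0.

```python
def extinction_probability(birth, death, n0):
    """Ultimate extinction probability of a linear birth-death process
    started from n0 elements: (death/birth)**n0 if birth > death, else 1.
    (Classic textbook result; the paper's model adds diversification.)"""
    if birth <= death:
        return 1.0
    return (death / birth) ** n0

# A brand-new family (n0 = 1) whose birth rate only slightly exceeds
# its death rate is still very likely to go extinct.
p_new_family = extinction_probability(1.1, 1.0, 1)  # ≈ 0.909
```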
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.
1981-01-01
A model of the largest gust amplitude and gust length is presented which uses the properties of the bivariate gamma distribution. The gust amplitude and length are strongly dependent on the filter function; the amplitude increases with altitude and is larger in winter than in summer.
ERIC Educational Resources Information Center
Nussbaum, E. Michael
2011-01-01
Toulmin's model of argumentation, developed in 1958, has guided much argumentation research in education. However, argumentation theory in philosophy and cognitive science has advanced considerably since 1958. There are currently several alternative frameworks of argumentation that can be useful for both research and practice in education. These…
ERIC Educational Resources Information Center
Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.
2010-01-01
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
Analysis of a probability-based SATCOM situational awareness model for parameter estimation
NASA Astrophysics Data System (ADS)
Martin, Todd W.; Chang, Kuo-Chu; Tian, Xin; Chen, Genshe
2016-05-01
Emerging satellite communication (SATCOM) systems are envisioned to incorporate advanced capabilities for dynamically adapting link and network configurations to meet user performance needs. These advanced capabilities require an understanding of the operating environment as well as the potential outcomes of adaptation decisions. A SATCOM situational awareness and decision-making approach is needed that represents the cause and effect linkage of relevant phenomenology and operating conditions on link performance. Similarly, the model must enable a corresponding diagnostic capability that allows SATCOM payload managers to assess likely causes of observed effects. Prior work demonstrated the ability to use a probabilistic reasoning model for a SATCOM situational awareness model. It provided the theoretical basis and demonstrated the ability to realize such a model. This paper presents an analysis of the probabilistic reasoning approach in the context of its ability to be used for diagnostic purposes. A quantitative assessment is presented to demonstrate the impact of uncertainty on estimation accuracy for several key parameters. The paper also discusses how the results could be used by a higher-level reasoning process to evaluate likely causes of performance shortfalls such as atmospheric conditions, pointing errors, and jamming.
Allen, Jenica M; Terres, Maria A; Katsuki, Toshio; Iwamoto, Kojiro; Kobori, Hiromi; Higuchi, Hiroyoshi; Primack, Richard B; Wilson, Adam M; Gelfand, Alan; Silander, John A
2014-04-01
Understanding the drivers of phenological events is vital for forecasting species' responses to climate change. We developed flexible Bayesian survival regression models to assess a 29-year, individual-level time series of flowering phenology from four taxa of Japanese cherry trees (Prunus spachiana, Prunus × yedoensis, Prunus jamasakura, and Prunus lannesiana), from the Tama Forest Cherry Preservation Garden in Hachioji, Japan. Our modeling framework used time-varying (chill and heat units) and time-invariant (slope, aspect, and elevation) factors. We found limited differences among taxa in sensitivity to chill, but earlier flowering taxa, such as P. spachiana, were more sensitive to heat than later flowering taxa, such as P. lannesiana. Using an ensemble of three downscaled regional climate models under the A1B emissions scenario, we projected shifts in flowering timing by 2100. Projections suggest that each taxa will flower about 30 days earlier on average by 2100 with 2-6 days greater uncertainty around the species mean flowering date. Dramatic shifts in the flowering times of cherry trees may have implications for economically important cultural festivals in Japan and East Asia. The survival models used here provide a mechanistic modeling approach and are broadly applicable to any time-to-event phenological data, such as plant leafing, bird arrival time, and insect emergence. The ability to explicitly quantify uncertainty, examine phenological responses on a fine time scale, and incorporate conditions leading up to an event may provide future insight into phenologically driven changes in carbon balance and ecological mismatches of plants and pollinators in natural populations and horticultural crops.
NASA Astrophysics Data System (ADS)
Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md
2015-12-01
Students' understanding of probability concepts has been investigated from various different perspectives. Competency, on the other hand, is often measured separately in the form of a test structure. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW, volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle, where students provided their responses through an online quiz. The study shows that the majority of the students perceived little understanding of conditional and independent events prior to learning about them but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is an indication of some increase in learning and knowledge about some probability concepts at the end of the two weeks of lessons on probability concepts.
NASA Astrophysics Data System (ADS)
Brown, William; Wallin, Bruce; Lesniewski, Daniel; Gooding, David; Martin, James
2006-02-01
It is well known that atmospheric turbulence diminishes the performance of laser communications systems. Among the multiple degradations caused by turbulence is fading and surging of the received signal, usually referred to as scintillation. If a minimum-probability-of-error receiver is employed for on-off keying (OOK), it is necessary to understand the two conditional probability densities (pdfs) corresponding to the transmission of ones and zeros. These probability densities are the distributions of signals received when the laser is on, sending binary ones, and when the laser is off, sending binary zeros. Many theoretical studies have determined the expected forms of the pdfs. An ongoing experimental study operating a low-power, low-data-rate link over a range of 9.3 km has been started at Colorado State University-Pueblo to carefully examine the effects of atmospheric turbulence on laser communications. Experimental models of actual, true and typical pdfs have been obtained. The results do not always match theoretical predictions. The non-stationary nature of these pdfs is also a problem that must be addressed. This paper summarizes the experimental testing and shares a number of its conclusions.
Modeling of Kidney Hemodynamics: Probability-Based Topology of an Arterial Network
Postnov, Dmitry D.; Postnov, Dmitry E.; Braunstein, Thomas H.; Holstein-Rathlou, Niels-Henrik; Sosnovtseva, Olga
2016-01-01
Through regulation of the extracellular fluid volume, the kidneys provide important long-term regulation of blood pressure. At the level of the individual functional unit (the nephron), pressure and flow control involves two different mechanisms that both produce oscillations. The nephrons are arranged in a complex branching structure that delivers blood to each nephron and, at the same time, provides a basis for an interaction between adjacent nephrons. The functional consequences of this interaction are not understood, and at present it is not possible to address this question experimentally. We provide experimental data and a new modeling approach to clarify this problem. To resolve details of microvascular structure, we collected 3D data from more than 150 afferent arterioles in an optically cleared rat kidney. Using these results together with published micro-computed tomography (μCT) data we develop an algorithm for generating the renal arterial network. We then introduce a mathematical model describing blood flow dynamics and nephron to nephron interaction in the network. The model includes an implementation of electrical signal propagation along a vascular wall. Simulation results show that the renal arterial architecture plays an important role in maintaining adequate pressure levels and the self-sustained dynamics of nephrons. PMID:27447287
Keefer, Christopher E; Kauffman, Gregory W; Gupta, Rishi Raj
2013-02-25
A great deal of research has gone into the development of robust confidence in prediction and applicability domain (AD) measures for quantitative structure-activity relationship (QSAR) models in recent years. Much of the attention has historically focused on structural similarity, which can be defined in many forms and flavors. A concept that is frequently overlooked in the realm of the QSAR applicability domain is how the local activity landscape plays a role in how accurate a prediction is or is not. In this work, we describe an approach that pairs information about both the chemical similarity and activity landscape of a test compound's neighborhood into a single calculated confidence value. We also present an approach for converting this value into an interpretable confidence metric that has a simple and informative meaning across data sets. The approach will be introduced to the reader in the context of models built upon four diverse literature data sets. The steps we will outline include the definition of similarity used to determine nearest neighbors (NN), how we incorporate the NN activity landscape with a similarity-weighted root-mean-square distance (wRMSD) value, and how that value is then calibrated to generate an intuitive confidence metric for prospective application. Finally, we will illustrate the prospective performance of the approach on five proprietary models whose predictions and confidence metrics have been tracked for more than a year.
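A schematic version of the similarity-weighted RMSD step is shown below, under the assumption that each neighbor's activity is compared against the test compound's predicted value and weighted by its similarity to the test compound; the paper's exact weighting and calibration to a confidence metric may differ.

```python
import math

def weighted_rmsd(similarities, neighbor_activities, predicted):
    """Similarity-weighted RMSD between a test compound's predicted
    activity and its nearest neighbors' activities. Small values suggest
    a smooth, well-sampled local activity landscape (higher confidence);
    large values flag activity cliffs near the test compound."""
    num = sum(w * (a - predicted) ** 2
              for w, a in zip(similarities, neighbor_activities))
    den = sum(similarities)
    return math.sqrt(num / den)

# Two equally similar neighbors straddling the prediction by +/- 1 unit:
w = weighted_rmsd([1.0, 1.0], [1.0, 3.0], 2.0)  # = 1.0
```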
NASA Astrophysics Data System (ADS)
Mandal, K. G.; Padhi, J.; Kumar, A.; Ghosh, S.; Panda, D. K.; Mohanty, R. K.; Raychaudhuri, M.
2015-08-01
Rainfed agriculture plays and will continue to play a dominant role in providing food and livelihoods for an increasing world population. Rainfall analyses are helpful for proper crop planning under a changing environment in any region. Therefore, in this paper, an attempt has been made to analyse 16 years of rainfall (1995-2010) at the Daspalla region in Odisha, eastern India for prediction using six probability distribution functions, forecasting the probable date of onset and withdrawal of monsoon, and the occurrence of dry spells by using a Markov chain model, and finally crop planning for the region. For prediction of monsoon and post-monsoon rainfall, log Pearson type III and Gumbel distributions were the best-fit probability distribution functions. The earliest and most delayed weeks of the onset of the rainy season were the 20th standard meteorological week (SMW) (14th-20th May) and the 25th SMW (18th-24th June), respectively. Similarly, the earliest and most delayed weeks of withdrawal of rainfall were the 39th SMW (24th-30th September) and the 47th SMW (19th-25th November), respectively. The longest and shortest lengths of the rainy season were 26 and 17 weeks, respectively. The chances of occurrence of dry spells are high from the 1st-22nd SMW and again from the 42nd SMW to the end of the year. The probability of weeks (23rd-40th SMW) remaining wet varies between 62 and 100 % for the region. Results obtained through this analysis would be utilised for agricultural planning and mitigation of dry spells at the Daspalla region in Odisha, India.
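The Markov-chain treatment of dry spells reduces to simple transition-probability algebra. A minimal two-state (wet/dry) sketch is below; the transition probabilities are illustrative, not the Daspalla estimates.

```python
def stationary_dry_probability(p_dd, p_wd):
    """Long-run probability that a week is dry for a two-state Markov
    chain, where p_dd = P(dry | previous week dry) and
    p_wd = P(dry | previous week wet). Solves the balance equation
    pi_d = pi_d * p_dd + (1 - pi_d) * p_wd."""
    return p_wd / (1.0 - p_dd + p_wd)

def dry_spell_probability(p_dd, k):
    """P(a dry spell lasts at least k weeks, given it has started):
    the chain must stay dry for k - 1 further transitions."""
    return p_dd ** (k - 1)

# Illustrative values: dry weeks persist with probability 0.7,
# and a wet week is followed by a dry one with probability 0.3.
pi_dry = stationary_dry_probability(0.7, 0.3)   # = 0.5
p_three_week_spell = dry_spell_probability(0.7, 3)  # = 0.49
```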
People's conditional probability judgments follow probability theory (plus noise).
Costello, Fintan; Watts, Paul
2016-09-01
A common view in current psychology is that people estimate probabilities using various 'heuristics' or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people's conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people's estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or 'direct' probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities.
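The probability-theory-plus-noise mechanism can be illustrated with a toy simulation in which P(A|B) is estimated by frequentist counting over stored event memories, but each binary flag is misread with probability d. This is a sketch in the spirit of the model, not the authors' implementation; the event sample and noise level are made up.

```python
import random

def noisy_conditional_estimate(events, d, rng):
    """Estimate P(A|B) by counting stored instances, where each boolean
    flag is read incorrectly with probability d (random read noise).
    `events` is a list of (a, b) boolean pairs."""
    def read(flag):
        return (not flag) if rng.random() < d else flag
    num = den = 0
    for a, b in events:
        ra, rb = read(a), read(b)
        if rb:
            den += 1
            if ra:
                num += 1
    return num / den if den else 0.0

rng = random.Random(42)
# True P(A|B) = 1.0 in this toy sample; read noise (d = 0.1) pulls the
# estimate toward the middle of the scale, a regressive bias of the
# kind the model is designed to capture.
events = [(True, True)] * 1000
estimate = noisy_conditional_estimate(events, 0.1, rng)
```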
NASA Astrophysics Data System (ADS)
Lopes Cardozo, David; Holdsworth, Peter C. W.
2016-04-01
The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume L_∥^{d-1} × L_⊥ is computed through Monte-Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio ρ = L_⊥/L_∥ and boundary conditions are discussed. In the limiting case ρ → 0 of a macroscopically large slab (L_∥ ≫ L_⊥), the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.
Jacobsen, J L; Saleur, H
2008-02-29
We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L > 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy S_VB = ⟨N_c⟩ ln 2 = (4 ln 2/π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.
Mason, Christine R; Idrobo, Fabio; Early, Susan J; Abibi, Ayome; Zheng, Ling; Harrison, J Michael; Carney, Laurel H
2003-05-01
Experimental studies were performed using a Pavlovian-conditioned eyeblink response to measure detection of a variable-sound-level tone (T) in a fixed-sound-level masking noise (N) in rabbits. Results showed an increase in the asymptotic probability of conditioned responses (CRs) to the reinforced TN trials and a decrease in the asymptotic rate of eyeblink responses to the non-reinforced N presentations as a function of the sound level of the T. These observations are consistent with expected behaviour in an auditory masked detection task, but they are not consistent with predictions from a traditional application of the Rescorla-Wagner or Pearce models of associative learning. To implement these models, one typically considers only the actual stimuli and reinforcement on each trial. We found that by considering perceptual interactions and concepts from signal detection theory, these models could predict the CR dependence on the sound level of the T. In these alternative implementations, the animal's response probabilities were used as a guide in making assumptions about the "effective stimuli".
Chmelevsky, D.; Barclay, D.; Kellerer, A.M.; Tomasek, L.; Kunz, E.; Placek, V.
1994-07-01
The estimates of lung cancer risk due to the exposure to radon decay products are based on different data sets from underground mining and on different mathematical models that are used to fit the data. Diagrams of the excess relative rate per 100 working level months in its dependence on age at exposure and age attained are shown to be a useful tool to elucidate the influence that is due to the choice of the model, and to assess the differences between the data from the major western cohorts and those from the Czech uranium miners. It is seen that the influence of the choice of the model is minor compared to the difference between the data sets. The results are used to derive attributable lifetime risks and probabilities of causation for lung cancer following radon progeny exposures. 23 refs., 9 figs.
Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions
Li, Jun; Yim, Man-Sung; McNelis, David N.
2007-07-01
explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. Nuclear proliferation decisions by a country are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important, as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important, as the development of the technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by a mastery of technical details or overcoming financial constraints. Technology or finance is a necessary condition but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three of the factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistical modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open source literature. (authors)
Binary logistic regression modelling: Measuring the probability of relapse cases among drug addict
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Alias, Siti Nor Shadila
2014-07-01
For many years Malaysia has faced drug addiction issues. The most serious is the relapse phenomenon among treated drug addicts (drug addicts who have undergone the rehabilitation programme at the Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factors that contribute to relapse. Binary logistic regression analysis was employed to model the relationship between the independent variables (predictors) and the dependent variable. The dependent variable is the status of the drug addict: relapse (Yes, coded as 1) or not (No, coded as 0). The predictors involved are age, age at first taking drugs, family history, education level, family crisis, community support and self-motivation. The total sample is 200, with the data provided by AADK (National Antidrug Agency). The findings of the study revealed that age and self-motivation are statistically significant predictors of relapse.
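A hedged sketch of the kind of binary logistic regression used in such studies, fit by plain gradient ascent on synthetic data (the AADK records are not public; the predictor names and coefficients below are invented for illustration):

```python
import math
import random

random.seed(0)

# Synthetic stand-in for the relapse data: outcome 1 = relapse, 0 = no relapse.
# Standardized "age" and "self-motivation" mimic the two significant predictors.
def simulate(n=200):
    rows = []
    for _ in range(n):
        z_age = random.gauss(0, 1)                 # standardized age
        z_mot = random.gauss(0, 1)                 # standardized motivation score
        logit = -0.5 + 0.8 * z_age - 1.5 * z_mot   # assumed "true" coefficients
        y = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
        rows.append((z_age, z_mot, y))
    return rows

data = simulate()

# Maximize the Bernoulli log-likelihood with batch gradient ascent.
beta = [0.0, 0.0, 0.0]                             # intercept, age, motivation
for _ in range(1000):
    grad = [0.0, 0.0, 0.0]
    for z_age, z_mot, y in data:
        x = (1.0, z_age, z_mot)
        p = 1 / (1 + math.exp(-sum(b * v for b, v in zip(beta, x))))
        for j in range(3):
            grad[j] += (y - p) * x[j]              # score of the logit model
    for j in range(3):
        beta[j] += 0.5 * grad[j] / len(data)

print([round(b, 2) for b in beta])                 # signs match the simulation
```

With standardized predictors the fitted signs (positive for age, negative for self-motivation) are recovered reliably; real analyses would also report Wald tests or likelihood-ratio statistics for significance.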
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Kourgialas, Nektarios; Karatzas, George; Giannakis, Georgios; Lilli, Maria; Nikolaidis, Nikolaos
2014-05-01
Riverbank erosion affects the river morphology and the local habitat and results in riparian land loss, damage to property and infrastructures, ultimately weakening flood defences. An important issue concerning riverbank erosion is the identification of the areas vulnerable to erosion, as it allows for predicting changes and assists with stream management and restoration. One way to predict the areas vulnerable to erosion is to determine the erosion probability by identifying the underlying relations between riverbank erosion and the geomorphological and/or hydrological variables that prevent or stimulate erosion. A statistical model for evaluating the probability of erosion, based on a series of independent local variables and using logistic regression, is developed in this work. The main variables affecting erosion are vegetation index (stability), the presence or absence of meanders, bank material (classification), stream power, bank height, river bank slope, riverbed slope, cross section width and water velocities (Luppi et al. 2009). In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable, e.g. a binary response, based on one or more predictor variables (continuous or categorical). The probabilities of the possible outcomes are modelled as a function of independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression model is formed, which predicts success or failure of a given binary variable (e.g. 1 = "presence of erosion" and 0 = "no erosion") for any value of the independent variables. The regression coefficients are estimated by using maximum likelihood estimation. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
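The point about residuals versus raw data can be demonstrated with a toy two-group linear model; the group names and effect sizes below are arbitrary:

```python
import random
import statistics

random.seed(1)

# Two-group "ANOVA" with very different group means: the raw response is
# strongly bimodal (non-normal), yet the residuals are exactly Gaussian.
group_means = {"control": 10.0, "treatment": 30.0}
response, residuals = [], []
for group, mu in group_means.items():
    for _ in range(500):
        eps = random.gauss(0, 2.0)   # normal error term of the linear model
        response.append(mu + eps)
        residuals.append(eps)        # residual = observation - fitted group mean

# Raw data: spread dominated by the 20-unit gap between group means (bimodal).
print(round(statistics.stdev(response), 1))
# Residuals: mean ~0, sd ~2 -- these are what t and F tests assume normal.
print(round(statistics.mean(residuals), 2), round(statistics.stdev(residuals), 2))
```

Testing normality on `response` here would wrongly reject the model, even though every assumption of the linear model is satisfied by construction.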
Modeling speech intelligibility in quiet and noise in listeners with normal and impaired hearing.
Rhebergen, Koenraad S; Lyzenga, Johannes; Dreschler, Wouter A; Festen, Joost M
2010-03-01
The speech intelligibility index (SII) is an often used calculation method for estimating the proportion of audible speech in noise. For speech reception thresholds (SRTs), measured in normally hearing listeners using various types of stationary noise, this model predicts a fairly constant speech proportion of about 0.33, necessary for Dutch sentence intelligibility. However, when the SII model is applied for SRTs in quiet, the estimated speech proportions are often higher, and show a larger inter-subject variability, than found for speech in noise near normal speech levels [65 dB sound pressure level (SPL)]. The present model attempts to alleviate this problem by including cochlear compression. It is based on a loudness model for normally hearing and hearing-impaired listeners of Moore and Glasberg [(2004). Hear. Res. 188, 70-88]. It estimates internal excitation levels for speech and noise and then calculates the proportion of speech above noise and threshold using similar spectral weighting as used in the SII. The present model and the standard SII were used to predict SII values in quiet and in stationary noise for normally hearing and hearing-impaired listeners. The present model predicted SIIs for three listener types (normal hearing, noise-induced, and age-induced hearing loss) with markedly less variability than the standard SII.
NASA Astrophysics Data System (ADS)
Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo
2016-07-01
Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2% and 92.5-98.5% for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They did, however, overestimate the seasonal mean concentration and did not simulate all of the peak concentrations. This issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
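As a sketch of the Weibull building block mentioned above (the shape and scale values are hypothetical, not those fitted to the Korean pollen data):

```python
import math

def weibull_pdf(x, k, lam):
    """Weibull probability density, used here to model the seasonal
    envelope of maximum potential airborne pollen (k, lam assumed)."""
    if x < 0:
        return 0.0
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

# Crude check that the density integrates to ~1 over the season (Riemann sum)
k, lam = 2.0, 30.0   # hypothetical shape and scale (days into the season)
dx = 0.1
total = sum(weibull_pdf(i * dx, k, lam) * dx for i in range(3000))
print(round(total, 2))   # ~1.0
```

In the integrated scheme, a curve of this form caps the daily concentrations produced by the regression component, which is what ties the statistical model to pollen biology.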
Bejaei, M; Wiseman, K; Cheng, K M
2015-01-01
Consumers' interest in specialty eggs appears to be growing in Europe and North America. The objective of this research was to develop logistic regression models that utilise purchaser attributes and demographics to predict the probability of a consumer purchasing a specific type of table egg including regular (white and brown), non-caged (free-run, free-range and organic) or nutrient-enhanced eggs. These purchase prediction models, together with the purchasers' attributes, can be used to assess market opportunities of different egg types specifically in British Columbia (BC). An online survey was used to gather data for the models. A total of 702 completed questionnaires were submitted by BC residents. Selected independent variables included in the logistic regression to develop models for different egg types to predict the probability of a consumer purchasing a specific type of table egg. The variables used in the model accounted for 54% and 49% of variances in the purchase of regular and non-caged eggs, respectively. Research results indicate that consumers of different egg types exhibit a set of unique and statistically significant characteristics and/or demographics. For example, consumers of regular eggs were less educated, older, price sensitive, major chain store buyers, and store flyer users, and had lower awareness about different types of eggs and less concern regarding animal welfare issues. However, most of the non-caged egg consumers were less concerned about price, had higher awareness about different types of table eggs, purchased their eggs from local/organic grocery stores, farm gates or farmers markets, and they were more concerned about care and feeding of hens compared to consumers of other eggs types. PMID:26103791
Predicted probabilities' relationship to inclusion probabilities.
Fang, Di; Chong, Jenny; Wilson, Jeffrey R
2015-05-01
It has been shown that under a general multiplicative intercept model for risk, case-control (retrospective) data can be analyzed by maximum likelihood as if they had arisen prospectively, up to an unknown multiplicative constant that depends on the relative sampling fraction.(1) With suitable auxiliary information, retrospective data can also be used to estimate response probabilities.(2) In other words, predicted probabilities obtained without adjustment from retrospective data will likely differ from those obtained from prospective data. We highlighted this using binary data from Medicare to determine the probability of readmission into the hospital within 30 days of discharge, which is particularly timely because Medicare has begun penalizing hospitals for certain readmissions.(3)
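One standard way to carry out the adjustment the abstract alludes to, not necessarily the authors' exact procedure, is to rescale the predicted odds by the relative sampling fractions of cases and controls:

```python
def adjust_to_population(p_cc, sample_frac_cases, sample_frac_controls):
    """Rescale a predicted probability fitted on case-control data to the
    population scale by multiplying the odds by the inverse of the
    relative sampling fraction (the multiplicative-intercept argument)."""
    odds = p_cc / (1 - p_cc)
    odds *= sample_frac_controls / sample_frac_cases
    return odds / (1 + odds)

# E.g. all cases sampled but only 10% of controls: a model-predicted
# readmission probability of 0.5 corresponds to ~0.09 in the population.
print(round(adjust_to_population(0.5, 1.0, 0.1), 2))
```

Only the intercept is affected by the retrospective sampling; the slope coefficients, and hence relative risks, are the same either way, which is exactly why unadjusted predicted probabilities mislead while odds ratios do not.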
Blood Vessel Normalization in the Hamster Oral Cancer Model for Experimental Cancer Therapy Studies
Ana J. Molinari; Romina F. Aromando; Maria E. Itoiz; Marcela A. Garabalino; Andrea Monti Hughes; Elisa M. Heber; Emiliano C. C. Pozzi; David W. Nigg; Veronica A. Trivillin; Amanda E. Schwint
2012-07-01
Normalization of tumor blood vessels improves drug and oxygen delivery to cancer cells. The aim of this study was to develop a technique to normalize blood vessels in the hamster cheek pouch model of oral cancer. Materials and Methods: Tumor-bearing hamsters were treated with thalidomide and were compared with controls. Results: Twenty eight hours after treatment with thalidomide, the blood vessels of premalignant tissue observable in vivo became narrower and less tortuous than those of controls; Evans Blue Dye extravasation in tumor was significantly reduced (indicating a reduction in aberrant tumor vascular hyperpermeability that compromises blood flow), and tumor blood vessel morphology in histological sections, labeled for Factor VIII, revealed a significant reduction in compressive forces. These findings indicated blood vessel normalization with a window of 48 h. Conclusion: The technique developed herein has rendered the hamster oral cancer model amenable to research, with the potential benefit of vascular normalization in head and neck cancer therapy.
Modelling the Shear Behaviour of Rock Joints with Asperity Damage Under Constant Normal Stiffness
NASA Astrophysics Data System (ADS)
Indraratna, Buddhima; Thirukumaran, Sivanathan; Brown, E. T.; Zhu, Song-Ping
2015-01-01
The shear behaviour of a rough rock joint depends largely on the surface properties of the joint, as well as the boundary conditions applied across the joint interface. This paper proposes a new analytical model to describe the complete shear behaviour of rough joints under constant normal stiffness (CNS) boundary conditions by incorporating the effect of damage to asperities. In particular, the effects of initial normal stress levels and joint surface roughness on the shear behaviour of joints under CNS conditions were studied, and the analytical model was validated through experimental results. Finally, the practical application of the model to a jointed rock slope stability analysis is presented.
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
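A generic renewal-model conditional probability, here with a lognormal recurrence distribution as a stand-in (the paper's methodology uses along-fault averaged renewal parameters and magnitude-dependent aperiodicity, which this sketch omits):

```python
import math

def lognormal_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-interval distribution."""
    return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def conditional_prob(t_elapsed, dt, mu, sigma):
    """P(rupture in the next dt years | quiescent for t_elapsed years)
    under a renewal model: (F(t+dt) - F(t)) / (1 - F(t))."""
    F = lognormal_cdf
    return (F(t_elapsed + dt, mu, sigma) - F(t_elapsed, mu, sigma)) / (
        1 - F(t_elapsed, mu, sigma))

# Illustrative values: median recurrence ~150 yr, moderate aperiodicity
mu, sigma = math.log(150.0), 0.5
early = conditional_prob(50.0, 30.0, mu, sigma)
late = conditional_prob(200.0, 30.0, mu, sigma)
print(round(early, 3), round(late, 3))   # hazard grows with elapsed time
```

The elastic-rebound intuition is precisely this growth of conditional probability with time since the last event, which a time-independent Poisson model cannot express.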
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-01
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
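The binomial idea underlying such scoring can be sketched as a tail probability; ProVerB's actual scoring function is more elaborate, and the numbers below are illustrative:

```python
import math

def match_survival(n_peaks, k_matched, p_random):
    """P(at least k of n peaks match by chance) under a binomial model --
    the general idea behind binomial-probability peptide scoring."""
    return sum(
        math.comb(n_peaks, i) * p_random**i * (1 - p_random) ** (n_peaks - i)
        for i in range(k_matched, n_peaks + 1))

# A spectrum with 40 peaks, 12 matching theoretical fragments, and a 5%
# chance of a random peak match: the smaller this tail probability, the
# more confident the identification (often reported as -10*log10 of it).
score = match_survival(40, 12, 0.05)
print(f"{score:.1e}")
```

Intensity-aware scores like ProVerB's refine this by weighting matches, so that an identification supported only by low-intensity noise peaks scores worse than one supported by the dominant fragments.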
NASA Astrophysics Data System (ADS)
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclical loading. A multi-feature integration method is developed to quantify the crack size using signal features of correlation coefficient, amplitude change, and phase change. In addition, probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufactures and under different loading conditions.
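A common parametric form for a POD curve is the MIL-HDBK-1823 log-normal model; the abstract does not state which form the authors used, and the parameters here are illustrative:

```python
import math

def pod(a, mu, sigma):
    """Probability of detection for crack size a:
    POD(a) = Phi((ln a - mu)/sigma), a standard log-normal POD form."""
    return 0.5 * (1 + math.erf((math.log(a) - mu) / (sigma * math.sqrt(2))))

mu, sigma = math.log(1.0), 0.4       # hypothetical location/scale (crack size units)
# a90: the crack size detected with 90% probability
a90 = math.exp(mu + 1.2816 * sigma)  # 1.2816 = 90th percentile of N(0,1)
print(round(pod(a90, mu, sigma), 2), round(a90, 2))
```

Quantities like a90 (or a90/95, with a confidence bound) are what feed into the probabilistic fatigue-life predictions mentioned above: the POD curve converts a sensor-based sizing method into a detection reliability statement.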
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Popov, Pavel; Hiremath, Varun; Lantz, Steven; Viswanathan, Sharadha; Pope, Stephen
2010-11-01
A large-eddy simulation (LES)/probability density function (PDF) code is developed and applied to the study of local extinction and re-ignition in Sandia Flame E. The modified Curl mixing model is used to account for the sub-filter scalar mixing; the ARM1 mechanism is used for the chemical reaction; and the in-situ adaptive tabulation (ISAT) algorithm is used to accelerate the chemistry calculations. Calculations are performed on different grids to study the resolution requirement for this flame. Then, with sufficient grid resolution, full-scale LES/PDF calculations are performed to study the flame characteristics and the turbulence-chemistry interactions. Sensitivity to the mixing frequency model is explored in order to understand the behavior of sub-filter scalar mixing in the context of LES. The simulation results are compared to the experimental data to demonstrate the capability of the code. Comparison is also made to previous RANS/PDF simulations.
NASA Astrophysics Data System (ADS)
Koch, J.; Nowak, W.
2015-02-01
Improper storage and disposal of nonaqueous-phase liquids (NAPLs) has resulted in widespread contamination of the subsurface, threatening the quality of groundwater as a freshwater resource. The high frequency of contaminated sites and the difficulties of remediation efforts demand rational decisions based on a sound risk assessment. Due to sparse data and natural heterogeneities, this risk assessment needs to be supported by appropriate predictive models with quantified uncertainty. This study proposes a physically and stochastically coherent model concept to simulate and predict crucial impact metrics for DNAPL contaminated sites, such as contaminant mass discharge and DNAPL source longevity. To this end, aquifer parameters and the contaminant source architecture are conceptualized as random space functions. The governing processes are simulated in a three-dimensional, highly resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. While it is not possible to determine whether the presented model framework is sufficiently complex or not, we can investigate whether and to which degree the desired model predictions are sensitive to simplifications often found in the literature. By testing four commonly made simplifications, we identified aquifer heterogeneity, groundwater flow irregularity, uncertain and physically based contaminant source zones, and their mutual interlinkages as indispensable components of a sound model framework.
NASA Astrophysics Data System (ADS)
Chanrion, M.-A.; Sauerwein, W.; Jelen, U.; Wittig, A.; Engenhart-Cabillic, R.; Beuve, M.
2014-06-01
In carbon ion beams, biological effects vary along the ion track; hence, to quantify them, specific radiobiological models are needed. One of them, the local effect model (LEM), in particular version I (LEM I), is implemented in treatment planning systems (TPS) clinically used in European particle therapy centers. From the physical properties of the specific ion radiation, the LEM calculates the survival probabilities of the cell or tissue type under study, provided that some determinant input parameters are initially defined. Mathematical models can be used to predict, for instance, the tumor control probability (TCP), and then evaluate treatment outcomes. This work studies the influence of the LEM I input parameters on the TCP predictions in the specific case of prostate cancer. Several published input parameters and their combinations were tested. Their influence on the dose distributions calculated for a water phantom and for a patient geometry was evaluated using the TPS TRiP98. Changing input parameters induced clinically significant modifications of the mean dose (up to a factor of 3.5), spatial dose distribution, and TCP predictions (up to a factor of 2.6 for D50). TCP predictions were found to be more sensitive to the parameter threshold dose (Dt) than to the biological parameters α and β. Additionally, an analytical expression was derived for correlating α, β and Dt, and this has emphasized the importance of D_t/(α/β). The improvement of radiobiological models for particle TPS will only be achieved when more patient outcome data with well-defined patient groups, fractionation schemes and well-defined end-points are available.
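For context, a minimal Poisson TCP calculation with linear-quadratic survival; this sketch ignores the LEM's threshold dose Dt and fractionation, and the parameter values are hypothetical rather than the paper's LEM I inputs:

```python
import math

def survival(dose, alpha, beta):
    """Linear-quadratic cell survival for a uniform single-fraction dose."""
    return math.exp(-(alpha * dose + beta * dose**2))

def tcp(dose, n_cells, alpha, beta):
    """Poisson tumour control probability: P(no surviving clonogen)."""
    return math.exp(-n_cells * survival(dose, alpha, beta))

# Hypothetical parameters (alpha in 1/Gy, beta in 1/Gy^2, clonogen count)
alpha, beta, n0 = 0.15, 0.05, 1e7
low, high = tcp(10.0, n0, alpha, beta), tcp(20.0, n0, alpha, beta)
print(round(low, 3), round(high, 3))   # TCP rises steeply with dose
```

Because TCP sits at the end of this exponential chain, modest changes in the survival-model inputs (as with the LEM parameters above) translate into large shifts of the TCP curve, which is the sensitivity the study quantifies.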
Tang, An-Min; Tang, Nian-Sheng
2015-02-28
We propose a semiparametric multivariate skew-normal joint model for multivariate longitudinal and multivariate survival data. One main feature of the posited model is that we relax the commonly used normality assumption for random effects and within-subject error by using a centered Dirichlet process prior to specify the random effects distribution and using a multivariate skew-normal distribution to specify the within-subject error distribution and model trajectory functions of longitudinal responses semiparametrically. A Bayesian approach is proposed to simultaneously obtain Bayesian estimates of unknown parameters, random effects and nonparametric functions by combining the Gibbs sampler and the Metropolis-Hastings algorithm. Particularly, a Bayesian local influence approach is developed to assess the effect of minor perturbations to within-subject measurement error and random effects. Several simulation studies and an example are presented to illustrate the proposed methodologies. PMID:25404574
NASA Astrophysics Data System (ADS)
Gill, Wonpyong
2016-01-01
This study calculated the growing probability of additional offspring with the advantageous reversal allele in an asymmetric sharply-peaked landscape using the decoupled continuous-time mutation-selection model. The growing probability was calculated for various population sizes N, sequence lengths L, selective advantages s, fitness parameters k, and measuring parameters C. The saturated growing probability in the stochastic region was approximately the effective selective advantage, s*, when C ≫ 1/(Ns*) and s* ≪ 1. The present study suggests that the growing probability in the stochastic region in the decoupled continuous-time mutation-selection model can be described using the theoretical formula for the growing probability in the Moran two-allele model. The selective advantage ratio, which represents the ratio of the effective selective advantage to the selective advantage, does not depend on the population size, selective advantage, measuring parameter or fitness parameter; instead, the selective advantage ratio decreases with increasing sequence length.
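The Moran-model growing (fixation) probability invoked above can be computed directly; here r = 1 + s is the mutant's relative fitness, and the numerical values are illustrative:

```python
def moran_fixation_prob(s, N):
    """Fixation probability of a single mutant with relative fitness
    r = 1 + s in the Moran two-allele model of population size N."""
    r = 1.0 + s
    if s == 0:
        return 1.0 / N                 # neutral limit
    return (1 - 1 / r) / (1 - 1 / r**N)

# For Ns >> 1 the probability saturates near s (more precisely s/(1+s)),
# mirroring the saturated growing probability described in the abstract.
print(round(moran_fixation_prob(0.05, 1000), 4))   # ≈ 0.0476
```

Replacing s by the effective selective advantage s* in this formula is the substitution the study uses to carry the two-allele result over to the sequence-based model.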
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
NASA Astrophysics Data System (ADS)
Gomez, Thomas A.; Winget, Donald E.; Montgomery, Michael H.; Kilcrease, Dave; Nagayama, Taisuke
2016-01-01
White dwarfs are interesting for a number of applications, including studying equations of state, stellar pulsations, and determining the age of the universe. These applications require accurate determination of surface conditions: temperature and surface gravity (or mass). The most common technique to estimate the temperature and gravity is to find the model spectrum that best fits the observed spectrum of a star (known as the spectroscopic method); however, this method rests on our ability to accurately model the hydrogen spectrum at high densities. There are currently disagreements between the spectroscopic method and other techniques to determine mass. We seek to resolve this issue by exploring the continuum lowering (or disappearance of states) of the hydrogen atom. The current formalism, called "occupation probability," defines some criteria for the isolated atom's bound state to be ionized, then extrapolates the continuous spectrum to the same energy threshold; the two are then combined to create the final cross-section. I introduce a new way of calculating the atomic spectrum by averaging the plasma interaction potential energy (previously used in the physics community) and directly integrating the Schrodinger equation. This technique is a major improvement over the Taylor expansion used to describe the ion-emitter interaction: it removes the need for the occupation probability and treats continuum states and discrete states on the same footing in the spectrum calculation. The resulting energy spectrum is in fact many discrete states that, when averaged over the electric field distribution in the plasma, appear to be a continuum. In the low-density limit, the two methods are in agreement, but they show some differences at high densities (above 10^17 e/cc), including line shifts near the "continuum" edge.
Ivanek, R.; Gröhn, Y. T.; Wells, M. T.; Lembo, A. J.; Sauders, B. D.; Wiedmann, M.
2009-01-01
Many pathogens have the ability to survive and multiply in abiotic environments, representing a possible reservoir and source of human and animal exposure. Our objective was to develop a methodological framework to study spatially explicit environmental and meteorological factors affecting the probability of pathogen isolation from a location. Isolation of Listeria spp. from the natural environment was used as a model system. Logistic regression and classification tree methods were applied, and their predictive performances were compared. Analyses revealed that precipitation and occurrence of alternating freezing and thawing temperatures prior to sample collection, loam soil, water storage to a soil depth of 50 cm, slope gradient, and cardinal direction to the north are key predictors for isolation of Listeria spp. from a spatial location. Different combinations of factors affected the probability of isolation of Listeria spp. from the soil, vegetation, and water layers of a location, indicating that the three layers represent different ecological niches for Listeria spp. The predictive power of classification trees was comparable to that of logistic regression. However, the former were easier to interpret, making them more appealing for field applications. Our study demonstrates how the analysis of a pathogen's spatial distribution improves understanding of the predictors of the pathogen's presence in a particular location and could be used to propose novel control strategies to reduce human and animal environmental exposure. PMID:19648372
Pailos, Eliseo; Bará, Salvador
2014-06-01
This Letter studies the statistics of wavefront aberrations in a sample of eyes with normal vision. Methods relying on the statistics of the measured wavefront slopes are used, not including the aberration estimation stage. Power-law aberration models, an extension of the Kolmogorov one, are rejected by χ2-tests performed on fits to the slope structure function data. This is due to the large weight of defocus and astigmatism variations in normal eyes. Models of only second-order changes are not ruled out. The results are compared with previous works in the area.
Probability distributions for magnetotellurics
Stodt, John A.
1982-11-01
Estimates of the magnetotelluric transfer functions can be viewed as ratios of two complex random variables. It is assumed that the numerator and denominator are governed approximately by a joint complex normal distribution. Under this assumption, probability distributions are obtained for the magnitude, squared magnitude, logarithm of the squared magnitude, and the phase of the estimates. Normal approximations to the distributions are obtained by calculating mean values and variances from error propagation, and the distributions are plotted with their normal approximations for different percentage errors in the numerator and denominator of the estimates, ranging from 10% to 75%. The distribution of the phase is approximated well by a normal distribution for the range of errors considered, while the distribution of the logarithm of the squared magnitude is approximated by a normal distribution for a much larger range of errors than is the distribution of the squared magnitude. The distribution of the squared magnitude is most sensitive to the presence of noise in the denominator of the estimate, in which case the true distribution deviates significantly from normal behavior as the percentage errors exceed 10%. In contrast, the normal approximation to the distribution of the logarithm of the magnitude is useful for errors as large as 75%.
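The distributions described above can be explored numerically: treating the estimate as a ratio of two complex normals, a short Monte Carlo reproduces the behavior of the phase for a given percentage error. A hedged sketch (the signal values and isotropic noise model are chosen for illustration only, not taken from the paper):

```python
import cmath
import random

def ratio_phase_samples(z_num, z_den, noise_frac, n=20000, seed=1):
    """Monte Carlo samples of the phase of a ratio of two complex normals,
    as in a magnetotelluric transfer-function estimate.  noise_frac is the
    per-component noise standard deviation as a fraction of the signal
    magnitude (the 'percentage error' of the abstract)."""
    rng = random.Random(seed)
    phases = []
    for _ in range(n):
        num = z_num + noise_frac * abs(z_num) * complex(rng.gauss(0, 1), rng.gauss(0, 1))
        den = z_den + noise_frac * abs(z_den) * complex(rng.gauss(0, 1), rng.gauss(0, 1))
        phases.append(cmath.phase(num / den))
    return phases

# True transfer function (1+1j)/1: true phase = pi/4 ~ 0.785 rad.
phases = ratio_phase_samples(1 + 1j, 1 + 0j, 0.10)
mean_phase = sum(phases) / len(phases)
```

With 10% errors the sampled phases cluster tightly and symmetrically around the true value, consistent with the abstract's finding that the phase distribution is well approximated by a normal over the error range considered.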
Lee, Tsair-Fwu; Lin, Wei-Chun; Wang, Hung-Yu; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Ting, Hui-Min; Chao, Pei-Ju
2015-01-01
To develop logistic and probit models analysing the electromyographic (EMG) equivalent uniform voltage (EUV) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, the surface EMG (sEMG) signal was obtained by an innovative device with electrodes over the forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow, reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3-169.7 mV), γ50 = 0.84 (CI: 0.78-0.90) for the logistic model and TV50 = 155.6 mV (CI: 138.9-172.4 mV), m = 0.54 (CI: 0.49-0.59) for the probit model. When the EUV ≥ 153 mV, the DP of the patient is greater than 50%, and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281
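With the fitted parameters reported above, the two dose-response curves can be evaluated directly. The functional forms below are common parameterizations of logistic and probit response models (the paper's exact forms may differ slightly), so treat this as an illustrative sketch:

```python
from math import erf, sqrt

def dp_logistic(euv, tv50=153.0, gamma50=0.84):
    """Logistic diseased-probability curve:
    DP = 1 / (1 + (TV50/EUV)**(4*gamma50))."""
    return 1.0 / (1.0 + (tv50 / euv) ** (4.0 * gamma50))

def dp_probit(euv, tv50=155.6, m=0.54):
    """Probit (Lyman-type) curve: DP = Phi((EUV - TV50) / (m * TV50))."""
    t = (euv - tv50) / (m * tv50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# At EUV = TV50 both curves cross 50% disease probability, matching the
# abstract's statement that DP > 50% when EUV >= 153 mV.
```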
Gronewold, Andrew D; Wolpert, Robert L
2008-07-01
Most probable number (MPN) and colony-forming-unit (CFU) estimates of fecal coliform bacteria concentration are common measures of water quality in coastal shellfish harvesting and recreational waters. Estimating procedures for MPN and CFU have intrinsic variability and are subject to additional uncertainty arising from minor variations in experimental protocol. It has been observed empirically that the standard multiple-tube fermentation (MTF) decimal dilution analysis MPN procedure is more variable than the membrane filtration CFU procedure, and that MTF-derived MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the variability in, and discrepancy between, MPN and CFU measurements. We then compare our model to water quality samples analyzed using both MPN and CFU procedures, and find that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our results indicate that MPN and CFU intra-sample variability does not stem from human error or laboratory procedure variability, but is instead a simple consequence of the probabilistic basis for calculating the MPN. These results demonstrate how probabilistic models can be used to compare samples from different analytical procedures, and to determine whether transitions from one procedure to another are likely to cause a change in quality-based management decisions.
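The probabilistic basis for the MPN referred to above is the Poisson dilution model: a tube inoculated with volume v is positive with probability 1 − exp(−c·v). A dependency-free maximum-likelihood sketch (the dilution series and tube counts are illustrative, not data from the paper):

```python
from math import exp, log

def mpn_log_likelihood(conc, dilutions):
    """Log-likelihood of a concentration (organisms/mL) under the Poisson
    dilution model: a tube receiving volume v mL is positive with
    probability 1 - exp(-conc * v).  `dilutions` lists
    (volume_mL, n_tubes, n_positive) per dilution level."""
    ll = 0.0
    for vol, n, pos in dilutions:
        q = exp(-conc * vol)             # P(tube sterile)
        if pos > 0:
            ll += pos * log(1.0 - q)
        ll += (n - pos) * (-conc * vol)  # (n - pos) * log(q), computed exactly
    return ll

def mpn_estimate(dilutions, lo=1e-3, hi=1e3, steps=20000):
    """Crude dependency-free MLE: search a log-spaced concentration grid."""
    best_c, best_ll = lo, float("-inf")
    for i in range(steps):
        c = lo * (hi / lo) ** (i / (steps - 1))
        ll = mpn_log_likelihood(c, dilutions)
        if ll > best_ll:
            best_c, best_ll = c, ll
    return best_c

# Classic MTF series: 10, 1 and 0.1 mL portions, 5 tubes each,
# with 5, 3 and 1 positive tubes respectively.
mpn = mpn_estimate([(10.0, 5, 5), (1.0, 5, 3), (0.1, 5, 1)])
```

For this 5-3-1 outcome the MLE lands near 1.1 organisms/mL, in line with the standard MPN tables for that combination; the breadth of the likelihood around the maximum is exactly the intrinsic MPN variability the abstract describes.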
Bandyopadhyay, Dipankar; Lachos, Victor H.; Castro, Luis M.; Dey, Dipak K.
2012-01-01
Often in biomedical studies, the routine use of linear mixed-effects models (based on Gaussian assumptions) can be questionable when the longitudinal responses are skewed in nature. Skew-normal/elliptical models are widely used in those situations. Often, those skewed responses might also be subjected to some upper and lower quantification limits (viz. longitudinal viral load measures in HIV studies), beyond which they are not measurable. In this paper, we develop a Bayesian analysis of censored linear mixed models replacing the Gaussian assumptions with skew-normal/independent (SNI) distributions. The SNI is an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, the skew-t, skew-slash and the skew-contaminated normal distributions as special cases. The proposed model provides flexibility in capturing the effects of skewness and heavy tail for responses which are either left- or right-censored. For our analysis, we adopt a Bayesian framework and develop a MCMC algorithm to carry out the posterior analyses. The marginal likelihood is tractable, and utilized to compute not only some Bayesian model selection measures but also case-deletion influence diagnostics based on the Kullback-Leibler divergence. The newly developed procedures are illustrated with a simulation study as well as a HIV case study involving analysis of longitudinal viral loads. PMID:22685005
Ganjali, Mojtaba; Baghfalaki, Taban; Berridge, Damon
2015-01-01
In this paper, the problem of identifying differentially expressed genes under different conditions using gene expression microarray data, in the presence of outliers, is discussed. For this purpose, the robust modeling of gene expression data using some powerful distributions known as normal/independent distributions is considered. These distributions include the Student's t and normal distributions which have been used previously, but also include extensions such as the slash, the contaminated normal and the Laplace distributions. The purpose of this paper is to identify differentially expressed genes by considering these distributional assumptions instead of the normal distribution. A Bayesian approach using the Markov Chain Monte Carlo method is adopted for parameter estimation. Two publicly available gene expression data sets are analyzed using the proposed approach. The use of the robust models for detecting differentially expressed genes is investigated. This investigation shows that the choice of model for differentiating gene expression data is very important. This is due to the small number of replicates for each gene and the existence of outlying data. Comparison of the performance of these models is made using different statistical criteria and the ROC curve. The method is illustrated using some simulation studies. We demonstrate the flexibility of these robust models in identifying differentially expressed genes. PMID:25910040
Modeling absolute differences in life expectancy with a censored skew-normal regression approach
Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
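For readers unfamiliar with the distribution underlying this approach, the Azzalini skew-normal density used in such regressions has a simple closed form; a negative shape parameter gives the left skew typical of time-to-death data. A minimal sketch (parameter values are illustrative; the paper's censoring and left-truncation handling is omitted):

```python
from math import erf, exp, pi, sqrt

def skew_normal_pdf(x, loc=0.0, scale=1.0, shape=0.0):
    """Azzalini skew-normal density f(x) = (2/scale) * phi(z) * Phi(shape*z),
    with z = (x - loc)/scale.  shape < 0 gives left skew; shape = 0
    recovers the normal density."""
    z = (x - loc) / scale
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)        # standard normal pdf
    Phi = 0.5 * (1.0 + erf(shape * z / sqrt(2.0)))  # standard normal cdf
    return 2.0 * phi * Phi / scale

# A left-skewed density (shape = -3) puts more mass below the location:
left_tail = skew_normal_pdf(-1.0, shape=-3.0)
right_tail = skew_normal_pdf(1.0, shape=-3.0)
```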
López, E; Ibarz, E; Herrera, A; Puértolas, S; Gabarre, S; Más, Y; Mateo, J; Gil-Albarova, J; Gracia, L
2016-07-01
Osteoporotic vertebral fractures represent a major cause of disability, loss of quality of life and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture from bone mineral density (BMD) measurements. A previously developed model, based on the Damage and Fracture Mechanics, was applied for the evaluation of the mechanical magnitudes involved in the fracture process from clinical BMD measurements. BMD evolution in untreated patients and in patients with seven different treatments was analyzed from clinical studies in order to compare the variation in the risk of fracture. The predictive model was applied in a finite element simulation of the whole lumbar spine, obtaining detailed maps of damage and fracture probability, identifying high-risk local zones at vertebral body. For every vertebra, strontium ranelate exhibits the highest decrease, whereas minimum decrease is achieved with oral ibandronate. All the treatments manifest similar trends for every vertebra. Conversely, for the natural BMD evolution, as bone stiffness decreases, the mechanical damage and fracture probability show a significant increase (as it occurs in the natural history of BMD). Vertebral walls and external areas of vertebral end plates are the zones at greatest risk, in coincidence with the typical locations of osteoporotic fractures, characterized by a vertebral crushing due to the collapse of vertebral walls. This methodology could be applied for an individual patient, in order to obtain the trends corresponding to different treatments, in identifying at-risk individuals in early stages of osteoporosis and might be helpful for treatment decisions. PMID:27265047
NASA Astrophysics Data System (ADS)
Kaur, Arshdeep; Chopra, Sahila; Gupta, Raj K.
2014-08-01
The compound nucleus (CN) fusion/formation probability P_CN is defined and its detailed variations with the CN excitation energy E*, center-of-mass energy E_c.m., fissility parameter χ, CN mass number A_CN, and Coulomb interaction parameter Z_1Z_2 are studied for the first time within the dynamical cluster-decay model (DCM). The model is a nonstatistical description of the decay of a CN to all possible processes. The (total) fusion cross section σ_fusion is the sum of the CN and noncompound nucleus (nCN) decay cross sections, each calculated as the dynamical fragmentation process. The CN cross section σ_CN is constituted of evaporation residues and fusion-fission, including intermediate-mass fragments, each calculated for all contributing decay fragments (A_1, A_2) in terms of their formation and barrier penetration probabilities P_0 and P. The nCN cross section σ_nCN is determined as the quasi-fission (qf) process, where P_0 = 1 and P is calculated for the entrance-channel nuclei. The DCM, with effects of deformations and orientations of nuclei included in it, is used to study the P_CN for about a dozen "hot" fusion reactions forming a CN of mass number A ≈ 100 to superheavy nuclei and for various different nuclear interaction potentials. Interesting results are that P_CN = 1 for complete fusion, but P_CN < 1 or P_CN ≪ 1 due to the nCN contribution, depending strongly on different parameters of the entrance-channel reaction but found to be independent of the nuclear interaction potentials used.
May, Carl; Finch, Tracy; Mair, Frances; Ballini, Luciana; Dowrick, Christopher; Eccles, Martin; Gask, Linda; MacFarlane, Anne; Murray, Elizabeth; Rapley, Tim; Rogers, Anne; Treweek, Shaun; Wallace, Paul; Anderson, George; Burns, Jo; Heaven, Ben
2007-01-01
Background The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods A formal theory structure is used to define the model, and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration). Conclusion The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model. PMID:17880693
Ding, Tian; Yu, Yan-Yan; Hwang, Cheng-An; Dong, Qing-Li; Chen, Shi-Guo; Ye, Xing-Qian; Liu, Dong-Hong
2016-01-01
The objectives of this study were to develop a probability model of Staphylococcus aureus enterotoxin A (SEA) production as affected by water activity (a_w), pH, and temperature in broth and assess its applicability for milk. The probability of SEA production was assessed in tryptic soy broth using 24 combinations of a_w (0.86 to 0.99), pH (5.0 to 7.0), and storage temperature (10 to 30°C). The observed probabilities were fitted with a logistic regression to develop a probability model. The model had a concordant value of 97.5% and concordant index of 0.98, indicating that the model satisfactorily describes the probability of SEA production. The model showed that a_w, pH, and temperature were significant factors affecting the probability of toxin production. The model predictions were in good agreement with the observed values obtained from milk. The model may help manufacturers in selecting product pH and a_w and storage temperatures to prevent SEA production. PMID:26735042
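A logistic probability model of the kind described above has the generic form logit(P) = b0 + b1·a_w + b2·pH + b3·T. The sketch below evaluates such a model; the coefficients are hypothetical placeholders chosen only to show the shape of the calculation, not the study's fitted values:

```python
from math import exp

def sea_production_probability(aw, pH, temp_c, coef=(-230.0, 220.0, 1.5, 0.30)):
    """Generic predictive-microbiology logistic model:
    logit(P) = b0 + b1*a_w + b2*pH + b3*T.
    NOTE: the default coefficients are hypothetical placeholders for
    illustration; the study's fitted values are not reproduced here."""
    b0, b1, b2, b3 = coef
    logit = b0 + b1 * aw + b2 * pH + b3 * temp_c
    return 1.0 / (1.0 + exp(-logit))

# Probability rises with a_w, pH and temperature, as in the fitted model:
p_high = sea_production_probability(0.99, 7.0, 30.0)
p_low = sea_production_probability(0.86, 5.0, 10.0)
```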
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_{j-1} + γ_{j+1}. A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
How To Generate Non-normal Data for Simulation of Structural Equation Models.
ERIC Educational Resources Information Center
Mattson, Stefan
1997-01-01
A procedure is proposed to generate non-normal data for simulation of structural equation models. The procedure uses a simple transformation of univariate random variables for the generation of data on latent and error variables under some restrictions for the elements of the covariance matrices for these variables. (SLD)
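The idea of transforming univariate normal variables into non-normal variates can be illustrated with the simplest such transformation; the paper's own procedure is more general (it also controls the covariance structure of the latent and error variables), so the sketch below shows only the univariate building block:

```python
import random

def skewed_unit_variate(rng):
    """A mean-0, variance-1 but positively skewed variate obtained by
    transforming a standard normal: (Z**2 - 1) / sqrt(2).
    Z**2 is chi-square with 1 d.f. (mean 1, variance 2, skewness sqrt(8))."""
    z = rng.gauss(0.0, 1.0)
    return (z * z - 1.0) / 2.0 ** 0.5

rng = random.Random(42)
sample = [skewed_unit_variate(rng) for _ in range(100000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
# Third central moment ~ skewness here, since var ~ 1; theory gives sqrt(8).
skew = sum((x - mean) ** 3 for x in sample) / len(sample)
```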
ERIC Educational Resources Information Center
Bulcock, J. W.; And Others
Advantages of normalization regression estimation over ridge regression estimation are demonstrated by reference to Bloom's model of school learning. Theoretical concern centered on the structure of scholastic achievement at grade 10 in Canadian high schools. Data on 886 students were randomly sampled from the Carnegie Human Resources Data Bank.…
ERIC Educational Resources Information Center
MacIntosh, Randall
1997-01-01
Presents KANT, a FORTRAN 77 software program that tests assumptions of multivariate normality in a data set. Based on the test developed by M. V. Mardia (1985), the KANT program is useful for those engaged in structural equation modeling with latent variables. (SLD)
NASA Astrophysics Data System (ADS)
Tai, An; Liu, Feng; Gore, Elizabeth; Li, X. Allen
2016-05-01
We report a modeling study of tumor response after stereotactic body radiation therapy (SBRT) for early-stage non-small-cell lung carcinoma using published clinical data with a regrowth model. A linear-quadratic inspired regrowth model was proposed to analyze the tumor control probability (TCP) based on a series of published data of SBRT, in which a tumor is controlled for an individual patient if the number of tumor cells is smaller than a critical value K_cr. The regrowth model contains radiobiological parameters such as α, α/β, and the potential doubling time T_p. This model also takes into account the heterogeneity of tumors and tumor regrowth after radiation treatment. The model was first used to fit TCP data from a single institution. The extracted fitting parameters were then used to predict the TCP data from another institution with a similar dose fractionation scheme. Finally, the model was used to fit the pooled TCP data selected from 48 publications available in the literature at the time when this manuscript was written. Excellent agreement between model predictions and single-institution data was found, and the extracted radiobiological parameters were α = 0.010 ± 0.001 Gy^-1, α/β = 21.5 ± 1.0 Gy, and T_p = 133.4 ± 7.6 d. These parameters were α = 0.072 ± 0.006 Gy^-1, α/β = 15.9 ± 1.0 Gy, and T_p = 85.6 ± 24.7 d when extracted from multi-institution data. This study shows that TCP saturates at a BED of around 120 Gy. A few new dose-fractionation schemes were proposed based on the extracted model parameters from multi-institution data. It is found that the regrowth model with an α/β around 16 Gy can be used to predict the dose response of lung tumors treated with SBRT. The extracted radiobiological parameters may be useful for comparing clinical outcome data of various SBRT trials and for designing new treatment regimens.
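The clonogen bookkeeping behind such a model (linear-quadratic kill plus exponential regrowth, with control declared when the count falls below K_cr) can be sketched as follows. This is a generic LQ-regrowth illustration using the multi-institution parameter values quoted in the abstract; the initial cell number, treatment time, and regrowth kick-off time are assumptions, not the authors' exact implementation:

```python
from math import exp, log

def surviving_cells(n0, dose_per_fx, n_fx, alpha, alpha_beta, t_days, t_p, t_k=0.0):
    """Clonogens left after fractionated SBRT under the LQ model with
    exponential regrowth (potential doubling time t_p) beyond a
    kick-off time t_k."""
    d = dose_per_fx
    log_sf = -alpha * n_fx * d * (1.0 + d / alpha_beta)   # LQ cell kill
    regrowth = log(2.0) * max(t_days - t_k, 0.0) / t_p    # exponential regrowth
    return n0 * exp(log_sf + regrowth)

def tcp(n_cells, k_cr=1.0):
    """The abstract's control criterion: controlled if clonogens < K_cr."""
    return 1.0 if n_cells < k_cr else 0.0

# 3 x 18 Gy over 5 days with the multi-institution parameters quoted above
# (n0 = 1e8 cells is an assumed initial clonogen number).
n = surviving_cells(1e8, 18.0, 3, alpha=0.072, alpha_beta=15.9, t_days=5.0, t_p=85.6)
```

Population-level TCP then follows by averaging this per-patient criterion over the assumed heterogeneity of radiosensitivity and cell number.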
Takam, Rungdham; Bezak, Eva; Yeoh, Eric E.; Marcu, Loredana
2010-09-15
Purpose: Normal tissue complication probability (NTCP) of the rectum, bladder, urethra, and femoral heads following several techniques for radiation treatment of prostate cancer was evaluated applying the relative seriality and Lyman models. Methods: Model parameters from the literature were used in this evaluation. The treatment techniques included external (standard fractionated, hypofractionated, and dose-escalated) three-dimensional conformal radiotherapy (3D-CRT), low-dose-rate (LDR) brachytherapy (I-125 seeds), and high-dose-rate (HDR) brachytherapy (Ir-192 source). Dose-volume histograms (DVHs) of the rectum, bladder, and urethra retrieved from the corresponding treatment planning systems were converted to biological effective dose-based and equivalent dose-based DVHs, respectively, in order to account for differences in radiation treatment modality and fractionation schedule. Results: With hypofractionated 3D-CRT (20 fractions of 2.75 Gy/fraction delivered five times/week to a total dose of 55 Gy), NTCPs of the rectum, bladder, and urethra were less than those for standard fractionated 3D-CRT using a four-field technique (32 fractions of 2 Gy/fraction delivered five times/week to a total dose of 64 Gy) and dose-escalated 3D-CRT. Rectal and bladder NTCPs (5.2% and 6.6%, respectively) following the dose-escalated four-field 3D-CRT (2 Gy/fraction to a total dose of 74 Gy) were the highest among the analyzed treatment techniques. The average NTCPs for the rectum and urethra were 0.6% and 24.7% for LDR-BT and 0.5% and 11.2% for HDR-BT. Conclusions: Although the brachytherapy techniques delivered larger equivalent doses to normal tissues, the corresponding NTCPs were, except for the urethra, lower than those of the external beam techniques because much smaller volumes were irradiated to high doses. Among the analyzed normal tissues, the femoral heads were found to have the lowest probability of complications, as most of their volume was irradiated to lower
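The Lyman model referred to above reduces, in its Lyman-Kutcher-Burman (LKB) form, to a probit function of the generalized equivalent uniform dose. A sketch with commonly quoted LKB rectal parameters (the toy DVH is hypothetical, and the paper's exact parameter values may differ):

```python
from math import erf, sqrt

def gEUD(dvh, n):
    """Generalized EUD from a differential DVH [(dose_Gy, volume_fraction)]."""
    return sum(v * d ** (1.0 / n) for d, v in dvh) ** n

def lyman_ntcp(dvh, td50, m, n):
    """Lyman-Kutcher-Burman NTCP: Phi((gEUD - TD50) / (m * TD50))."""
    t = (gEUD(dvh, n) - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Toy rectal DVH (hypothetical): 30% of volume at 70 Gy, 70% at 30 Gy,
# with commonly quoted LKB rectal parameters (treat as illustrative).
ntcp = lyman_ntcp([(70.0, 0.3), (30.0, 0.7)], td50=76.9, m=0.13, n=0.09)
```

The small volume-effect exponent n for the rectum is what makes the hot subvolume dominate the gEUD, which is the mechanism behind the abstract's volume arguments.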
Stein, Ross S.
2008-01-01
The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M=6.5 portion of data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km2) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from that used in Working Group (2003).
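The two scaling relations given equal weight above can be written down directly. The coefficients below are the commonly quoted forms (Ellsworth-B: Mw = 4.2 + log10 A; Hanks-Bakun bilinear with a slope change near A = 537 km²) and should be checked against the cited papers before use:

```python
from math import log10

def ellsworth_b(area_km2):
    """Ellsworth-B relation (WGCEP 2003): Mw = 4.2 + log10(A)."""
    return 4.2 + log10(area_km2)

def hanks_bakun(area_km2):
    """Hanks-Bakun bilinear relation, coefficients as commonly quoted;
    the slope changes from 1 to 4/3 near A = 537 km^2."""
    if area_km2 <= 537.0:
        return 3.98 + log10(area_km2)
    return 3.07 + (4.0 / 3.0) * log10(area_km2)

# Equal weighting of the two relations, as adopted by the Working Group:
def weighted_mw(area_km2):
    return 0.5 * (ellsworth_b(area_km2) + hanks_bakun(area_km2))
```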
Normal fault growth above pre-existing structures: insights from discrete element modelling
NASA Astrophysics Data System (ADS)
Wrona, Thilo; Finch, Emma; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Phillips, Thomas
2016-04-01
In extensional systems, pre-existing structures such as shear zones may affect the growth, geometry and location of normal faults. Recent seismic reflection-based observations from the North Sea suggest that shear zones not only localise deformation in the host rock, but also in the overlying sedimentary succession. While pre-existing weaknesses are known to localise deformation in the host rock, their effect on deformation in the overlying succession is less well understood. Here, we use 3-D discrete element modelling to determine if and how kilometre-scale shear zones affect normal fault growth in the overlying succession. Discrete element models use a large number of interacting particles to describe the dynamic evolution of complex systems. The technique has therefore been applied to describe fault and fracture growth in a variety of geological settings. We model normal faulting by extending a 60×60×30 km crustal rift-basin model including brittle and ductile interactions and gravitation and isostatic forces by 30%. An inclined plane of weakness which represents a pre-existing shear zone is introduced in the lower section of the upper brittle layer at the start of the experiment. The length, width, orientation and dip of the weak zone are systematically varied between experiments to test how these parameters control the geometric and kinematic development of overlying normal fault systems. Consistent with our seismic reflection-based observations, our results show that strain is indeed localised in and above these weak zones. In the lower brittle layer, normal faults nucleate, as expected, within the zone of weakness and control the initiation and propagation of neighbouring faults. Above this, normal faults nucleate throughout the overlying strata where their orientations are strongly influenced by the underlying zone of weakness. These results challenge the notion that overburden normal faults simply form due to reactivation and upwards propagation of pre
Thurmond, M C; Branscum, A J; Johnson, W O; Bedrick, E J; Hanson, T E
2005-05-10
Although abortion contributes substantially to poor reproductive health of dairy herds, little is known about the predictability of abortion based on age, previous abortion or gravidity (number of previous pregnancies). A poor understanding of effects of maternal factors on abortion risk exists, in part, because of methodological difficulties related to non-independence of multiple pregnancies of the same cow in analysis of fetal survival data. We prospectively examined sequential pregnancies to investigate relationships between fetal survival and putative dam risk factors for 2991 abortions from 24,706 pregnancies of 13,145 cows in nine California dairy herds. Relative risks and predicted probabilities of abortion (PPA) were estimated using a previously described hierarchical Bayesian logistic-survival model generalized to incorporate longitudinal data of multiple pregnancies from a single cow. The PPA increased with increasing dam age at conception, with increasing number of previous abortions, and if the previous pregnancy was aborted >60 days in gestation. The PPA decreased with increasing gravidity and with increasing number of days open. For cows that aborted, the median time to fetal death decreased slightly as gravidity increased. The study considers several methodological issues faced in epidemiologic investigations of fetal health, including multi-modal hazard functions, extensive censoring and non-independence of multiple pregnancies. The model improves our ability to predict bovine abortion and to characterize fetal survival, which have important applications to herd health management.
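At its core, the hierarchical logistic-survival machinery above yields a logistic predicted probability of abortion (PPA). A minimal sketch, with hypothetical coefficients chosen only to match the reported directions of effect (risk rises with dam age and previous abortions, falls with gravidity) rather than the paper's posterior estimates:

```python
import math

def predicted_abortion_probability(age_years, gravidity, prior_abortions,
                                   b0=-3.0, b_age=0.15, b_grav=-0.10, b_prev=0.40):
    """Logistic predicted probability of abortion (PPA). The coefficients
    are hypothetical placeholders, not the hierarchical Bayesian model's
    posterior estimates; only their signs follow the reported effects."""
    eta = b0 + b_age * age_years + b_grav * gravidity + b_prev * prior_abortions
    return 1.0 / (1.0 + math.exp(-eta))
```

The full model additionally handles censoring and the non-independence of repeated pregnancies from the same cow, which this single-pregnancy sketch ignores.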
NASA Astrophysics Data System (ADS)
Hong, Ban Zhen; Keong, Lau Kok; Shariff, Azmi Mohd
2016-05-01
Separate mathematical models are needed for the bubble nucleation rates of water vapour and of dissolved air molecules, because the physics by which they form bubble nuclei differs. Available methods to calculate the bubble nucleation rate in a binary mixture, such as density functional theory, are difficult to couple with a computational fluid dynamics (CFD) approach. In addition, the effect of dissolved gas concentration has been neglected in most studies of bubble nucleation rate prediction. In the current work, the most probable bubble nucleation rate for the water vapour and dissolved air mixture in a 2D quasi-stable flow across a cavitating nozzle was estimated via the statistical mean of all possible bubble nucleation rates of the mixture (different mole fractions of water vapour and dissolved air) and the corresponding number of molecules in the critical cluster. Theoretically, the bubble nucleation rate depends strongly on the components' mole fractions in a critical cluster; hence, the dissolved gas concentration effect was included in the current work. The possible bubble nucleation rates were predicted from the calculated number of molecules required to form a critical cluster. The components' mole fractions in the critical cluster for the water vapour and dissolved air mixture were estimated by coupling the enhanced classical nucleation theory with the CFD approach. Finally, the distribution of bubble nuclei of the water vapour and dissolved air mixture can be predicted via a population balance model.
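The classical-nucleation-theory backbone of such a rate estimate can be sketched as follows. The kinetic prefactor is an illustrative placeholder, and the paper's enhanced CNT adds mixture-composition corrections on top of this single-component form:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def cnt_bubble_nucleation_rate(surface_tension, delta_p, temperature, prefactor=1e32):
    """Classical nucleation theory rate for a spherical critical bubble:
        J = J0 * exp(-dG* / (k_B * T)),  dG* = 16*pi*sigma^3 / (3*delta_p^2),
    where sigma is the surface tension (N/m) and delta_p the pressure
    difference across the bubble wall (Pa). The prefactor J0 (m^-3 s^-1)
    is an illustrative placeholder, not a fitted value."""
    dg_star = 16.0 * math.pi * surface_tension ** 3 / (3.0 * delta_p ** 2)
    return prefactor * math.exp(-dg_star / (BOLTZMANN * temperature))
```

The rate's extreme sensitivity to the barrier is why the mole fractions in the critical cluster matter so much in the mixture case.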
Boitard, Simon; Loisel, Patrice
2007-05-01
The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetical forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time consuming than other methods such as Monte Carlo simulations. PMID:17316725
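The finite-difference idea can be illustrated on the simpler one-locus neutral Wright-Fisher diffusion (the paper treats the two-locus joint density, with selection and mutation terms). A minimal explicit scheme for the Kolmogorov forward equation df/dt = 0.5 d²[x(1-x)f]/dx², with time in units of 2N generations:

```python
import numpy as np

def wf_forward_density(n_grid=201, dt=5e-5, n_steps=1000, p0=0.5, width=0.02):
    """Explicit finite-difference solution of the Kolmogorov forward
    equation for the one-locus neutral Wright-Fisher diffusion. Returns
    the frequency grid and the density after n_steps * dt time units."""
    x = np.linspace(0.0, 1.0, n_grid)
    dx = x[1] - x[0]
    f = np.exp(-0.5 * ((x - p0) / width) ** 2)   # narrow initial density at p0
    f /= f.sum() * dx                             # normalise on the grid
    for _ in range(n_steps):
        g = x * (1.0 - x) * f                     # diffusion coefficient times density
        lap = np.zeros_like(f)
        lap[1:-1] = (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx ** 2
        f = f + 0.5 * dt * lap                    # stable: 0.5*dt*max(x(1-x))/dx^2 <= 0.5
    return x, f
```

The two-locus problem replaces the scalar frequency x by a point on the haplotype-frequency simplex, but the explicit-update structure is the same.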
NASA Astrophysics Data System (ADS)
Chodera, John D.; Noé, Frank
2010-09-01
Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
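A stripped-down version of this uncertainty propagation can be sketched with independent row-wise Dirichlet posteriors over the transition matrix; note that this simplification drops the microscopic-reversibility constraint the paper enforces, and the observable here is a plain state function rather than a spectroscopic signal:

```python
import numpy as np

def sample_equilibrium_observable(counts, obs, n_samples=500, seed=0):
    """Posterior samples of an equilibrium expectation <obs> given a
    transition-count matrix. Each row of the transition matrix is drawn
    from Dirichlet(1 + counts[i]); the stationary distribution is the
    leading left eigenvector of each sampled matrix."""
    rng = np.random.default_rng(seed)
    n = counts.shape[0]
    out = np.empty(n_samples)
    for k in range(n_samples):
        T = np.vstack([rng.dirichlet(1.0 + counts[i]) for i in range(n)])
        w, v = np.linalg.eig(T.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        pi = np.abs(pi) / np.abs(pi).sum()        # normalise to a distribution
        out[k] = pi @ obs
    return out
```

The spread of the returned samples plays the role of the statistical error bar discussed in the abstract.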
Glenn E McCreery; Keith G Condie
2006-09-01
The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Power (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. The present document addresses experimental modeling of flow and thermal mixing phenomena of importance during normal or reduced power operation and during a loss of forced reactor cooling (pressurized conduction cooldown) scenario. The objectives of the experiments are to (1) provide benchmark data for assessment and improvement of codes proposed for NGNP designs and safety studies, and (2) obtain a better understanding of related phenomena, behavior and needs. Physical models of VHTR vessel upper and lower plenums which use various working fluids to scale phenomena of interest are described. The models may be used to simulate both natural convection conditions during pressurized conduction cooldown and turbulent lower plenum flow during normal or reduced power operation.
A single period inventory model with a truncated normally distributed fuzzy random variable demand
NASA Astrophysics Data System (ADS)
Dey, Oshmita; Chakraborty, Debjani
2012-03-01
In this article, a single period inventory model has been considered in the mixed fuzzy random environment by assuming the annual customer demand to be a fuzzy random variable. Since assuming demand to be normally distributed implies that some probability mass is automatically assigned to negative demand, the model has been developed for two cases, using the non-truncated and the truncated normal distributions. The problem has been developed to represent scenarios where the aim of the decision-maker is to determine the optimal order quantity such that the expected profit is greater than or equal to a predetermined target. This 'greater than or equal to' inequality has been modelled as a fuzzy inequality and a methodology has been developed to this effect. This methodology has been illustrated through a numerical example.
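The non-fuzzy core of such a model is the single-period (newsvendor) expected profit under truncated normal demand, which can be sketched numerically; the price, cost and demand parameters below are illustrative, not taken from the article:

```python
import numpy as np

def expected_profit(order_qty, mu=100.0, sigma=30.0, price=10.0, cost=6.0, n=4000):
    """Expected newsvendor profit when annual demand follows a normal(mu,
    sigma) truncated at zero, evaluated by trapezoidal integration."""
    d = np.linspace(0.0, mu + 8.0 * sigma, n)
    pdf = np.exp(-0.5 * ((d - mu) / sigma) ** 2)
    pdf /= np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(d))  # renormalise after truncation
    sales = np.minimum(d, order_qty) * pdf                   # revenue integrand
    return price * np.sum(0.5 * (sales[1:] + sales[:-1]) * np.diff(d)) - cost * order_qty

# Grid search for the optimal order quantity; for these numbers the
# critical-fractile condition F(Q*) = (price - cost) / price puts Q* near 92.
qs = np.linspace(0.0, 250.0, 251)
best_q = qs[np.argmax([expected_profit(q) for q in qs])]
```

The article's fuzzy-inequality machinery then replaces the crisp "expected profit >= target" constraint that this sketch would optimise against.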
NASA Astrophysics Data System (ADS)
Liang, Zach; Lee, George C.
2012-09-01
The current AASHTO load and resistance factor design (LRFD) guidelines are formulated based on bridge reliability, which interprets traditional design safety factors into more rigorously deduced factors based on the theory of probability. This is a major advancement in bridge design specifications. However, LRFD is only calibrated for dead and live loads. In cases when extreme loads are significant, they need to be individually assessed. Combining regular loads with extreme loads has been a major challenge, mainly because the extreme loads are time variables and cannot be directly combined with time invariant loads to formulate the probability of structural failure. To overcome these difficulties, this paper suggests a methodology of comprehensive reliability, by introducing the concept of partial failure probability to separate the loads so that each individual load combination under a certain condition can be approximated as time invariant. Based on these conditions, the extreme loads (also referred to as multiple hazard or MH loads) can be broken down into single effects. In Part II of this paper, a further breakdown of these conditional occurrence probabilities into pure conditions is discussed by using a live truck and earthquake loads on a bridge as an example. There are three major steps in establishing load factors from MH load distributions: (1) formulate the failure probabilities; (2) normalize various load distributions; and (3) establish design limit state equations. This paper describes the formulation of the failure probabilities of single and combined loads.
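The partial-failure-probability idea, conditioning on mutually exclusive load states so that each conditional term is approximately time invariant and then recombining by total probability, can be sketched with hypothetical numbers (none of these values are calibrated bridge figures):

```python
# Total probability of failure assembled from partial (conditional) failure
# probabilities under mutually exclusive load conditions over one year.
# All numbers are hypothetical illustrations.
conditions = {
    "dead + live only":            (1e-6, 0.980),  # (P[fail | condition], P[condition])
    "dead + live + truck":         (5e-5, 0.015),
    "dead + live + earthquake":    (2e-3, 0.004),
    "truck + earthquake combined": (1e-2, 0.001),
}
p_fail = sum(p_cond * p_occ for p_cond, p_occ in conditions.values())
```

Part II of the paper further decomposes these conditional occurrence probabilities into "pure" conditions using the truck-plus-earthquake example.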
Zupančič, Daša; Kreft, Mateja Erdani; Romih, Rok
2014-01-01
Bladder cancer adjuvant intravesical therapy could be optimized by more selective targeting of neoplastic tissue via specific binding of lectins to plasma membrane carbohydrates. Our aim was to establish rat and mouse models of bladder carcinogenesis to investigate in vivo and ex vivo binding of selected lectins to the luminal surface of normal and neoplastic urothelium. Male rats and mice were treated with 0.05 % N-butyl-N-(4-hydroxybutyl)nitrosamine (BBN) in drinking water and used for ex vivo and in vivo lectin binding experiments. Urinary bladder samples were also used for paraffin embedding, scanning electron microscopy and immunofluorescence labelling of uroplakins. During carcinogenesis, the structure of the urinary bladder luminal surface changed from microridges to microvilli and ropy ridges and the expression of urothelial-specific glycoproteins uroplakins was decreased. Ex vivo and in vivo lectin binding experiments gave comparable results. Jacalin (lectin from Artocarpus integrifolia) exhibited the highest selectivity for neoplastic compared to normal urothelium of rats and mice. The binding of lectin from Amaranthus caudatus decreased in rat model and increased in mouse carcinogenesis model, indicating interspecies variations of plasma membrane glycosylation. Lectin from Datura stramonium showed higher affinity for neoplastic urothelium compared to the normal in rat and mouse model. The BBN-induced animal models of bladder carcinogenesis offer a promising approach for lectin binding experiments and further lectin-mediated targeted drug delivery research. Moreover, in vivo lectin binding experiments are comparable to ex vivo experiments, which should be considered when planning and optimizing future research.
Lee, Jeong-Hyun; Kim, Hye-Lee; Lee, Mi Hee; You, Kyung Eun; Kwon, Byeong-Ju; Seo, Hyok Jin; Park, Jong-Chul
2012-10-15
Wound healing proceeds through a complex collaborative process involving many types of cells. Keratinocytes and fibroblasts of epidermal and dermal layers of the skin play prominent roles in this process. Asiaticoside, an active component of Centella asiatica, is known for beneficial effects on keloid and hypertrophic scar. However, the effects of this compound on normal human skin cells are not well known. Using in vitro systems, we observed the effects of asiaticoside on normal human skin cell behaviors related to healing. In a wound closure seeding model, asiaticoside increased migration rates of skin cells. By observing the numbers of cells attached and the area occupied by the cells, we concluded that asiaticoside also enhanced the initial skin cell adhesion. In cell proliferation assays, asiaticoside induced an increase in the number of normal human dermal fibroblasts. In conclusion, asiaticoside promotes skin cell behaviors involved in wound healing; and as a bioactive component of an artificial skin, may have therapeutic value.
Beer's-law-based, simple spectral model for direct normal and diffuse horizontal irradiance
Bird, R.E.
1982-12-01
A spectral model for cloudless days that uses simple mathematical expressions and tabulated look-up tables to generate direct normal and diffuse horizontal irradiance is presented. The model is based on modifications to previously published simple models and comparisons with rigorous radiative transfer codes. This model is expected to be more accurate and to be applicable to a broader range of atmospheric conditions than previous simple models. The prime significance of this model is its simplicity, which allows it to be used on small desk-top computers. The spectrum produced by this model is limited to 0.3 to 4.0 μm wavelength with an approximate resolution of 10 nm.
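The Beer's-law core of such a model can be sketched as follows; the extraterrestrial irradiance and optical depths below are illustrative stand-ins, not Bird's tabulated coefficients:

```python
import math

def direct_normal_irradiance(extraterrestrial, optical_depths, airmass):
    """Beer's-law direct normal spectral irradiance at one wavelength:
    the extraterrestrial irradiance attenuated by exp(-tau * m) for each
    attenuating process (Rayleigh, aerosol, ozone, water vapour, ...),
    i.e. a product of transmittances sharing the relative air mass m."""
    return extraterrestrial * math.exp(-sum(optical_depths) * airmass)

# Illustrative values near 500 nm (placeholders, not tabulated data):
# 1.9 W m^-2 nm^-1 extraterrestrial, Rayleigh/aerosol/ozone depths, m = 1.5.
i_direct = direct_normal_irradiance(1.9, [0.14, 0.10, 0.03], 1.5)
```

Evaluating this at ~370 wavelengths from 0.3 to 4.0 μm reproduces the structure of the tabulated model; the diffuse horizontal component requires additional empirical scattering terms.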
NASA Technical Reports Server (NTRS)
Butler, Doug; Bauman, David; Johnson-Throop, Kathy
2011-01-01
The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of medical risk data used to update the ISS PRA Model. The IMM has had limited validation and an initial characterization of maturity has been completed using NASA STD 7009 Standard for Models and Simulation. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation, and that IMM output format and content meets the needs of the ISS Risk Management Office and ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by IMM.
Teaching normal birth, normally.
Hotelling, Barbara A
2009-01-01
Teaching normal-birth Lamaze classes normally involves considering the qualities that make birth normal and structuring classes to embrace those qualities. In this column, teaching strategies are suggested for classes that unfold naturally, free from unnecessary interventions. PMID:19436595
Widesott, Lamberto; Pierelli, Alessio; Fiorino, Claudio; Lomax, Antony J.; Amichetti, Maurizio; Cozzarini, Cesare; Soukup, Martin; Schneider, Ralf; Hug, Eugen; Di Muzio, Nadia; Calandrino, Riccardo; Schwarz, Marco
2011-08-01
Purpose: To compare intensity-modulated proton therapy (IMPT) and helical tomotherapy (HT) treatment plans for high-risk prostate cancer (HRPCa) patients. Methods and Materials: The plans of 8 patients with HRPCa treated with HT were compared with IMPT plans with a two quasilateral fields setup (-100°; 100°) and optimized with the Hyperion treatment planning system. Both techniques were optimized to simultaneously deliver 74.2 Gy(RBE) (relative biologic effectiveness) in 28 fractions on planning target volumes (PTVs) 3-4 (prostate + proximal seminal vesicles), 65.5 Gy(RBE) on PTV2 (distal seminal vesicles and rectum/prostate overlapping), and 51.8 Gy(RBE) to PTV1 (pelvic lymph nodes). Normal tissue complication probability (NTCP) calculations were performed for the rectum, and generalized equivalent uniform dose (gEUD) was estimated for the bowel cavity, penile bulb and bladder. Results: A slightly better PTV coverage and homogeneity of target dose distribution with IMPT was found: the percentage of PTV volume receiving ≥95% of the prescribed dose (V95%) was on average >97% in HT and >99% in IMPT. The conformity indexes were significantly lower for protons than for photons, and there was a statistically significant reduction of the IMPT dosimetric parameters, up to 50 Gy(RBE) for the rectum and bowel and 60 Gy(RBE) for the bladder. The NTCP values for the rectum were higher in HT for all the sets of parameters, but the gain was small and in only a few cases statistically significant. Conclusions: Comparable PTV coverage was observed. Based on NTCP calculation, IMPT is expected to allow a small reduction in rectal toxicity, and a significant dosimetric gain with IMPT, both in the medium-dose and in the low-dose range in all OARs, was observed.
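The gEUD referred to above has a standard closed form; a minimal sketch computed from a differential dose-volume histogram:

```python
def geud(doses, volume_fractions, a):
    """Generalized equivalent uniform dose from a differential DVH:
        gEUD = (sum_i v_i * d_i**a) ** (1/a).
    a = 1 gives the mean dose; large positive a approaches the maximum
    dose (serial organs); a < 1 weights toward lower doses (parallel
    organs). Doses in Gy, volume fractions summing to 1."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9, "DVH fractions must sum to 1"
    return sum(v * d ** a for d, v in zip(doses, volume_fractions)) ** (1.0 / a)
```

The organ-specific exponent a (and the NTCP parameter sets for the rectum) must be taken from published fits; the values used in any comparison drive the conclusions, which is why the abstract reports results over several parameter sets.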
NASA Astrophysics Data System (ADS)
Mandache, C.; Khan, M.; Fahr, A.; Yanishevsky, M.
2011-03-01
Probability of detection (PoD) studies are broadly used to determine the reliability of specific nondestructive inspection procedures, as well as to provide data for damage tolerance life estimations and calculation of inspection intervals for critical components. They require inspections on a large set of samples, a fact that makes these statistical assessments time- and cost-consuming. Physics-based numerical simulations of nondestructive testing inspections could be used as a cost-effective alternative to empirical investigations. They realistically predict the inspection outputs as functions of the input characteristics related to the test piece, transducer and instrument settings, which are subsequently used to partially substitute and/or complement inspection data in PoD analysis. This work focuses on the numerical modelling aspects of eddy current testing for the bolt hole inspections of wing box structures typical of the Lockheed Martin C-130 Hercules and P-3 Orion aircraft, found in the air force inventory of many countries. Boundary element-based numerical modelling software was employed to predict the eddy current signal responses when varying inspection parameters related to probe characteristics, crack geometry and test piece properties. Two demonstrator exercises were used for eddy current signal prediction when lowering the driver probe frequency and changing the material's electrical conductivity, followed by subsequent discussions and examination of the implications on using simulated data in the PoD analysis. Despite some simplifying assumptions, the modelled eddy current signals were found to provide similar results to the actual inspections. It is concluded that physics-based numerical simulations have the potential to partially substitute or complement inspection data required for PoD studies, reducing the cost, time, effort and resources necessary for a full empirical PoD assessment.
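A common parametric bridge from (simulated or measured) signal responses to PoD is the "â versus a" model, in which the log signal is linear in log crack size with normal scatter; a sketch with illustrative parameters, not values from the C-130/P-3 study:

```python
import math

def pod(a, b0=-1.0, b1=1.2, sigma=0.5, ln_threshold=0.0):
    """'a-hat versus a' PoD model: ln(signal) = b0 + b1*ln(a) + N(0, sigma),
    and a crack of size a is detected when the signal exceeds the decision
    threshold. PoD(a) is then a normal CDF in ln(a). All parameter values
    here are illustrative placeholders."""
    z = (b0 + b1 * math.log(a) - ln_threshold) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Fitting b0, b1 and sigma to modelled eddy current amplitudes is one way simulated data can substitute for part of the empirical inspections, after which quantities such as a90 (the size detected with 90% probability) follow in closed form.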
NASA Technical Reports Server (NTRS)
1995-01-01
The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model the various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices, such as extinction, blowoff limits, and emissions predictions, because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, the mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy is determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF
Langenderfer, Joseph E.; Carpenter, James E.; Johnson, Marjorie E.; An, Kai-nan; Hughes, Richard E.
2006-01-01
The reigning paradigm of musculoskeletal modeling is to construct deterministic models from parameters of an “average” subject and make predictions for muscle forces and joint torques with this model. This approach is limited because it does not perform well for outliers, and it does not model the effects of population parameter variability. The purpose of this study was to simulate variability in musculoskeletal parameters on glenohumeral external rotation strength in healthy normals, and in rotator cuff tear cases using a Monte Carlo model. The goal was to determine if variability in musculoskeletal parameters could quantifiably explain variability in glenohumeral external rotation strength. Multivariate Gamma distributions for musculoskeletal architecture and moment arm were constructed from empirical data. Gamma distributions of measured joint strength were constructed. Parameters were sampled from the distributions and input to the model to predict muscle forces and joint torques. The model predicted measured joint torques for healthy normals, subjects with supraspinatus tears, and subjects with infraspinatus–supraspinatus tears with small error. Muscle forces for the three conditions were predicted and compared. Variability in measured torques can be explained by differences in parameter variability. PMID:16474916
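The Monte Carlo propagation step can be sketched as follows; the Gamma shape/scale values and the specific tension are illustrative placeholders, not the multivariate Gamma fits constructed in the study (which also model correlation between parameters):

```python
import numpy as np

def torque_samples(n=10000, seed=1):
    """Monte Carlo joint-torque prediction: sample muscle PCSA and moment
    arm from independent Gamma distributions (illustrative parameters)
    and propagate through torque = specific_tension * PCSA * moment_arm."""
    rng = np.random.default_rng(seed)
    pcsa = rng.gamma(shape=16.0, scale=0.25, size=n)         # cm^2, mean 4.0
    moment_arm = rng.gamma(shape=25.0, scale=0.001, size=n)  # m, mean 0.025
    specific_tension = 35.0                                   # N/cm^2, assumed constant
    return specific_tension * pcsa * moment_arm               # N*m
```

Comparing the spread of such simulated torque distributions with the Gamma distributions fitted to measured strength is the paper's test of whether parameter variability explains strength variability.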
Bröder, Arndt; Herwig, Andrea; Teipel, Stefan; Fast, Kristina
2008-06-01
The authors compared patients with mild cognitive impairment with healthy older adults and young control participants in a free recall test in order to locate potential qualitative differences in normal and pathological memory decline. Analysis with an extended multitrial version of W. H. Batchelder and D. M. Riefer's (1980) pair-clustering model revealed globally decelerated learning and an additional retrieval deficit in patients with mild cognitive impairment but not in healthy older adults. Results thus suggest differences in memory decline between normal and pathological aging that may be useful for the detection of risk groups for dementia, and they illustrate the value of model-based disentangling of processes and of multitrial tests for early detection of dementia.
NASA Technical Reports Server (NTRS)
Demoulin, P.; Forbes, T. G.
1992-01-01
A technique which incorporates both photospheric and prominence magnetic field observations is used to analyze the magnetic support of solar prominences in two dimensions. The prominence is modeled by a mass-loaded current sheet which is supported against gravity by magnetic fields from a bipolar source in the photosphere and a massless line current in the corona. It is found that prominence support can be achieved in three different kinds of configurations: an arcade topology with a normal polarity; a helical topology with a normal polarity; and a helical topology with an inverse polarity. In all cases the important parameter is the variation of the horizontal component of the prominence field with height. Adding a line current external to the prominence eliminates the nonsupport problem which plagues virtually all previous prominence models with inverse polarity.
NASA Astrophysics Data System (ADS)
Ma, Lei; Huang, Ai-Qun; Li, Jun
2011-03-01
This paper studies the normal-state properties of itinerant electrons in a toy model constructed according to the model for coexisting ferromagnetism and superconductivity proposed by Suhl [Suhl H 2001 Phys. Rev. Lett. 87 167007]. In this theory, with ferromagnetic ordering based on localized spins, the exchange interaction J between conduction electrons and localized spins is taken as the pairing glue for s-wave superconductivity. It is shown that this J term first renormalizes the normal-state single-conduction-electron structure substantially. Dramatically enhanced or suppressed magnetization of the itinerant electrons is found for positive or negative J, respectively. Singlet Cooper pairing can be ruled out due to strong spin polarisation in the J > 0 case, while a narrow window for s-wave superconductivity opens around some ferromagnetic J. Project supported by the National Natural Science Foundation of China (Grant No. 10574063).
NASA Astrophysics Data System (ADS)
Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.
2006-12-01
Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes USC High Performance Computer Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probablistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA), and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for
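The hazard-curve assembly itself is conceptually simple: sum rupture rates weighted by conditional exceedance probabilities, then convert to an annual probability. A sketch assuming log-normal ground-motion variability per rupture, as a stand-in for CyberShake's waveform-derived intensity measures:

```python
import math

def hazard_curve(x_values, ruptures):
    """Annual probability of exceeding each intensity level x, given a
    list of (annual_rate, median_im, log_std) ruptures. Assumes a Poisson
    process of ruptures and log-normal intensity given a rupture; the
    rupture parameters here are illustrative, not a real source model."""
    curve = []
    for x in x_values:
        lam = 0.0
        for rate, median, beta in ruptures:
            z = (math.log(x) - math.log(median)) / beta
            p_exceed = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
            lam += rate * p_exceed                 # total exceedance rate
        curve.append(1.0 - math.exp(-lam))          # Poisson -> annual probability
    return curve
```

In the CyberShake platform the p_exceed term comes from thousands of simulated seismograms per site rather than an attenuation relationship, which is where the tens of thousands of CPU hours go.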
Recognition of sine wave modeled consonants by normal hearing and hearing-impaired individuals
NASA Astrophysics Data System (ADS)
Balachandran, Rupa
Sine wave modeling is a parametric tool for representing the speech signal with a limited number of sine waves. It involves replacing the peaks of the speech spectrum with sine waves and discarding the rest of the lower amplitude components during synthesis. It has the potential to be used as a speech enhancement technique for hearing-impaired adults. The present study answers the following basic questions: (1) Are sine wave synthesized speech tokens more intelligible than natural speech tokens? (2) What is the effect of varying the number of sine waves on consonant recognition in quiet? (3) What is the effect of varying the number of sine waves on consonant recognition in noise? (4) How does sine wave modeling affect the transmission of speech features in quiet and in noise? (5) Are there differences in recognition performance between normal hearing and hearing-impaired listeners? VCV syllables representing 20 consonants (/p/, /t/, /k/, /b/, /d/, /g/, /f/, /theta/, /s/, /∫/, /v/, /z/, /t∫/, /dy/, /j/, /w/, /r/, /l/, /m/, /n/) in three vowel contexts (/a/, /i/, /u/) were modeled with 4, 8, 12, and 16 sine waves. A consonant recognition task was performed in quiet and in background noise (+10 dB and 0 dB SNR). Twenty hearing-impaired listeners and six normal hearing listeners were tested under headphones at their most comfortable listening level. The main findings were: (1) Recognition of unprocessed speech was better than that of sine wave modeled speech. (2) Asymptotic performance was reached with 8 sine waves in quiet for both normal hearing and hearing-impaired listeners. (3) Consonant recognition performance in noise improved with increasing number of sine waves. (4) As the number of sine waves was decreased, place information was lost first, followed by manner, and finally voicing. (5) Hearing-impaired listeners made more errors than normal hearing listeners, but there were no differences in the error patterns made by the two groups.
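The peak-picking synthesis described above can be sketched for a single frame: keep only the largest-magnitude spectral bins and resynthesise. A real sine wave system tracks peak frequencies, amplitudes and phases from frame to frame; this one-frame version only illustrates the pruning step:

```python
import numpy as np

def sinewave_model(frame, n_sines):
    """Replace one frame by its n_sines strongest spectral components:
    keep the largest-magnitude rFFT bins (the spectral peaks, with phase
    preserved) and discard the remaining low-amplitude components."""
    spec = np.fft.rfft(frame)
    keep = np.argsort(np.abs(spec))[-n_sines:]   # indices of the strongest bins
    pruned = np.zeros_like(spec)
    pruned[keep] = spec[keep]
    return np.fft.irfft(pruned, n=len(frame))
```

With 8 or more retained components, a frame dominated by a few formant peaks is reconstructed almost exactly, which is consistent with the asymptotic performance at 8 sine waves reported above.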
NASA Technical Reports Server (NTRS)
Ruggier, C. J.
1992-01-01
The probability of exceeding interference power levels and the duration of interference at the Deep Space Network (DSN) antenna is calculated parametrically when the state vector of an Earth-orbiting satellite over the DSN station view area is not known. A conditional probability distribution function is derived, transformed, and then convolved with the interference signal uncertainties to yield the probability distribution of interference at any given instant during the orbiter's mission period. The analysis is applicable to orbiting satellites having circular orbits with known altitude and inclination angle.
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
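For a log-normal number distribution of molecular weight, the normal-curve mean and standard deviation follow directly from the measured averages, since Mn = exp(mu + sigma^2/2) and Mw = exp(mu + 3*sigma^2/2). A minimal sketch (the Mn/Mw pair in the test is illustrative, not a measured fulvic acid):

```python
import math

def lognormal_from_averages(mn, mw):
    """Mean and standard deviation of log10(M) for a log-normal number
    distribution, from number- and weight-average molecular weights:
        sigma^2 = ln(Mw / Mn),  mu = ln(Mn) - sigma^2 / 2,
    converted from natural-log to log10 units to match the 2.7-3 and
    0.28-0.37 ranges quoted for typical aquatic fulvic acids."""
    sigma2 = math.log(mw / mn)
    mu = math.log(mn) - sigma2 / 2.0
    return mu / math.log(10.0), math.sqrt(sigma2) / math.log(10.0)
```

The polydispersity ratio Mw/Mn alone fixes the width of the distribution, which is why HP-SEC peak shapes provide an independent consistency check.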
Human Normal Bronchial Epithelial Cells: A Novel In Vitro Cell Model for Toxicity Evaluation
Huang, Haiyan; Xia, Bo; Liu, Hongya; Li, Jie; Lin, Shaolin; Li, Tiyuan; Liu, Jianjun; Li, Hui
2015-01-01
Human normal cell-based systems are needed for drug discovery and toxicity evaluation. Human cells transduced with hTERT or viral genes are currently widely used for these studies, but such cells exhibit abnormal differentiation potential or abnormal responses to biological and chemical signals. In this study, we established human normal bronchial epithelial cells (HNBEC) using a defined primary epithelial cell culture medium without transduction of exogenous genes. This system may involve decreased IL-1 signaling and enhanced Wnt signaling in cells. Our data demonstrated that HNBEC exhibited a normal diploid karyotype. They formed well-defined spheres in matrigel 3D culture, while cancer cells (HeLa) formed disorganized aggregates. HNBEC cells possessed a normal cellular response to DNA damage and did not induce tumor formation in vivo in xenograft assays. Importantly, we assessed the potential of these cells in toxicity evaluation of common occupational toxicants that may affect the human respiratory system. Our results demonstrated that HNBEC cells are more sensitive than 16HBE cells (an SV40-immortalized human bronchial epithelial cell line) to exposure to 10~20 nm SiO2, Cr(VI) and B(a)P. This study provides a novel in vitro human cell-based model for toxicity evaluation and may also facilitate studies in basic cell biology, cancer biology and drug discovery. PMID:25861018
Normal diffusion in crystal structures and higher-dimensional billiard models with gaps.
Sanders, David P
2008-12-01
We show, both heuristically and numerically, that three-dimensional periodic Lorentz gases (clouds of particles scattering off crystalline arrays of hard spheres) often exhibit normal diffusion, even when there are gaps through which particles can travel without ever colliding, i.e., when the system has an infinite horizon. This is the case provided that these gaps are not "too large," as measured by their dimension. The results are illustrated with simulations of a simple three-dimensional model having different types of diffusive regime and are then extended to higher-dimensional billiard models, which include hard-sphere fluids.
Application of the global geopotential models for a determination of the leveling normal correction
NASA Astrophysics Data System (ADS)
Margański, Stanisław; Kalinczuk-Stanałowska, Katarzyna; Olszak, Tomasz
2010-05-01
Vertical networks in Poland are processed in the normal height system called Kronstadt'86. Evaluation of high-precision leveling measurements requires the determination of normal leveling corrections, based on Faye's gravity anomalies. The purpose of this paper is to analyze the possibility of using anomalies generated from global geopotential models in these computations, together with an accuracy analysis indicating the attainable accuracy of Faye's anomalies. The authors compared gravity anomalies obtained directly from measurements with gravity anomalies generated from different global geopotential models (EGM96, GPM96, EGM2008). The authors propose an algorithm for computing gravity anomalies on the geoid surface by free-air reduction of the anomalies from the geopotential models. The comparison was made on points of vertical networks located in medium-hill areas (around Starachowice near Kielce) and lowlands (around Gostyn and Grudziadz), and on other test fields in Poland. The ability to use the geopotential models depends on their resolution. The analysis concluded that only in flat and lowland areas is it possible to use data from global geopotential models with resolution above degree 720.
Zupančič, Daša; Kreft, Mateja Erdani; Romih, Rok
2014-01-01
Bladder cancer adjuvant intravesical therapy could be optimized by more selective targeting of neoplastic tissue via specific binding of lectins to plasma membrane carbohydrates. Our aim was to establish rat and mouse models of bladder carcinogenesis to investigate in vivo and ex vivo binding of selected lectins to the luminal surface of normal and neoplastic urothelium. Male rats and mice were treated with 0.05 % N-butyl-N-(4-hydroxybutyl)nitrosamine (BBN) in drinking water and used for ex vivo and in vivo lectin binding experiments. Urinary bladder samples were also used for paraffin embedding, scanning electron microscopy and immunofluorescence labelling of uroplakins. During carcinogenesis, the structure of the urinary bladder luminal surface changed from microridges to microvilli and ropy ridges and the expression of urothelial-specific glycoproteins uroplakins was decreased. Ex vivo and in vivo lectin binding experiments gave comparable results. Jacalin (lectin from Artocarpus integrifolia) exhibited the highest selectivity for neoplastic compared to normal urothelium of rats and mice. The binding of lectin from Amaranthus caudatus decreased in rat model and increased in mouse carcinogenesis model, indicating interspecies variations of plasma membrane glycosylation. Lectin from Datura stramonium showed higher affinity for neoplastic urothelium compared to the normal in rat and mouse model. The BBN-induced animal models of bladder carcinogenesis offer a promising approach for lectin binding experiments and further lectin-mediated targeted drug delivery research. Moreover, in vivo lectin binding experiments are comparable to ex vivo experiments, which should be considered when planning and optimizing future research. PMID:23828036
Computing with a canonical neural circuits model with pool normalization and modulating feedback.
Brosch, Tobias; Neumann, Heiko
2014-12-01
Evidence suggests that the brain uses an operational set of canonical computations like normalization, input filtering, and response gain enhancement via reentrant feedback. Here, we propose a three-stage columnar architecture of cascaded model neurons to describe a core circuit combining signal pathways of feedforward and feedback processing and the inhibitory pooling of neurons to normalize the activity. We present an analytical investigation of such a circuit by first reducing its detail through the lumping of initial feedforward response filtering and reentrant modulating signal amplification. The resulting excitatory-inhibitory pair of neurons is analyzed in a 2D phase-space. The inhibitory pool activation is treated as a separate mechanism exhibiting different effects. We analyze subtractive as well as divisive (shunting) interaction to implement center-surround mechanisms that include normalization effects in the characteristics of real neurons. Different variants of a core model architecture are derived and analyzed--in particular, individual excitatory neurons (without pool inhibition), the interaction with an inhibitory subtractive or divisive (i.e., shunting) pool, and the dynamics of recurrent self-excitation combined with divisive inhibition. The stability and existence properties of these model instances are characterized, which serve as guidelines to adjust these properties through proper model parameterization. The significance of the derived results is demonstrated by theoretical predictions of response behaviors in the case of multiple interacting hypercolumns in a single and in multiple feature dimensions. In numerical simulations, we confirm these predictions and provide some explanations for different neural computational properties. Among those, we consider orientation contrast-dependent response behavior, different forms of attentional modulation, contrast element grouping, and the dynamic adaptation of the silent surround in extraclassical
A coupling model for quasi-normal modes of photonic resonators
NASA Astrophysics Data System (ADS)
Vial, Benjamin; Hao, Yang
2016-11-01
We develop a model for the coupling of quasi-normal modes in open photonic systems consisting of two resonators. By expressing the modes of the coupled system as a linear combination of the modes of the individual particles, we obtain a generalized eigenvalue problem involving small dense matrices. We apply this technique to a dielectric rod dimer of rectangular cross section for transverse electric polarization in a two-dimensional setup. The results of our model show excellent agreement with full-wave finite element simulations. We provide a convergence analysis, and a simplified model with a few modes to study the influence of the relative position of the two resonators. This model provides physical insight into the coupling scheme at play in such systems and paves the way for systematic and efficient design and optimization of resonances in more complicated systems, for applications including sensing, antennae and spectral filtering.
Modelling of the hygroelastic behaviour of normal and compression wood tracheids.
Joffre, Thomas; Neagu, R Cristian; Bardage, Stig L; Gamstedt, E Kristofer
2014-01-01
Compression wood conifer tracheids show different swelling and stiffness properties than those of usual normal wood, which has a practical function in the living plant: when a conifer shoot is moved from its vertical position, compression wood is formed in the under part of the shoot. The growth rate of the compression wood is faster than in the upper part resulting in a renewed horizontal growth. The actuating and load-carrying function of the compression wood is addressed, on the basis of its special ultrastructure and shape of the tracheids. As a first step, a quantitative model is developed to predict the difference of moisture-induced expansion and axial stiffness between normal wood and compression wood. The model is based on a state space approach using concentric cylinders with anisotropic helical structure for each cell-wall layer, whose hygroelastic properties are in turn determined by a self-consistent concentric cylinder assemblage of the constituent wood polymers. The predicted properties compare well with experimental results found in the literature. Significant differences in both stiffness and hygroexpansion are found for normal and compression wood, primarily due to the large difference in microfibril angle and lignin content. On the basis of these numerical results, some functional arguments for the reason of high microfibril angle, high lignin content and cylindrical structure of compression wood tracheids are supported.
NASA Astrophysics Data System (ADS)
Duan, Zhenyun; Chen, Houjun; Ju, Zhilan; Liu, Jian
2012-09-01
In this paper, a loxodromic-type normal circular-arc spiral bevel gear is proposed as a novel application of the circular-arc tooth profile to gear transmission with intersecting axes. Based on the principle of molding-surface conjugation, the study develops a mathematical model for the tooth alignment curve and the computational flow at the design stage to enable generation of the tooth surface. Machining of the tooth surface is then carried out to determine the interference-free numerical control (NC) tool path. Moreover, a pair of loxodromic-type normal circular-arc spiral bevel gears is manufactured on computer numerical control (CNC) machine tools. The proposed theory and method are experimentally investigated, and the obtained results primarily reflect the superior performance of the proposed novel gear.
Implementation of Combined Feather and Surface-Normal Ice Growth Models in LEWICE/X
NASA Technical Reports Server (NTRS)
Velazquez, M. T.; Hansman, R. J., Jr.
1995-01-01
Experimental observations have shown that discrete rime ice growths called feathers, which grow in approximately the direction of water droplet impingement, play an important role in the growth of ice on accreting surfaces for some thermodynamic conditions. An improved physical model of ice accretion has been implemented in the LEWICE 2D panel-based ice accretion code maintained by the NASA Lewis Research Center. The LEWICE/X model of ice accretion explicitly simulates regions of feather growth within the framework of the LEWICE model. Water droplets impinging on an accreting surface are withheld from the normal LEWICE mass/energy balance and handled in a separate routine; ice growth resulting from these droplets is performed with enhanced convective heat transfer approximately along droplet impingement directions. An independent underlying ice shape is grown along surface normals using the unmodified LEWICE method. The resulting dual-surface ice shape models roughness-induced feather growth observed in icing wind tunnel tests. Experiments indicate that the exact direction of feather growth is dependent on external conditions. Data is presented to support a linear variation of growth direction with temperature and cloud water content. Test runs of LEWICE/X indicate that the sizes of surface regions containing feathers are influenced by initial roughness element height. This suggests that a previous argument that feather region size is determined by boundary layer transition may be incorrect. Simulation results for two typical test cases give improved shape agreement over unmodified LEWICE.
Kinetic modeling of hyperpolarized 13C 1-pyruvate metabolism in normal rats and TRAMP mice
NASA Astrophysics Data System (ADS)
Zierhut, Matthew L.; Yen, Yi-Fen; Chen, Albert P.; Bok, Robert; Albers, Mark J.; Zhang, Vickie; Tropp, Jim; Park, Ilwoo; Vigneron, Daniel B.; Kurhanewicz, John; Hurd, Ralph E.; Nelson, Sarah J.
2010-01-01
Purpose: To investigate metabolic exchange between 13C 1-pyruvate, 13C 1-lactate, and 13C 1-alanine in pre-clinical model systems using kinetic modeling of dynamic hyperpolarized 13C spectroscopic data and to examine the relationship between fitted parameters and dose-response. Materials and methods: Dynamic 13C spectroscopy data were acquired in normal rats, wild type mice, and mice with transgenic prostate tumors (TRAMP) either within a single slice or using a one-dimensional echo-planar spectroscopic imaging (1D-EPSI) encoding technique. Rate constants were estimated by fitting a set of exponential equations to the dynamic data. Variations in fitted parameters were used to determine model robustness in 15 mm slices centered on normal rat kidneys. Parameter values were used to investigate differences in metabolism between and within TRAMP and wild type mice. Results: The kinetic model was shown here to be robust when fitting data from a rat given similar doses. In normal rats, Michaelis-Menten kinetics were able to describe the dose-response of the fitted exchange rate constants with a 13.65% and 16.75% scaled fitting error (SFE) for kpyr→lac and kpyr→ala, respectively. In TRAMP mice, kpyr→lac increased an average of 94% after up to 23 days of disease progression, whether the mice were untreated or treated with casodex. Parameters estimated from dynamic 13C 1D-EPSI data were able to differentiate anatomical structures within both wild type and TRAMP mice. Conclusions: The metabolic parameters estimated using this approach may be useful for in vivo monitoring of tumor progression and treatment efficacy, as well as to distinguish between various tissues based on metabolic activity.
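A Michaelis-Menten dose-response of a fitted exchange rate constant, as described above, can be sketched with a standard nonlinear least-squares fit (a hedged illustration: the dose points, parameter values and function names below are synthetic, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(dose, k_max, k_m):
    """Exchange rate constant as a saturating function of substrate dose."""
    return k_max * dose / (k_m + dose)

# hypothetical dose-response points (arbitrary units) generated for illustration,
# standing in for a fitted k_pyr->lac at several pyruvate doses
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
rate = michaelis_menten(dose, 0.08, 2.0)

# recover the kinetic parameters from the synthetic curve
(k_max, k_m), _ = curve_fit(michaelis_menten, dose, rate, p0=(0.1, 1.0))
```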
Non-Gaussian Photon Probability Distribution
NASA Astrophysics Data System (ADS)
Solomon, Benjamin T.
2010-01-01
This paper investigates the axiom that the photon's probability distribution is a Gaussian distribution. The Airy disc empirical evidence shows that the best fit, if not exact, distribution is a modified Gamma mΓ distribution (whose parameters are α = r, βr/√u ) in the plane orthogonal to the motion of the photon. This modified Gamma distribution is then used to reconstruct the probability distributions along the hypotenuse from the pinhole, arc from the pinhole, and a line parallel to photon motion. This reconstruction shows that the photon's probability distribution is not a Gaussian function. However, under certain conditions, the distribution can appear to be Normal, thereby accounting for the success of quantum mechanics. This modified Gamma distribution changes with the shape of objects around it and thus explains how the observer alters the observation. This property therefore places additional constraints to quantum entanglement experiments. This paper shows that photon interaction is a multi-phenomena effect consisting of the probability to interact Pi, the probabilistic function and the ability to interact Ai, the electromagnetic function. Splitting the probability function Pi from the electromagnetic function Ai enables the investigation of the photon behavior from a purely probabilistic Pi perspective. The Probabilistic Interaction Hypothesis is proposed as a consistent method for handling the two different phenomena, the probability function Pi and the ability to interact Ai, thus redefining radiation shielding, stealth or cloaking, and invisibility as different effects of a single phenomenon Pi of the photon probability distribution. Sub wavelength photon behavior is successfully modeled as a multi-phenomena behavior. The Probabilistic Interaction Hypothesis provides a good fit to Otoshi's (1972) microwave shielding, Schurig et al. (2006) microwave cloaking, and Oulton et al. (2008) sub wavelength confinement; thereby providing a strong case that
Martens-Kuin models of normal and inverse polarity filament eruptions and coronal mass ejections
NASA Technical Reports Server (NTRS)
Smith, D. F.; Hildner, E.; Kuin, N. P. M.
1992-01-01
An analysis is made of the Martens-Kuin filament eruption model in relation to observations of coronal mass ejections (CMEs). The field lines of this model are plotted in the vacuum or infinite resistivity approximation with two background fields. The first is the dipole background field of the model and the second is the potential streamer model of Low. The Martens-Kuin model predicts that, as the filament erupts, the overlying coronal magnetic field lines rise in a manner inconsistent with observations of CMEs associated with eruptive filaments. This model and, by generalization, the whole class of so-called Kuperus-Raadu configurations, in which a neutral point occurs below the filament, are of questionable utility for CME modeling. An alternate case is considered in which the directions of currents in the Martens-Kuin model are reversed, resulting in a so-called normal polarity configuration of the filament magnetic field. The background field lines now distort to support the filament and help eject it. While the vacuum field results make this configuration appear very promising, a full two- or more-dimensional MHD simulation is required to properly analyze the dynamics resulting from this configuration.
Modeling the Redshift Evolution of the Normal Galaxy X-Ray Luminosity Function
NASA Technical Reports Server (NTRS)
Tremmel, M.; Fragos, T.; Lehmer, B. D.; Tzanavaris, P.; Belczynski, K.; Kalogera, V.; Basu-Zych, A. R.; Farr, W. M.; Hornschemeier, A.; Jenkins, L.; Ptak, A.; Zezas, A.
2013-01-01
Emission from X-ray binaries (XRBs) is a major component of the total X-ray luminosity of normal galaxies, so X-ray studies of high-redshift galaxies allow us to probe the formation and evolution of XRBs on very long timescales (approximately 10 Gyr). In this paper, we present results from large-scale population synthesis models of binary populations in galaxies from z = 0 to approximately 20. We use as input into our modeling the Millennium II Cosmological Simulation and the updated semi-analytic galaxy catalog by Guo et al. to self-consistently account for the star formation history (SFH) and metallicity evolution of each galaxy. We run a grid of 192 models, varying all the parameters known from previous studies to affect the evolution of XRBs. We use our models and observationally derived prescriptions for hot gas emission to create theoretical galaxy X-ray luminosity functions (XLFs) for several redshift bins. Models with low common envelope efficiencies, a 50% twins mass ratio distribution, a steeper initial mass function exponent, and high stellar wind mass-loss rates best match observational results from Tzanavaris & Georgantopoulos, though they significantly underproduce bright early-type and very bright (L(sub x) greater than 10(exp 41)) late-type galaxies. These discrepancies are likely caused by uncertainties in hot gas emission and SFHs, active galactic nucleus contamination, and a lack of dynamically formed low-mass XRBs. In our highest likelihood models, we find that hot gas emission dominates the emission for most bright galaxies. We also find that the evolution of the normal galaxy X-ray luminosity density out to z = 4 is driven largely by XRBs in galaxies with X-ray luminosities between 10(exp 40) and 10(exp 41) erg s(exp -1).
NASA Astrophysics Data System (ADS)
Bodaghi, M.; Damanpack, A. R.; Liao, W. H.
2016-07-01
The aim of this article is to develop a robust macroscopic bi-axial model to capture self-accommodation, martensitic transformation/orientation/reorientation, normal-shear deformation coupling and asymmetric/anisotropic strain generation in polycrystalline shape memory alloys. By considering the volume fraction of martensite and its preferred direction as scalar and directional internal variables, constitutive relations are derived to describe basic mechanisms of accommodation, transformation and orientation/reorientation of martensite variants. A new definition is introduced for maximum recoverable strain, which allows the model to capture the effects of tension-compression asymmetry and transformation anisotropy. Furthermore, the coupling effects between normal and shear deformation modes are considered by merging inelastic strain components together. By introducing a calibration approach, material and kinetic parameters of the model are recast in terms of common quantities that characterize a uniaxial phase kinetic diagram. The solution algorithm of the model is presented based on an elastic-predictor inelastic-corrector return mapping process. In order to explore and demonstrate capabilities of the proposed model, theoretical predictions are first compared with existing experimental results on uniaxial tension, compression, torsion and combined tension-torsion tests. Afterwards, experimental results of uniaxial tension, compression, pure bending and buckling tests on NiTi rods and tubes are replicated by implementing a finite element method along with the Newton-Raphson and Riks techniques to trace non-linear equilibrium path. A good qualitative and quantitative correlation is observed between numerical and experimental results, which verifies the accuracy of the model and the solution procedure.
Safaeian, Navid; David, Tim
2013-10-01
The oxygen exchange and correlation between the cerebral blood flow (CBF) and cerebral metabolic rate of oxygen consumption (CMRO2) in the cortical capillary levels for normal and pathologic brain functions remain the subject of debate. A 3D realistic mesoscale model of the cortical capillary network (non-tree like) is constructed using a random Voronoi tessellation in which each edge represents a capillary segment. The hemodynamics and oxygen transport are numerically simulated in the model, which involves rheological laws in the capillaries, oxygen diffusion, and non-linear binding of oxygen to hemoglobin, respectively. The findings show that the cerebral hypoxia due to a significant decreased perfusion (as can occur in stroke) can be avoided by a moderate reduction in oxygen demand. Oxygen extraction fraction (OEF) can be an important indicator for the brain oxygen metabolism under normal perfusion and misery-perfusion syndrome (leading to ischemia). The results demonstrated that a disproportionately large increase in blood supply is required for a small increase in the oxygen demand, which, in turn, is strongly dependent on the resting OEF. The predicted flow-metabolism coupling in the model supports the experimental studies of spatiotemporal stimulations in humans by positron emission tomography and functional magnetic resonance imaging.
Lateral dynamic flight stability of a model hoverfly in normal and inclined stroke-plane hovering.
Xu, Na; Sun, Mao
2014-09-01
Many insects hover with their wings beating in a horizontal plane ('normal hovering'), while some insects, e.g., hoverflies and dragonflies, hover with inclined stroke-planes. Here, we investigate the lateral dynamic flight stability of a hovering model hoverfly. The aerodynamic derivatives are computed using the method of computational fluid dynamics, and the equations of motion are solved by the techniques of eigenvalue and eigenvector analysis. The following is shown: The flight of the insect is unstable at normal hovering (stroke-plane angle equals 0) and the instability becomes weaker as the stroke-plane angle increases; the flight becomes stable at a relatively large stroke-plane angle (larger than about 24°). As previously shown, the instability at normal hovering is due to a positive roll-moment/side-velocity derivative produced by the 'changing-LEV-axial-velocity' effect. When the stroke-plane angle increases, the wings bend toward the back of the body, and the 'changing-LEV-axial-velocity' effect decreases; in addition, another effect, called the 'changing-relative-velocity' effect (the 'lateral wind', which is due to the side motion of the insect, changes the relative velocity of its wings), becomes increasingly stronger. This causes the roll-moment/side-velocity derivative to first decrease and then become negative, resulting in the above change in stability as a function of the stroke-plane angle.
NASA Astrophysics Data System (ADS)
Verleysdonk, Sarah; Flores-Orozco, Adrian; Krautblatter, Michael; Kemna, Andreas
2010-05-01
Electrical resistivity tomography (ERT) has been used for the monitoring of permafrost-affected rock walls for some years now. To further enhance the interpretation of ERT measurements a deeper insight into error sources and the influence of error model parameters on the imaging results is necessary. Here, we present the effect of different statistical schemes for the determination of error parameters from the discrepancies between normal and reciprocal measurements - bin analysis and histogram analysis - using a smoothness-constrained inversion code (CRTomo) with an incorporated appropriate error model. The study site is located in galleries adjacent to the Zugspitze North Face (2800 m a.s.l.) at the border between Austria and Germany. A 20 m * 40 m rock permafrost body and its surroundings have been monitored along permanently installed transects - with electrode spacings of 1.5 m and 4.6 m - from 2007 to 2009. For data acquisition, a conventional Wenner survey was conducted as this array has proven to be the most robust array in frozen rock walls. Normal and reciprocal data were collected directly one after another to ensure identical conditions. The ERT inversion results depend strongly on the chosen parameters of the employed error model, i.e., the absolute resistance error and the relative resistance error. These parameters were derived (1) for large normal/reciprocal data sets by means of bin analyses and (2) for small normal/reciprocal data sets by means of histogram analyses. Error parameters were calculated independently for each data set of a monthly monitoring sequence to avoid the creation of artefacts (over-fitting of the data) or unnecessary loss of contrast (under-fitting of the data) in the images. The inversion results are assessed with respect to (1) raw data quality as described by the error model parameters, (2) validation via available (rock) temperature data and (3) the interpretation of the images from a geophysical as well as a
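The derivation of error-model parameters from normal-reciprocal discrepancies described above can be sketched as a linear fit of the misfit against resistance magnitude (a simplified illustration only: CRTomo's actual error model and the bin/histogram schemes are not reproduced, and all names and values here are hypothetical):

```python
import numpy as np

def fit_error_model(r_normal, r_reciprocal):
    """Fit |dR| = a + b*|R| to normal/reciprocal discrepancies:
    a is the absolute resistance error, b the relative resistance error."""
    r_normal = np.asarray(r_normal, dtype=float)
    r_reciprocal = np.asarray(r_reciprocal, dtype=float)
    r_avg = 0.5 * (np.abs(r_normal) + np.abs(r_reciprocal))
    dr = np.abs(r_normal - r_reciprocal)
    b, a = np.polyfit(r_avg, dr, 1)  # slope = relative, intercept = absolute
    return a, b

# synthetic pairs consistent with a = 0.01 ohm absolute, b = 5 % relative error
r_true = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
dr_true = 0.01 + 0.05 * r_true
a, b = fit_error_model(r_true + dr_true / 2, r_true - dr_true / 2)
```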
Nabawy, Mostafa R A; Crowther, William J
2014-05-01
This paper introduces a generic, transparent and compact model for the evaluation of the aerodynamic performance of insect-like flapping wings in hovering flight. The model is generic in that it can be applied to wings of arbitrary morphology and kinematics without the use of experimental data, is transparent in that the aerodynamic components of the model are linked directly to morphology and kinematics via physical relationships and is compact in the sense that it can be efficiently evaluated for use within a design optimization environment. An important aspect of the model is the method by which translational force coefficients for the aerodynamic model are obtained from first principles; however important insights are also provided for the morphological and kinematic treatments that improve the clarity and efficiency of the overall model. A thorough analysis of the leading-edge suction analogy model is provided and comparison of the aerodynamic model with results from application of the leading-edge suction analogy shows good agreement. The full model is evaluated against experimental data for revolving wings and good agreement is obtained for lift and drag up to 90° incidence. Comparison of the model output with data from computational fluid dynamics studies on a range of different insect species also shows good agreement with predicted weight support ratio and specific power. The validated model is used to evaluate the relative impact of different contributors to the induced power factor for the hoverfly and fruitfly. It is shown that the assumption of an ideal induced power factor (k = 1) for a normal hovering hoverfly leads to a 23% overestimation of the generated force owing to flapping. PMID:24554578
Comparing tests appear in model-check for normal regression with spatially correlated observations
NASA Astrophysics Data System (ADS)
Somayasa, Wayan; Wibawa, Gusti A.
2016-06-01
The problem of investigating the appropriateness of an assumed model in regression analysis was traditionally handled by means of an F test under independent observations. In this work we propose a more modern method based on the so-called set-indexed partial sums processes of the least squares residuals of the observations. We consider throughout this work univariate and multivariate regression models with spatially correlated observations, which are frequently encountered in statistical modelling in geosciences as well as in mining. The decision is drawn by performing an asymptotic test of statistical hypothesis based on the Kolmogorov-Smirnov and Cramér-von Mises functionals of the processes. We compare the two tests by investigating their power functions. The finite sample size behavior of the tests is studied by simulating the empirical probability of rejection of H0. It is shown that for the univariate model the KS test seems to be more powerful. Conversely, the Cramér-von Mises test tends to be more powerful than the KS test in the multivariate case.
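The two goodness-of-fit functionals named above are available in scipy; a minimal sketch applying both to synthetic residuals follows (the set-indexed partial-sums construction and spatial correlation of the paper are not reproduced; the residuals here are a placeholder):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
residuals = rng.standard_normal(200)  # stand-in for least-squares residuals

# Kolmogorov-Smirnov and Cramer-von Mises functionals against the null N(0, 1)
ks = stats.kstest(residuals, "norm")
cvm = stats.cramervonmises(residuals, "norm")
```

Both return a statistic and a p-value; the decision rule rejects the assumed regression model when the functional of the residual process is too large.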
Comparison of model estimated and measured direct-normal solar irradiance
Halthore, R.N.; Schwartz, S.E.; Michalsky, J.J.; Anderson, G.P.; Holben, B.N.; Ten Brink, H.M.
1997-12-01
Direct-normal solar irradiance (DNSI), the energy in the solar spectrum incident in unit time at the Earth's surface on a unit area perpendicular to the direction to the Sun, depends only on atmospheric extinction of solar energy without regard to the details of the extinction, whether absorption or scattering. Here we report a set of closure experiments performed in north central Oklahoma in April 1996 under cloud-free conditions, wherein measured atmospheric composition and aerosol optical thickness are input to a radiative transfer model, MODTRAN 3, to estimate DNSI, which is then compared with measured values obtained with normal incidence pyrheliometers and absolute cavity radiometers. Uncertainty in aerosol optical thickness (AOT) dominates the uncertainty in the DNSI calculation. AOT measured by an independently calibrated Sun photometer and a rotating shadow-band radiometer agree to within the uncertainties of each measurement. For 36 independent comparisons the agreement between measured and model-estimated values of DNSI falls within the combined uncertainties in the measurement (0.3-0.7%) and model calculation (1.8%), albeit with a slight average model underestimate of (-0.18±0.94)%; for a DNSI of 839 W m^-2 this corresponds to -1.5±7.9 W m^-2. The agreement is nearly independent of air mass and water-vapor path abundance. These results thus establish the accuracy of the current knowledge of the solar spectrum, its integrated power, and the atmospheric extinction as a function of wavelength as represented in MODTRAN 3. An important consequence is that atmospheric absorption of short-wave energy is accurately parametrized in the model to within the above uncertainties. © 1997 American Geophysical Union
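The statement that DNSI depends only on total atmospheric extinction is the broadband Beer-Lambert law; a toy version is sketched below. MODTRAN 3 performs the full spectral calculation, and the input numbers here (top-of-atmosphere irradiance, combined optical thickness) are illustrative, not values from the paper.

```python
import math

def direct_normal_irradiance(f0, tau_total, zenith_deg):
    """Broadband Beer-Lambert estimate of DNSI; MODTRAN 3 performs the
    full spectral version of this calculation."""
    air_mass = 1.0 / math.cos(math.radians(zenith_deg))  # plane-parallel
    return f0 * math.exp(-tau_total * air_mass)

# Illustrative inputs (not from the paper): top-of-atmosphere irradiance
# and a combined Rayleigh + aerosol + gas optical thickness.
dnsi = direct_normal_irradiance(f0=1361.0, tau_total=0.35, zenith_deg=30.0)
print(dnsi)
```

Because AOT enters the exponent directly, an AOT error propagates almost linearly into the DNSI estimate, which is why the paper finds AOT uncertainty dominating the model uncertainty.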
Normal D-region models for weapon-effects code. Technical report, 1 January-24 August 1985
Gambill
1985-09-18
This report examines several normal D-region models and their application to VLF/LF propagation predictions. Special emphasis is placed on defining models that reproduce measured normal propagation data and also provide reasonable departure/recovery conditions after an ionospheric disturbance. An interim numerical model is described that provides for selection of a range of normal D-region electron profiles and also provides for a smooth transition to disturbed profiles. Requirements are also examined for defining prescribed D-region profiles using complex aero-chemistry models.
NASA Astrophysics Data System (ADS)
Steinberg, Idan; Harbater, Osnat; Gannot, Israel
2014-07-01
The diffusion approximation is useful for many optical diagnostics modalities, such as near-infrared spectroscopy. However, the simple normal incidence, semi-infinite layer model may prove lacking in estimation of deep-tissue optical properties such as required for monitoring cerebral hemodynamics, especially in neonates. To answer this need, we present an analytical multilayered, oblique incidence diffusion model. Initially, the model equations are derived in vector-matrix form to facilitate fast and simple computation. Then, the spatiotemporal reflectance predicted by the model for a complex neonate head is compared with time-resolved Monte Carlo (TRMC) simulations under a wide range of physiologically feasible parameters. The high accuracy of the multilayer model is demonstrated in that the deviation from TRMC simulations is only a few percent even under the toughest conditions. We then turn to solve the inverse problem and estimate the oxygen saturation of deep brain tissues based on the temporal and spatial behaviors of the reflectance. Results indicate that temporal features of the reflectance are more sensitive to deep-layer optical parameters. The accuracy of estimation is shown to be more accurate and robust than the commonly used single-layer diffusion model. Finally, the limitations of such approaches are discussed thoroughly.
Anatomically realistic multiscale models of normal and abnormal gastrointestinal electrical activity
Cheng, Leo K; Komuro, Rie; Austin, Travis M; Buist, Martin L; Pullan, Andrew J
2007-01-01
One of the major aims of the International Union of Physiological Sciences (IUPS) Physiome Project is to develop multiscale mathematical and computer models that can be used to help understand human health. We present here a small facet of this broad plan that applies to the gastrointestinal system. Specifically, we present an anatomically and physiologically based modelling framework that is capable of simulating normal and pathological electrical activity within the stomach and small intestine. The continuum models used within this framework have been created using anatomical information derived from common medical imaging modalities and data from the Visible Human Project. These models explicitly incorporate the various smooth muscle layers and networks of interstitial cells of Cajal (ICC) that are known to exist within the walls of the stomach and small bowel. Electrical activity within individual ICCs and smooth muscle cells is simulated using a previously published simplified representation of the cell level electrical activity. This simulated cell level activity is incorporated into a bidomain representation of the tissue, allowing electrical activity of the entire stomach or intestine to be simulated in the anatomically derived models. This electrical modelling framework successfully replicates many of the qualitative features of the slow wave activity within the stomach and intestine and has also been used to investigate activity associated with functional uncoupling of the stomach. PMID:17457969
Cooper, Emily A
2016-04-01
Biological sensory systems share a number of organizing principles. One such principle is the formation of parallel streams. In the visual system, information about bright and dark features is largely conveyed via two separate streams: the ON and OFF pathways. While brightness and darkness can be considered symmetric and opposite forms of visual contrast, the response properties of cells in the ON and OFF pathways are decidedly asymmetric. Here, we ask whether a simple contrast-encoding model predicts asymmetries for brights and darks that are similar to the asymmetries found in the ON and OFF pathways. Importantly, this model does not include any explicit differences in how the visual system represents brights and darks, but it does include a common normalization mechanism. The phenomena captured by the model include (1) nonlinear contrast response functions, (2) greater nonlinearities in the responses to darks, and (3) larger responses to dark contrasts. We report a direct, quantitative comparison between these model predictions and previously published electrophysiological measurements from the retina and thalamus (guinea pig and cat, respectively). This work suggests that the simple computation of visual contrast may account for a range of early visual processing nonlinearities. Assessing explicit models of sensory representations is essential for understanding which features of neuronal activity these models can and cannot predict, and for investigating how early computations may reverberate through the sensory pathways. PMID:27044852
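A common normalization mechanism of the kind the abstract invokes is conventionally written as a divisive (Naka-Rushton-style) contrast response function. The sketch below shows the saturating nonlinearity such a function produces; the parameter values are illustrative, not the paper's fits.

```python
import numpy as np

def normalized_response(contrast, c50=0.15, n=2.0, r_max=1.0):
    """Naka-Rushton-style divisive normalization of contrast; parameter
    values are illustrative, not the paper's fits."""
    c = np.abs(contrast)
    return r_max * c ** n / (c ** n + c50 ** n)

contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
resp = normalized_response(contrasts)
print(resp)  # saturating contrast response: gains shrink at high contrast
```

In models of this family, asymmetries between bright and dark responses can emerge from the nonlinearity itself rather than from separate ON/OFF encodings, which is the comparison the paper pursues.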
A quantum probability perspective on borderline vagueness.
Blutner, Reinhard; Pothos, Emmanuel M; Bruza, Peter
2013-10-01
The term "vagueness" describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno's sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental finding and extends Alxatib's and Pelletier's () theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier and its probabilistic pendant is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can be explained then as a quantum interference phenomenon. PMID:24039093
Brake, M. R. W.
2015-02-17
Impact between metallic surfaces is a phenomenon that is ubiquitous in the design and analysis of mechanical systems. To model this phenomenon, a new formulation for frictional elastic–plastic contact between two surfaces is developed. The formulation is developed to consider both frictional, oblique contact (of which normal, frictionless contact is a limiting case) and strain hardening effects. The constitutive model for normal contact is developed as two contiguous loading domains: the elastic regime and a transitionary region in which the plastic response of the materials develops and the elastic response abates. For unloading, the constitutive model is based on an elastic process. Moreover, the normal contact model is assumed to only couple one-way with the frictional/tangential contact model, which results in the normal contact model being independent of the frictional effects. Frictional, tangential contact is modeled using a microslip model that is developed to consider the pressure distribution that develops from the elastic–plastic normal contact. This model is validated through comparisons with experimental results reported in the literature, and is demonstrated to be significantly more accurate than 10 other normal contact models and three other tangential contact models found in the literature.
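The elastic regime of a two-domain constitutive model like the one described above is conventionally the Hertzian sphere-on-flat solution, sketched below as the limiting case. The material values are assumed for illustration, and the paper's elastic-plastic transition and friction coupling are omitted.

```python
import math

def hertz_normal_force(delta, radius, e1, nu1, e2, nu2):
    """Hertzian sphere-on-flat normal force: F = (4/3) E* sqrt(R) d^(3/2).
    Covers only the elastic loading regime of a two-domain contact model."""
    e_star = 1.0 / ((1.0 - nu1 ** 2) / e1 + (1.0 - nu2 ** 2) / e2)
    return (4.0 / 3.0) * e_star * math.sqrt(radius) * delta ** 1.5

# Steel-on-steel example values (assumed for illustration).
f = hertz_normal_force(delta=1e-6, radius=5e-3,
                       e1=200e9, nu1=0.3, e2=200e9, nu2=0.3)
print(f)
```

The d^(3/2) scaling is the signature of elastic contact; elastic-plastic models of the kind the paper develops depart from this curve once the mean pressure approaches the material yield strength.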
A Spherical Chandrasekhar-Mass Delayed-Detonation Model for a Normal Type Ia Supernova
NASA Astrophysics Data System (ADS)
Blondin, Stéphane; Dessart, Luc; Hillier, D. John
2015-06-01
The most widely accepted model for Type Ia supernovae (SNe Ia) is the thermonuclear disruption of a White Dwarf (WD) star in a binary system, although there is ongoing discussion about the combustion mode, the progenitor mass, and the nature of the binary companion. Observational evidence for diversity in the SN Ia population seems to require multiple progenitor channels or explosion mechanisms. In the standard single-degenerate (SD) scenario, the WD grows in mass through accretion of H-rich or He-rich material from a non-degenerate donor (e.g., a main-sequence star, a subgiant, a He star, or a red giant). When the WD is sufficiently close to the Chandrasekhar limit (˜1.4 M⊙), a subsonic deflagration front forms near the WD center which eventually transitions to a supersonic detonation (the so-called “delayed-detonation” model) and unbinds the star. The efficiency of the WD growth in mass remains uncertain, as repeated nova outbursts during the accretion process result in mass ejection from the WD surface. Moreover, the lack of observational signatures of the binary companion has cast some doubts on the SD scenario, and recent hydrodynamical simulations have put forward WD-WD mergers and collisions as viable alternatives. However, as shown here, the standard Chandrasekhar-mass delayed-detonation model remains adequate to explain many normal SNe Ia, in particular those displaying broad Si II 6355 Å lines. We present non-local-thermodynamic-equilibrium time-dependent radiative transfer simulations performed with CMFGEN of a spherically-symmetric delayed-detonation model from a Chandrasekhar-mass WD progenitor with 0.51 M⊙ of 56Ni (Fig. 1 and Table 1), and confront our results with the observed light curves and spectra of the normal Type Ia SN 2002bo over the first 100 days of its evolution. With no fine tuning, the model reproduces well the bolometric (Fig. 2) and multi-band light curves, the secondary near-infrared maxima (Fig. 3), and the spectroscopic
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.
1995-01-01
A propagation model method for extracting the normal incidence impedance of an acoustic material installed as a finite length segment in a wall of a duct carrying a nonprogressive wave field is presented. The method recasts the determination of the unknown impedance as the minimization of the normalized wall pressure error function. A finite element propagation model is combined with a coarse/fine grid impedance plane search technique to extract the impedance of the material. Results are presented for three different materials for which the impedance is known. For each material, the input data required for the prediction scheme was computed from modal theory and then contaminated by random error. The finite element method reproduces the known impedance of each material almost exactly for random errors typical of those found in many measurement environments. Thus, the method developed here provides a means for determining the impedance of materials in a nonprogressive wave environment such as that usually encountered in a commercial aircraft engine and most laboratory settings.
New animal models to study the role of tyrosinase in normal retinal development.
Lavado, Alfonso; Montoliu, Lluis
2006-01-01
Albino animals display a hypopigmented phenotype associated with several visual abnormalities, including rod photoreceptor cell deficits, abnormal patterns of connections between the eye and the brain and a general underdevelopment of central retina. Oculocutaneous albinism type I, a common form of albinism, is caused by mutations in the tyrosinase gene. In mice, the albino phenotype can be corrected by functional tyrosinase transgenes. Tyrosinase transgenic animals not only show normal pigmentation but the correction of all visual abnormalities associated with albinism, confirming a role of tyrosinase, a key enzyme in melanin biosynthesis, in normal retinal development. Here, we will discuss recent work carried out with new tyrosinase transgenic mouse models, to further analyse the role of tyrosinase in retinal development. We will first report a transgenic model with inducible tyrosinase expression that has been used to address the regulated activation of this gene and its associated effects on the development of the visual system. Second, we will comment on an interesting yeast artificial chromosome (YAC)-tyrosinase transgene, lacking important regulatory elements, that has highlighted the significance of local interactions between the retinal pigment epithelium (RPE) and developing neural retina.
NASA Astrophysics Data System (ADS)
Shubitidze, F.; O'Neill, K.; Barrowes, B. E.; Shamatava, I.; Fernández, J. P.; Sun, K.; Paulsen, K. D.
2007-03-01
This paper presents an application of the normalized surface magnetic charge (NSMC) model to discriminate objects of interest, such as unexploded ordnance (UXO), from innocuous items in cases when UXO electromagnetic induction (EMI) responses are contaminated by signals from other objects. Over the entire EMI spectrum considered here (tens of Hertz up to several hundreds of kHz), the scattered magnetic field outside the object can be produced mathematically by equivalent magnetic charges. The amplitudes of these charges are determined from measurement data and normalized by the excitation field. The model takes into account the scatterer's heterogeneity and near- and far-field effects. For classification algorithms, the frequency spectrum of the total NSMC is proposed and investigated as a discriminant. The NSMC is combined with the differential evolution (DE) algorithm in a two-step inversion procedure. To illustrate the applicability of the DE-NSMC algorithm, blind test data are processed and analyzed for cases in which signals from nearby objects frequently overlap. The method was highly successful in distinguishing UXO from accompanying clutter.
New classification of lingual arch form in normal occlusion using three dimensional virtual models
Park, Kyung Hee; Bayome, Mohamed; Park, Jae Hyun; Lee, Jeong Woo; Baek, Seung-Hak
2015-01-01
Objective The purposes of this study were 1) to classify lingual dental arch form types based on the lingual bracket points and 2) to provide a new lingual arch form template based on this classification for clinical application through the analysis of three-dimensional virtual models of normal occlusion sample. Methods Maxillary and mandibular casts of 115 young adults with normal occlusion were scanned in their occluded positions and lingual bracket points were digitized on the virtual models by using Rapidform 2006 software. Sixty-eight cases (dataset 1) were used in K-means cluster analysis to classify arch forms with intercanine, interpremolar and intermolar widths and width/depth ratios as determinants. The best-fit curves of the mean arch forms were generated. The remaining cases (dataset 2) were mapped into the obtained clusters and a multivariate test was performed to assess the differences between the clusters. Results Four-cluster classification demonstrated maximum intercluster distance. Wide, narrow, tapering, and ovoid types were described according to the intercanine and intermolar widths and their best-fit curves were depicted. No significant differences in arch depths existed among the clusters. Strong to moderate correlations were found between maxillary and mandibular arch widths. Conclusions Lingual arch forms have been classified into 4 types based on their anterior and posterior dimensions. A template of the 4 arch forms has been depicted. Three-dimensional analysis of the lingual bracket points provides more accurate identification of arch form and, consequently, archwire selection. PMID:25798413
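The classification step the study describes, K-means clustering on arch-width determinants, can be sketched on synthetic two-width data. All measurements and group centers below are made up for illustration; the study used intercanine, interpremolar, and intermolar widths plus width/depth ratios from 68 digitized models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic lingual arch measurements (mm): intercanine and intermolar
# widths for four made-up arch-form groups (values are illustrative only).
centers = np.array([[22.0, 34.0], [26.0, 38.0], [24.0, 40.0], [28.0, 42.0]])
data = np.vstack([c + rng.normal(scale=0.4, size=(30, 2)) for c in centers])

def kmeans(x, k, iters=50, seed=0):
    """Plain k-means, the clustering method the study used on arch widths."""
    r = np.random.default_rng(seed)
    cent = x[r.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        # Keep a centroid in place if its cluster happens to empty out.
        cent = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                         else cent[j] for j in range(k)])
    return cent, labels

cent, labels = kmeans(data, k=4)
print(np.sort(cent[:, 0]))
```

The recovered centroids play the role of the mean arch forms from which the study's best-fit template curves were generated.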
Robust normal mode constraints on inner-core anisotropy from model space search.
Beghein, Caroline; Trampert, Jeannot
2003-01-24
A technique for searching full model space that was applied to measurements of anomalously split normal modes showed a robust pattern of P-wave and S-wave anisotropy in the inner core. The parameter describing P-wave anisotropy changes sign around a radius of 400 kilometers, whereas S-wave anisotropy is small in the upper two-thirds of the inner core and becomes negative at greater depths. Our results agree with observed travel-time anomalies of rays traveling at epicentral distances varying from 150 degrees to 180 degrees. The models may be explained by progressively tilted hexagonal close-packed iron in the upper half of the inner core and could suggest a different iron phase in the center.
Food addiction spectrum: a theoretical model from normality to eating and overeating disorders.
Piccinni, Armando; Marazziti, Donatella; Vanelli, Federica; Franceschini, Caterina; Baroni, Stefano; Costanzo, Davide; Cremone, Ivan Mirko; Veltri, Antonello; Dell'Osso, Liliana
2015-01-01
The authors comment on the recently proposed food addiction spectrum that represents a theoretical model to understand the continuum between several conditions ranging from normality to pathological states, including eating disorders and obesity, as well as why some individuals show a peculiar attachment to food that can become an addiction. Further, they review the possible neurobiological underpinnings of these conditions, which include dopaminergic neurotransmission and circuits that have long been implicated in drug addiction. This article also aims to stimulate a debate regarding the possible model of a food (or eating) addiction spectrum that may be helpful in the search for novel therapeutic approaches to different pathological states related to disturbed feeding or overeating.
Timofeyuk, N. K.
2010-06-15
Overlap functions for one-nucleon removal are calculated as solutions of the inhomogeneous equation. The source term for this equation is generated by the 0ℏω no-core shell-model wave functions and the effective nucleon-nucleon (NN) interactions that fit oscillator matrix elements derived from the NN scattering data. For the lightest A<=4 nuclei this method gives reasonable agreement with exact ab initio calculations. For 4normalization coefficients. The spectroscopic factors obtained show systematic deviation from the corresponding shell-model values. This deviation correlates with nucleon separation energies and follows a similar trend seen in the reduction factor of the nucleon knockout cross sections. Comparison with the overlap functions and spectroscopic factors obtained in the variational Monte Carlo method is presented and discussed.
One Dimension Analytical Model of Normal Ballistic Impact on Ceramic/Metal Gradient Armor
Liu Lisheng; Zhang Qingjie; Zhai Pengcheng; Cao Dongfeng
2008-02-15
An analytical model of normal ballistic impact on the ceramic/metal gradient armor, which is based on modified Alekseevskii-Tate equations, has been developed. The process of the gradient armour impacted by the long rod can be divided into four stages in this model. The first stage comprises the projectile's mass erosion (flowing) phase, mushrooming phase, and rigid phase; the second is the formation of the comminuted ceramic conoid; the third is the penetration of the gradient layer; and the last is the penetration of the metal back-up plate. The equations of the third stage have been advanced by assuming the behavior of the gradient layer to be rigid-plastic and considering the effect of strain rate on the dynamic yield strength.
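The classical (unmodified) Alekseevskii-Tate equations that underlie the model can be sketched with an explicit integrator for the eroding-rod phase only. The mushrooming and rigid phases and the ceramic-conoid stage of the paper's model are omitted, and all material values below are illustrative.

```python
import math

def tate_penetration(v0, L0, rho_p, rho_t, Yp, Rt, dt=1e-7):
    """Explicit integration of the classical Alekseevskii-Tate equations
    (eroding-rod phase only; the paper's additional phases are omitted)."""
    v, L, depth = v0, L0, 0.0
    mu = rho_t / rho_p
    while v > 0.0 and L > 1e-4:
        # Penetration velocity u from the modified Bernoulli equation:
        # 0.5*rho_p*(v - u)^2 + Yp = 0.5*rho_t*u^2 + Rt
        a = 1.0 - mu
        b = -2.0 * v
        c = v * v + 2.0 * (Yp - Rt) / rho_p
        disc = b * b - 4.0 * a * c
        if disc < 0.0:          # rod can no longer penetrate
            break
        u = max((-b - math.sqrt(disc)) / (2.0 * a), 0.0)
        depth += u * dt
        v += -(Yp / (rho_p * L)) * dt   # rod deceleration
        L += -(v - u) * dt              # rod erosion
    return depth

# Tungsten-alloy rod into a steel-like target (illustrative values).
p = tate_penetration(v0=1500.0, L0=0.1, rho_p=17600.0, rho_t=7800.0,
                     Yp=1.0e9, Rt=4.5e9)
print(p)
```

In the hydrodynamic limit (Yp = Rt) the quadratic reduces to the familiar u = v / (1 + sqrt(rho_t / rho_p)), which is a useful sanity check on the root selection.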
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2010-12-01
Motivated by IF-probability theory (intuitionistic fuzzy), we study n-component probability domains in which each event represents a body of competing components and the range of a state represents a simplex S_n of n-tuples of possible rewards; the sum of the rewards is a number from [0,1]. For n=1 we get fuzzy events, for example a bold algebra, and the corresponding fuzzy probability theory can be developed within the category ID of D-posets (equivalently, effect algebras) of fuzzy sets and sequentially continuous D-homomorphisms. For n=2 we get IF-events, i.e., pairs (μ, ν) of fuzzy sets μ, ν ∈ [0,1]^X such that μ(x) + ν(x) ≤ 1 for all x ∈ X, but we order our pairs (events) coordinatewise. Hence the structure of IF-events (where (μ1, ν1) ≤ (μ2, ν2) whenever μ1 ≤ μ2 and ν2 ≤ ν1) is different and, consequently, the resulting IF-probability theory models a different principle. The category ID is cogenerated by I = [0,1] (objects of ID are subobjects of powers I^X), has nice properties, and basic probabilistic notions and constructions are categorical. For example, states are morphisms. We introduce the category S_nD cogenerated by S_n = {(x_1, x_2, ..., x_n) ∈ I^n : Σ_{i=1}^n x_i ≤ 1}, carrying the coordinatewise partial order, difference, and sequential convergence, and we show how basic probability notions can be defined within S_nD.
NASA Astrophysics Data System (ADS)
Naliboff, J. B.; Billen, M. I.
2010-12-01
A characteristic feature of global subduction zones is normal faulting in the outer rise region, which reflects flexure of the downgoing plate in response to the slab pull force. Variations in the patterns of outer rise normal faulting between different subduction zones likely reflects both the magnitude of flexural induced topography and the strength of the downgoing plate. In particular, the rheology of the uppermost oceanic lithosphere is likely to strongly control the faulting patterns, which have been well documented recently in both the Middle and South American trenches. These recent observations of outer rise faulting provide a unique opportunity to test different rheological models of the oceanic lithosphere using geodynamic numerical experiments. Here, we develop a new approach for modeling deformation in the outer rise and trench regions of downgoing slabs, and discuss preliminary 2-D numerical models examining the relationship between faulting patterns and the rheology of the oceanic lithosphere. To model viscous and brittle deformation within the oceanic lithosphere we use the CIG (Computational Infrastructure for Geodynamics) finite element code Gale, which is designed to solve long-term tectonic problems. In order to resolve deformation features on geologically realistic scales (< 1 km), we model only the portion of the subduction system seaward of the trench. Horizontal and vertical stress boundary conditions on the side walls drive subduction and reflect, respectively, the ridge-push and slab-pull plate-driving forces. The initial viscosity structure of the oceanic lithosphere and underlying asthenosphere follow a composite viscosity law that takes into account both Newtonian and non-Newtonian deformation. The viscosity structure is consequently governed primarily by the strain rate and thermal structure, which follows a half-space cooling model. Modification of the viscosity structure and development of discrete shear zones occurs during yielding
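The half-space cooling thermal structure and composite viscosity law mentioned above can be sketched as follows. The flow-law prefactors and activation energies are illustrative placeholders, not the values used in the study, and the harmonic-mean combination is one common way to build a composite Newtonian plus non-Newtonian viscosity.

```python
import math

def halfspace_temperature(z, age_s, t_surf=273.0, t_mantle=1623.0, kappa=1e-6):
    """Half-space cooling: T(z) = Ts + (Tm - Ts) * erf(z / (2 sqrt(kappa t)))."""
    return t_surf + (t_mantle - t_surf) * math.erf(z / (2.0 * math.sqrt(kappa * age_s)))

def composite_viscosity(strain_rate, temp, a_diff=1e-10, a_disl=1e-16,
                        e_diff=3.0e5, e_disl=5.4e5, n=3.5, r_gas=8.314):
    """Harmonic mean of diffusion-creep (Newtonian) and dislocation-creep
    (non-Newtonian) viscosities; flow-law parameters are placeholders."""
    eta_diff = (1.0 / a_diff) * math.exp(e_diff / (r_gas * temp))
    eta_disl = (1.0 / a_disl) ** (1.0 / n) * strain_rate ** ((1.0 - n) / n) * \
        math.exp(e_disl / (n * r_gas * temp))
    return 1.0 / (1.0 / eta_diff + 1.0 / eta_disl)

age = 60e6 * 3.15e7                   # ~60 Myr plate age in seconds
t = halfspace_temperature(30e3, age)  # temperature at 30 km depth
eta = composite_viscosity(1e-15, t)   # viscosity at a typical strain rate
print(t, eta)
```

Because the effective viscosity depends on strain rate as well as temperature, deformation localizes where strain rates are high, which is the mechanism by which discrete shear zones can develop during yielding in such models.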
May, Carl R; Mair, Frances S; Dowrick, Christopher F; Finch, Tracy L
2007-01-01
Background The Normalization Process Model is a conceptual tool intended to assist in understanding the factors that affect implementation processes in clinical trials and other evaluations of complex interventions. It focuses on the ways that the implementation of complex interventions is shaped by problems of workability and integration. Method In this paper the model is applied to two different complex trials: (i) the delivery of problem solving therapies for psychosocial distress, and (ii) the delivery of nurse-led clinics for heart failure treatment in primary care. Results Application of the model shows how process evaluations need to focus on more than the immediate contexts in which trial outcomes are generated. Problems relating to intervention workability and integration also need to be understood. The model may be used effectively to explain the implementation process in trials of complex interventions. Conclusion The model invites evaluators to attend equally to considering how a complex intervention interacts with existing patterns of service organization, professional practice, and professional-patient interaction. The justification for this may be found in the abundance of reports of clinical effectiveness for interventions that have little hope of being implemented in real healthcare settings. PMID:17650326
Testing deviation for a set of serial dilution most probable numbers from a Poisson-binomial model.
Blodgett, Robert J
2006-01-01
A serial dilution experiment estimates the microbial concentration in a broth by inoculating several sets of tubes with various amounts of the broth. The estimation uses the Poisson distribution and the number of tubes in each of these sets that show growth. Several factors, such as interfering microbes, toxins, or disaggregation of adhering microbes, may distort the results of a serial dilution experiment. A mild enough distortion may not raise suspicion from any single outcome. The test introduced here judges whether the entire set of serial dilution outcomes appears unusual. The test forms ranked lists of the possible outcomes; the observed set is declared unusual if, for example, any observed outcome appears on the first list, or more than one appears on the first or second list, and so on. A similar test applies whenever there are only finitely many possible outcomes, each outcome has a calculable probability, and few outcomes have tied probabilities.
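As an illustration of the Poisson model underlying most-probable-number (MPN) estimation: under that model a tube inoculated with volume v from a broth of concentration c shows growth with probability 1 - exp(-c v), and the concentration is estimated by maximizing the resulting binomial likelihood. The sketch below is generic (the ternary-search fit, dilution volumes, and search bounds are my own choices, not the paper's test):

```python
import math

def p_growth(conc, volume):
    # Poisson model: a tube shows growth iff it receives >= 1 organism
    return 1.0 - math.exp(-conc * volume)

def log_likelihood(conc, dilutions):
    # dilutions: list of (volume_per_tube, n_tubes, n_positive)
    ll = 0.0
    for v, n, k in dilutions:
        p = p_growth(conc, v)
        if 0.0 < p < 1.0:
            ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
        elif (p == 0.0 and k > 0) or (p == 1.0 and k < n):
            return float("-inf")  # outcome impossible at this concentration
    return ll

def mpn_estimate(dilutions, lo=1e-3, hi=1e6, iters=200):
    # ternary search on log-concentration (the likelihood is unimodal)
    a, b = math.log(lo), math.log(hi)
    for _ in range(iters):
        m1 = a + (b - a) / 3.0
        m2 = b - (b - a) / 3.0
        if log_likelihood(math.exp(m1), dilutions) < log_likelihood(math.exp(m2), dilutions):
            a = m1
        else:
            b = m2
    return math.exp(0.5 * (a + b))
```

For the classic three-dilution design with 0.1, 0.01, and 0.001 g portions, three tubes each, and a 3-1-0 positive pattern, the maximum-likelihood estimate lands in the vicinity of the tabulated MPN (on the order of 40 per g).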
NASA Technical Reports Server (NTRS)
Courey, Karim; Wright, Clara; Asfour, Shihab; Bayliss, Jon; Ludwig, Larry
2008-01-01
Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has a currently unknown probability associated with it. Due to contact resistance, electrical shorts may not occur at lower voltage levels. In this experiment, we study the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From this data, we can estimate the probability of an electrical short, as a function of voltage, given that a free tin whisker has bridged two adjacent exposed electrical conductors. In addition, three tin whiskers grown from the same Space Shuttle Orbiter card guide used in the aforementioned experiment were cross sectioned and studied using a focused ion beam (FIB).
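A generic way to estimate a voltage-dependent short-circuit probability from binary bridged-whisker trial outcomes is logistic regression. The sketch below uses invented voltages and outcomes purely for illustration; it is not the experiment's data or analysis method:

```python
import math

def fit_logistic(volts, shorts, lr=0.05, steps=20000):
    """Fit P(short | V) = 1 / (1 + exp(-(a + b*V))) by gradient ascent
    on the log-likelihood of binary outcomes (1 = short, 0 = no short)."""
    a, b = 0.0, 0.0
    n = len(volts)
    for _ in range(steps):
        ga = gb = 0.0
        for v, y in zip(volts, shorts):
            p = 1.0 / (1.0 + math.exp(-(a + b * v)))
            ga += (y - p)          # gradient w.r.t. intercept
            gb += (y - p) * v      # gradient w.r.t. slope
        a += lr * ga / n
        b += lr * gb / n
    return a, b
```

With hypothetical data in which shorts only occur above roughly 10 V, the fitted curve assigns a high short probability at 20 V and a low one at 2 V, which is the qualitative behavior the abstract describes (contact resistance suppressing shorts at lower voltages).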
NASA Astrophysics Data System (ADS)
Zemskov, Serguey V.; Jonkers, Henk M.; Vermolen, Fred J.
The present study is performed in the framework of the investigation of the potential of bacteria to act as a catalyst of the self-healing process in concrete, i.e. their ability to repair occurring cracks autonomously. Spherical clay capsules containing the healing agent (calcium lactate) are embedded in the concrete structure. Water entering a freshly formed crack releases the healing agent and activates the bacteria, which will seal the crack through the process of metabolically mediated calcium carbonate precipitation. In the paper, an analytic formalism is developed for the computation of the probability that a crack hits an encapsulated particle, i.e. the probability that the self-healing process starts. Most computations are performed in closed algebraic form in the computer algebra system Mathematica, which allows the final step of the calculation to be performed numerically with higher accuracy.
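The paper's probability is derived analytically; a Monte Carlo sketch of a heavily simplified version of the geometry (a planar crack at a fixed position spanning a unit cube, capsule centers uniform in the cube, both assumptions mine rather than the paper's model) is:

```python
import random

def crack_hits_capsule(n_capsules, radius, trials=100_000, seed=1):
    """Estimate the probability that a planar crack at x = 0.5 through a
    unit cube intersects at least one of n spherical capsules whose
    centers are placed uniformly at random in the cube."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # the plane hits a capsule iff the center's x lies within `radius`
        if any(abs(rng.random() - 0.5) < radius for _ in range(n_capsules)):
            hits += 1
    return hits / trials
```

In this toy geometry the closed form is 1 - (1 - 2r)^n for capsule radius r < 0.5, so the simulation mainly serves as a check that an analytic hitting probability of this kind can be validated numerically.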
NASA Astrophysics Data System (ADS)
Warrell, K. F.; Withjack, M. O.; Schlische, R. W.
2014-12-01
Field- and seismic-reflection-based studies have documented the influence of pre-existing thrust faults on normal-fault development during subsequent extension. Published experimental (analog) models of shortening followed by extension with dry sand as the modeling medium show limited extensional reactivation of moderate-angle thrust faults (dipping > 40°). These dry sand models provide insight into the influence of pre-existing thrusts on normal-fault development, but these models have not reactivated low-angle (< 35°) thrust faults as seen in nature. New experimental (analog) models, using wet clay over silicone polymer to simulate brittle upper crust over ductile lower crust, suggest that low-angle thrust faults from an older shortening phase can reactivate as normal faults. In two-phase models of shortening followed by extension, normal faults nucleate above pre-existing thrust faults and likely link with thrusts at depth to create listric faults, movement on which produces rollover folds. Faults grow and link more rapidly in two-phase than in single-phase (extension-only) models. Fewer faults with higher displacements form in two-phase models, likely because, for a given displacement magnitude, a low-angle normal fault accommodates more horizontal extension than a high-angle normal fault. The resulting rift basins are wider and shallower than those forming along high-angle normal faults. Features in these models are similar to natural examples. Seismic-reflection profiles from the outer Hebrides, offshore Scotland, show listric faults partially reactivating pre-existing thrust faults with a rollover fold in the hanging wall; in crystalline basement, the thrust is reactivated, and in overlying sedimentary strata, a new, high-angle normal fault forms. Profiles from the Chignecto subbasin of the Fundy basin, offshore Canada, show full reactivation of thrust faults as low-angle normal faults where crystalline basement rocks make up the footwall.
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
ERIC Educational Resources Information Center
Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire
2013-01-01
This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and evaluation analysis, specifically. Unlike the normal model, the GB model allows us to capture some real characteristics of data, and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…
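For context, one common four-parameter ("generalized") beta density on a bounded interval, which, unlike the normal density, can represent skewness and hard score limits, is shown below. This is a standard form given as an assumption; the paper's exact parameterization may differ:

```latex
% Four-parameter beta density on [a, b]; B(\alpha,\beta) is the beta function.
f(x) = \frac{(x-a)^{\alpha-1}\,(b-x)^{\beta-1}}
            {B(\alpha,\beta)\,(b-a)^{\alpha+\beta-1}},
\qquad a \le x \le b .
```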
NASA Astrophysics Data System (ADS)
Strak, V.; Dominguez, S.; Petit, C.; Meyer, B.; Loget, N.
2011-12-01
The growth of relief in active tectonic areas is mainly controlled by the interactions between tectonics and surface processes (erosion and sedimentation). The study of long-lived morphologic markers formed by these interactions can help in quantifying the competing effects of tectonics, erosion and sedimentation. In regions experiencing active extension, river-long profiles and faceted spurs (triangular facets) can help in understanding the development of mountainous topography along normal fault scarps. In this study, we developed analogue experiments that simulate the morphologic evolution of a mountain range bounded by a normal fault. This paper focuses on the effect of the fault slip rate on the morphologic evolution of the footwall by performing three analogue experiments with different fault slip rates under a constant rainfall rate. A morphometric analysis of the modelled catchments allows a comparison with a natural case (Tunka half-graben, Siberia). After a certain amount of fault slip, the modelled footwall topography reaches a dynamic equilibrium (i.e., erosion balances tectonic uplift relative to the base level) close to the fault, whereas the topography farther from the fault is still being dissected due to regressive erosion. We show that the rates of vertical erosion in the area where dynamic equilibrium is reached and the rate of regressive erosion are linearly correlated with the fault throw rate. Facet morphology seems to depend on the fault slip rate except for the fastest experiment, where faceted spurs are degraded due to mass wasting. A stream-power law is computed for the area wherein rivers reach a topographic equilibrium. We show that the erosional capacity of the system depends on the fault slip rate. Finally, our results demonstrate the possibility of preserving convex river-long profiles on the long-term under steady external (tectonic uplift and rainfall) conditions.
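The stream-power law mentioned above is conventionally written as follows (standard form, not the experiment-specific coefficients):

```latex
% Stream-power incision law: E erosion rate, K erodibility coefficient,
% A upstream drainage area (a proxy for discharge), S local channel slope,
% m and n empirical exponents.
E = K\,A^{m}\,S^{n}.
```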
Welton, Nicky J; Ades, A E
2005-01-01
Markov transition models are frequently used to model disease progression. The authors show how the solution to Kolmogorov's forward equations can be exploited to map between transition rates and transition probabilities in multistate models. They provide a uniform, Bayesian treatment of estimation and propagation of uncertainty of transition rates and probabilities when 1) observations are available on all transitions and exact time at risk in each state (fully observed data) and 2) observations are on initial state and final state after a fixed interval of time but not on the sequence of transitions (partially observed data). The authors show how underlying transition rates can be recovered from partially observed data using Markov chain Monte Carlo methods in WinBUGS, and they suggest diagnostics to investigate inconsistencies between evidence from different starting states. An illustrative example for a 3-state model is given, which shows how the methods extend to more complex Markov models using the software WBDiff to compute solutions. Finally, the authors illustrate how to statistically combine data from multiple sources, including partially observed data at several follow-up times, and also how to calibrate a Markov model to be consistent with data from one specific study. PMID:16282214
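The rate-to-probability mapping referred to here is P(t) = exp(Qt), the solution of Kolmogorov's forward equations dP/dt = PQ with P(0) = I, where Q is the transition-rate matrix (rows sum to zero). A minimal numerical sketch of that mapping (plain scaling-and-squaring, not the WinBUGS/WBDiff machinery used in the paper):

```python
import numpy as np

def transition_probabilities(Q, t, terms=25):
    """Map a transition-rate matrix Q (rows sum to 0) to the transition-
    probability matrix P(t) = expm(Q t), the solution of Kolmogorov's
    forward equations dP/dt = P Q with P(0) = I.
    Implemented via scaling-and-squaring of a truncated Taylor series."""
    squarings = 20
    A = np.asarray(Q, dtype=float) * t / (2.0 ** squarings)
    P = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k   # next Taylor term A^k / k!
        P = P + term
    for _ in range(squarings):
        P = P @ P             # undo the scaling
    return P
```

For a hypothetical 3-state progressive model (well, ill, dead) with rates 0.2 from well to ill and 0.5 from ill to dead, P(1) has rows summing to one and the probability of remaining well for one time unit equals exp(-0.2), as expected for an exponential sojourn time.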
A normal stress subgrid-scale eddy viscosity model in large eddy simulation
NASA Technical Reports Server (NTRS)
Horiuti, K.; Mansour, N. N.; Kim, John J.
1993-01-01
The Smagorinsky subgrid-scale eddy viscosity model (SGS-EVM) is commonly used in large eddy simulations (LES) to represent the effects of the unresolved scales on the resolved scales. This model is known to be limited because its constant must be optimized in different flows, and it must be modified with a damping function to account for near-wall effects. The recent dynamic model is designed to overcome these limitations but is computationally intensive as compared to the traditional SGS-EVM. In a recent study using direct numerical simulation data, Horiuti has shown that these drawbacks are due mainly to the use of an improper velocity scale in the SGS-EVM. He also proposed the use of the subgrid-scale normal stress as a new velocity scale that was inspired by a high-order anisotropic representation model. The testing of Horiuti, however, was conducted using DNS data from a low Reynolds number channel flow simulation. It was felt that further testing at higher Reynolds numbers, and also using different flows (other than wall-bounded shear flows), was a necessary step to establish the validity of the new model. This is the primary motivation of the present study. The objective is to test the new model using DNS databases of high Reynolds number channel and fully developed turbulent mixing layer flows. The use of both channel (wall-bounded) and mixing layer flows is important for the development of accurate LES models because these two flows encompass many characteristic features of complex turbulent flows.
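For reference, the standard Smagorinsky closure is given below; the normal-stress model discussed above modifies it by replacing the velocity scale based on the resolved strain rate with one based on the subgrid-scale normal stress (the exact modified form is not reproduced here):

```latex
% Standard Smagorinsky closure for the deviatoric SGS stress:
% \bar{S}_{ij} resolved strain-rate tensor, \Delta filter width,
% C_s the Smagorinsky constant.
\tau_{ij} - \tfrac{1}{3}\,\delta_{ij}\,\tau_{kk}
  = -2\,\nu_{\mathrm{SGS}}\,\bar{S}_{ij},
\qquad
\nu_{\mathrm{SGS}} = (C_s\,\Delta)^{2}\,|\bar{S}|,
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}} .
```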
2013-01-01
Background Normal colon crypts consist of stem cells, proliferating cells, and differentiated cells. Abnormal rates of proliferation and differentiation can initiate colon cancer. We have measured the variation in the number of each of these cell types in multiple crypts in normal human biopsy specimens. This has provided the opportunity to produce a calibrated computational model that simulates cell dynamics in normal human crypts, and by changing model parameter values, to simulate the initiation and treatment of colon cancer. Results An agent-based model of stochastic cell dynamics in human colon crypts was developed in the multi-platform open-source application NetLogo. It was assumed that each cell’s probability of proliferation and probability of death is determined by its position in two gradients along the crypt axis, a divide gradient and a die gradient. A cell’s type is not intrinsic, but rather is determined by its position in the divide gradient. Cell types are dynamic, plastic, and inter-convertible. Parameter values were determined for the shape of each of the gradients, and for a cell’s response to the gradients. This was done by parameter sweeps that indicated the values that reproduced the measured number and variation of each cell type, and produced quasi-stationary stochastic dynamics. The behavior of the model was verified by its ability to reproduce the experimentally observed monoclonal conversion by neutral drift, the formation of adenomas resulting from mutations either at the top or bottom of the crypt, and by the robust ability of crypts to recover from perturbation by cytotoxic agents. One use of the virtual crypt model was demonstrated by evaluating different cancer chemotherapy and radiation scheduling protocols. Conclusions A virtual crypt has been developed that simulates the quasi-stationary stochastic cell dynamics of normal human colon crypts. It is unique in that it has been calibrated with measurements of human biopsy
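The monoclonal conversion by neutral drift mentioned above can be illustrated with a minimal Moran-type simulation; the compartment size, seed, and step cap below are arbitrary choices for illustration, not the paper's calibrated parameters:

```python
import random

def neutral_drift_fixation(n_stem=8, seed=3, max_steps=100_000):
    """Moran-type neutral drift in a stem-cell compartment: at each step
    a randomly chosen stem cell divides and its daughter replaces a
    randomly chosen stem cell. Every lineage is equally fit, yet the
    compartment inevitably becomes monoclonal. Returns the number of
    steps until only one clone remains (None if the cap is reached)."""
    rng = random.Random(seed)
    cells = list(range(n_stem))  # each cell starts as its own clone
    for step in range(1, max_steps + 1):
        cells[rng.randrange(n_stem)] = cells[rng.randrange(n_stem)]
        if len(set(cells)) == 1:
            return step
    return None
```

Because drift alone drives fixation, this reproduces, in miniature, the experimentally observed monoclonal conversion without any selective advantage.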
ERIC Educational Resources Information Center
Koo, Reginald; Jones, Martin L.
2011-01-01
Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
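One classic example of such a problem is the derangement problem: the probability that a uniformly random permutation of n items leaves no item in its original position tends to 1/e as n grows. A quick Monte Carlo check (n and trial count are arbitrary choices):

```python
import random

def no_fixed_point_prob(n=10, trials=200_000, seed=7):
    """Estimate the probability that a uniformly random permutation of
    n items is a derangement (no element stays in its own position)."""
    rng = random.Random(seed)
    count = 0
    items = list(range(n))
    for _ in range(trials):
        perm = items[:]
        rng.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            count += 1
    return count / trials
```

Already at n = 10 the exact probability, the alternating sum of (-1)^k / k! for k from 0 to n, agrees with 1/e ≈ 0.3679 to four decimal places, which the simulation confirms.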
NASA Astrophysics Data System (ADS)
Long, Yongjun; Wei, Xiaohui; Wang, Chunlei; Dai, Xin; Wang, Shigang
2014-05-01
A new rotary normal stress electromagnetic actuator for fast steering mirror (FSM) is presented. The study includes concept design, actuating torque modeling, actuator design, and validation with numerical simulation. To achieve an FSM with compact structure and high bandwidth, the actuator is designed with a cross armature magnetic topology. By introducing bias flux generated by four permanent magnets (PMs), the actuator has a high force density similar to a solenoid but also essentially linear characteristics similar to a voice coil actuator, leading to a simple control algorithm. The actuating torque output is a linear function of both driving current and rotation angle and is formulated with the equivalent magnetic circuit method. To i
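Consistent with the abstract's statement that the torque is linear in both driving current and rotation angle, a flux-biased actuator of this general type has a linearized torque of the following generic form (the coefficients are symbolic placeholders, not the paper's derived expressions):

```latex
% Linearized torque: K_i current stiffness (from the PM bias flux),
% K_\theta angular stiffness of the magnetic circuit.
T(i,\theta) \approx K_i\, i + K_\theta\, \theta .
```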