Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. The performance of each learning method was evaluated by a repeated cross-validation scheme to obtain a fair comparison among methods. Results: The LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model, as does the stepwise method, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
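A minimal sketch of the LASSO idea the abstract compares against stepwise selection: L1-penalised logistic regression shrinks uninformative coefficients exactly to zero, giving a sparse, interpretable model. This is an illustrative NumPy implementation on synthetic data; the variable layout, penalty strength, and cohort are invented and do not reproduce the paper's covariates or tuning procedure.

```python
import numpy as np

def lasso_logistic(X, y, lam=0.05, lr=0.1, iters=3000):
    """L1-penalised logistic regression via proximal gradient (ISTA).
    The intercept is left unpenalised; lam controls sparsity."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y) / n)
        b -= lr * np.mean(p - y)
        # soft-thresholding: the proximal operator of the L1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w, b

# synthetic cohort: only the first two of eight covariates carry signal
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 8))
logits = 2.0 * X[:, 0] - 2.0 * X[:, 1]
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-logits))).astype(float)
w, b = lasso_logistic(X, y)
```

Unlike stepwise selection, which adds or drops one covariate at a time, the L1 penalty performs selection and estimation jointly, which is what makes the resulting model both sparse and stable under cross-validation.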
Improving Normal Tissue Complication Probability Models: The Need to Adopt a 'Data-Pooling' Culture
Deasy, Joseph O.; Bentzen, Soren M.; Jackson, Andrew; Ten Haken, Randall K.; Yorke, Ellen D.; Constine, Louis S.; Sharma, Ashish; Marks, Lawrence B.
2010-03-01
Clinical studies of the dependence of normal tissue response on dose-volume factors are often confusingly inconsistent, as the QUANTEC reviews demonstrate. A key opportunity to accelerate progress is to begin storing high-quality datasets in repositories. Using available technology, multiple repositories could be conveniently queried, without divulging protected health information, to identify relevant sources of data for further analysis. After obtaining institutional approvals, data could then be pooled, greatly enhancing the capability to construct predictive models that are more widely applicable and better powered to accurately identify key predictive factors (whether dosimetric, image-based, clinical, socioeconomic, or biological). Data pooling has already been carried out effectively in a few normal tissue complication probability studies and should become a common strategy.
Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto
2013-10-01
Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes statistical evidence of the indirect effect of lung size on radiation-induced heart toxicity.
Peeters, Stephanie; Hoogeman, Mischa S.; Heemsbergen, Wilma D.; Hart, Augustinus; Koper, Peter C.M.; Lebesque, Joos V. (E-mail: j.lebesque@nki.nl)
2006-09-01
Purpose: To analyze whether inclusion of predisposing clinical features in the Lyman-Kutcher-Burman (LKB) normal tissue complication probability (NTCP) model improves the estimation of late gastrointestinal toxicity. Methods and Materials: This study includes 468 prostate cancer patients participating in a randomized trial comparing 68 with 78 Gy. We fitted the probability of developing late toxicity within 3 years (rectal bleeding, high stool frequency, and fecal incontinence) with the original and a modified LKB model, in which a clinical feature (e.g., history of abdominal surgery) was taken into account by fitting subset-specific TD50s. The ratio of these TD50s is the dose-modifying factor for that clinical feature. Dose distributions of the anorectal wall (bleeding and frequency) and the anal wall (fecal incontinence) were used. Results: The modified LKB model gave significantly better fits than the original LKB model. Patients with a history of abdominal surgery had a lower tolerance to radiation than did patients without previous surgery, with a dose-modifying factor of 1.1 for bleeding and of 2.5 for fecal incontinence. The dose-response curve for bleeding was approximately two times steeper than that for frequency and three times steeper than that for fecal incontinence. Conclusions: Inclusion of predisposing clinical features significantly improved the estimation of the NTCP. For patients with a history of abdominal surgery, more severe dose constraints should therefore be used during treatment plan optimization.
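The modified LKB model described above can be sketched compactly: the dose-volume histogram is reduced to a generalized EUD, the NTCP is a probit function of that EUD, and a predisposing clinical feature rescales TD50 through its dose-modifying factor (DMF). The parameter values below are illustrative placeholders, not the paper's fitted values, and applying the DMF as a division of TD50 is one reading of "the ratio of subset-specific TD50s".

```python
import numpy as np
from scipy.stats import norm

def geud(doses, volumes, n):
    """Generalized equivalent uniform dose from a differential DVH
    (fractional volumes summing to 1)."""
    doses, volumes = np.asarray(doses, float), np.asarray(volumes, float)
    return np.sum(volumes * doses ** (1.0 / n)) ** n

def lkb_ntcp(doses, volumes, td50, m, n, dmf=1.0):
    """LKB NTCP; a DMF > 1 models a predisposed subgroup whose
    effective tolerance dose is TD50 / DMF."""
    eff_td50 = td50 / dmf
    t = (geud(doses, volumes, n) - eff_td50) / (m * eff_td50)
    return norm.cdf(t)
```

For uniform irradiation the gEUD equals the delivered dose, so NTCP is exactly 0.5 at the effective TD50; a DMF of 2.5, as reported for fecal incontinence after abdominal surgery, shifts the whole dose-response curve to substantially lower doses.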
Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan
2013-02-01
Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue
Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.
2012-03-01
Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefited significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, rectal bleeding fits had the highest AUC (0.77); it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions: Comparable
Xu ZhiYong; Liang Shixiong; Zhu Ji; Zhu Xiaodong; Zhao Jiandong; Lu Haijie; Yang Yunli; Chen Long; Wang Anyu; Fu Xiaolong; Jiang Guoliang (E-mail: jianggl@21cn.com)
2006-05-01
Purpose: To describe the probability of RILD by application of the Lyman-Kutcher-Burman normal-tissue complication (NTCP) model for primary liver carcinoma (PLC) treated with hypofractionated three-dimensional conformal radiotherapy (3D-CRT). Methods and Materials: A total of 109 PLC patients treated by 3D-CRT were followed for RILD. Of these patients, 93 had liver cirrhosis of Child-Pugh Grade A, and 16 had Child-Pugh Grade B. The Michigan NTCP model was used to predict the probability of RILD, and then the modified Lyman NTCP model was generated for Child-Pugh A and Child-Pugh B patients by maximum-likelihood analysis. Results: Of all patients, 17 developed RILD, of whom 8 were Child-Pugh Grade A and 9 were Child-Pugh Grade B. The prediction of RILD by the Michigan model was underestimated for PLC patients. The modified n, m, and TD5(1) were 1.1, 0.28, and 40.5 Gy and 0.7, 0.43, and 23 Gy for patients with Child-Pugh A and B, respectively, which yielded better estimations of RILD probability. The hepatic tolerable doses (TD5) would be a mean dose to normal liver (MDTNL) of 21 Gy and 6 Gy, respectively, for Child-Pugh A and B patients. Conclusions: The Michigan model was probably not suited to predicting RILD in PLC patients. A modified Lyman NTCP model for RILD was recommended.
2012-01-01
Background With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman–Kutcher–Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Methods Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson’s chi-squared test, Nagelkerke’s R2, the area under the receiver operating characteristic curve, and the Hosmer–Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson’s chi-squared test was used to test the goodness of fit and association. Results Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in 50% probability of a complication (TD50) and the slope of the dose–response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with an NPV of 100%, using either the QoL or SEF dataset. Conclusions Our study shows the agreement between the NTCP
Robertson, John M.; Soehn, Matthias; Yan Di
2010-05-01
Purpose: Understanding the dose-volume relationship of small bowel irradiation and severe acute diarrhea may help reduce the incidence of this side effect during adjuvant treatment for rectal cancer. Methods and Materials: Consecutive patients treated curatively for rectal cancer were reviewed, and the maximum grade of acute diarrhea was determined. The small bowel was outlined on the treatment planning CT scan, and a dose-volume histogram was calculated for the initial pelvic treatment (45 Gy). Logistic regression models were fitted for varying cutoff-dose levels from 5 to 45 Gy in 5-Gy increments. The model with the highest LogLikelihood was used to develop a cutoff-dose normal tissue complication probability (NTCP) model. Results: There were a total of 152 patients (48% preoperative, 47% postoperative, 5% other), predominantly treated prone (95%) with a three-field technique (94%) and a protracted venous infusion of 5-fluorouracil (78%). Acute Grade 3 diarrhea occurred in 21%. The largest LogLikelihood was found for the cutoff-dose logistic regression model with 15 Gy as the cutoff-dose, although the models for 20 Gy and 25 Gy had similar significance. According to this model, highly significant correlations (p <0.001) between small bowel volumes receiving at least 15 Gy and toxicity exist in the considered patient population. Similar findings applied to both the preoperatively (p = 0.001) and postoperatively irradiated groups (p = 0.001). Conclusion: The incidence of Grade 3 diarrhea was significantly correlated with the volume of small bowel receiving at least 15 Gy using a cutoff-dose NTCP model.
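The cutoff-dose selection procedure above amounts to: for each candidate cutoff, fit a one-variable logistic model of toxicity on the small-bowel volume receiving at least that dose, then keep the cutoff with the highest log-likelihood. A sketch on simulated DVH data (the cohort, volumes, and coefficients are invented; V15 is deliberately built in as the true driver, so the scan should recover a cutoff near 15 Gy):

```python
import numpy as np

def logistic_loglik(x, y, iters=2000, lr=0.1):
    """Fit y ~ logistic(b0 + b1*x) by gradient ascent on a
    standardized predictor; return the maximized log-likelihood."""
    x = (x - x.mean()) / x.std()
    b0 = b1 = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 += lr * np.mean(y - p)
        b1 += lr * np.mean((y - p) * x)
    p = np.clip(1.0 / (1.0 + np.exp(-(b0 + b1 * x))), 1e-9, 1 - 1e-9)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
v15 = rng.uniform(20.0, 350.0, 150)  # cc of small bowel receiving >= 15 Gy
y = (rng.random(150) < 1.0 / (1.0 + np.exp(-(v15 - 180.0) / 50.0))).astype(float)
lls = {}
for c in range(5, 50, 5):
    # volumes at other cutoffs: correlated with, but noisier than, V15
    v = v15 if c == 15 else np.clip(
        v15 * np.exp(-(c - 15) / 40.0) + rng.normal(0, 80, 150), 0, None)
    lls[c] = logistic_loglik(v, y)
best = max(lls, key=lls.get)
```

Scanning many correlated cutoffs this way is why the abstract notes that neighbouring models (20 Gy, 25 Gy) reached similar significance: adjacent dose-volume metrics carry largely the same information.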
Bazan, Jose G.; Luxton, Gary; Kozak, Margaret M.; Anderson, Eric M.; Hancock, Steven L.; Kapp, Daniel S.; Kidd, Elizabeth A.; Koong, Albert C.; Chang, Daniel T.
2013-12-01
Purpose: To determine how chemotherapy agents affect radiation dose parameters that correlate with acute hematologic toxicity (HT) in patients treated with pelvic intensity modulated radiation therapy (P-IMRT) and concurrent chemotherapy. Methods and Materials: We assessed HT in 141 patients who received P-IMRT for anal, gynecologic, rectal, or prostate cancers, 95 of whom received concurrent chemotherapy. Patients were separated into 4 groups: mitomycin (MMC) + 5-fluorouracil (5FU, 37 of 141), platinum ± 5FU (Cis, 32 of 141), 5FU (26 of 141), and P-IMRT alone (46 of 141). The pelvic bone was contoured as a surrogate for pelvic bone marrow (PBM) and divided into subsites: ilium, lower pelvis, and lumbosacral spine (LSS). The volumes of each region receiving 5-40 Gy were calculated. The endpoint for HT was grade ≥3 (HT3+) leukopenia, neutropenia or thrombocytopenia. Normal tissue complication probability was calculated using the Lyman-Kutcher-Burman model. Logistic regression was used to analyze association between HT3+ and dosimetric parameters. Results: Twenty-six patients experienced HT3+: 10 of 37 (27%) MMC, 14 of 32 (44%) Cis, 2 of 26 (8%) 5FU, and 0 of 46 P-IMRT. PBM dosimetric parameters were correlated with HT3+ in the MMC group but not in the Cis group. LSS dosimetric parameters were well correlated with HT3+ in both the MMC and Cis groups. Constrained optimization (0
Bazan, Jose G.; Luxton, Gary; Mok, Edward C.; Koong, Albert C.; Chang, Daniel T.
2012-11-01
Purpose: To identify dosimetric parameters that correlate with acute hematologic toxicity (HT) in patients with squamous cell carcinoma of the anal canal treated with definitive chemoradiotherapy (CRT). Methods and Materials: We analyzed 33 patients receiving CRT. Pelvic bone (PBM) was contoured for each patient and divided into subsites: ilium, lower pelvis (LP), and lumbosacral spine (LSS). The volume of each region receiving at least 5, 10, 15, 20, 30, and 40 Gy was calculated. Endpoints included grade ≥3 HT (HT3+) and hematologic event (HE), defined as any grade ≥2 HT with a modification in chemotherapy dose. Normal tissue complication probability (NTCP) was evaluated with the Lyman-Kutcher-Burman (LKB) model. Logistic regression was used to test associations between HT and dosimetric/clinical parameters. Results: Nine patients experienced HT3+ and 15 patients experienced HE. Constrained optimization of the LKB model for HT3+ yielded the parameters m = 0.175, n = 1, and TD50 = 32 Gy. With this model, mean PBM doses of 25 Gy, 27.5 Gy, and 31 Gy result in a 10%, 20%, and 40% risk of HT3+, respectively. Compared with patients with mean PBM dose of <30 Gy, patients with mean PBM dose ≥30 Gy had a 14-fold increase in the odds of developing HT3+ (p = 0.005). Several low-dose radiation parameters (i.e., PBM-V10) were associated with the development of HT3+ and HE. No association was found with the ilium, LP, or clinical factors. Conclusions: LKB modeling confirms the expectation that PBM acts like a parallel organ, implying that the mean dose to the organ is a useful predictor for toxicity. Low-dose radiation to the PBM was also associated with clinically significant HT. Keeping the mean PBM dose <22.5 Gy and <25 Gy is associated with a 5% and 10% risk of HT, respectively.
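With n = 1 the generalized EUD reduces to the mean dose, so the fitted LKB model is simply a probit in mean PBM dose. Plugging in the reported parameters (m = 0.175, TD50 = 32 Gy) reproduces the quoted 10%, 20%, and 40% risks at 25, 27.5, and 31 Gy to within rounding:

```python
from scipy.stats import norm

def ntcp_mean_dose(mean_dose, td50=32.0, m=0.175):
    """LKB NTCP with n = 1: a probit function of the mean organ dose."""
    return norm.cdf((mean_dose - td50) / (m * td50))

for d in (25.0, 27.5, 31.0):
    print(f"mean PBM dose {d:4.1f} Gy -> NTCP {ntcp_mean_dose(d):.2f}")
```

The computed risks come out at roughly 0.11, 0.21, and 0.43, consistent with the abstract's rounded figures.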
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
NASA Astrophysics Data System (ADS)
Trojková, Darina; Judas, Libor; Trojek, Tomáš
2014-11-01
Minimizing the late rectal toxicity of prostate cancer patients is a very important and widely discussed topic. Normal tissue complication probability (NTCP) models can be used to evaluate competing treatment plans. In our work, the parameters of the Lyman-Kutcher-Burman (LKB), Källman, and Logit+EUD models are optimized by minimizing the Brier score for a group of 302 prostate cancer patients. The NTCP values are calculated and are compared with the values obtained using previously published values for the parameters. χ² statistics were calculated as a check of the goodness of the optimization.
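Brier-score minimization, as used above for parameter optimization, can be sketched in a few lines: the score is the mean squared difference between predicted NTCP and the observed binary outcome, minimized over the model parameters. The cohort and the "true" parameters below are simulated for illustration, not the paper's 302-patient data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def brier(params, eud, outcome):
    """Mean squared error between LKB-probit predictions and outcomes."""
    td50, m = params
    p = norm.cdf((eud - td50) / (m * td50))
    return np.mean((p - outcome) ** 2)

# simulate a cohort from a known dose-response, then recover it
rng = np.random.default_rng(1)
eud = rng.uniform(40.0, 110.0, 500)
y = (rng.random(500) < norm.cdf((eud - 80.0) / (0.2 * 80.0))).astype(float)
fit = minimize(brier, x0=[70.0, 0.3], args=(eud, y), method="Nelder-Mead")
td50_hat, m_hat = fit.x
```

Because the Brier score is a proper scoring rule, minimizing it recovers the generating parameters as the sample grows, which is what makes it a reasonable alternative to maximum likelihood here.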
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-01-01
Background and Purpose Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Material and Methods Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. Results The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFC model using standard dose-volume metrics (RFC-standard) had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. Conclusions The RFC-standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. PMID:27240717
Disjunctive Normal Shape Models
Ramesh, Nisha; Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga
2016-01-01
A novel implicit parametric shape model is proposed for segmentation and analysis of medical images. Functions representing the shape of an object can be approximated as a union of N polytopes. Each polytope is obtained by the intersection of M half-spaces. The shape function can be approximated as a disjunction of conjunctions, using the disjunctive normal form. The shape model is initialized using seed points defined by the user. We define a cost function based on the Chan-Vese energy functional. The model is differentiable, hence, gradient based optimization algorithms are used to find the model parameters. PMID:27403233
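The construction above, a shape as a union of N polytopes with each polytope an intersection of M half-spaces, is literally a disjunction of conjunctions. A boolean sketch follows; the paper's actual model replaces the hard logic with differentiable sigmoid approximations so that gradient-based optimization applies, and the half-space values here are a made-up example:

```python
import numpy as np

def in_polytope(x, halfspaces):
    """Conjunction: x satisfies every half-space a·x + b >= 0."""
    return all(np.dot(a, x) + b >= 0 for a, b in halfspaces)

def in_shape(x, polytopes):
    """Disjunction: x lies in at least one polytope."""
    return any(in_polytope(x, hs) for hs in polytopes)

# two unit squares, [0,1]x[0,1] and [2,3]x[0,1], forming one (disconnected) shape
sq1 = [([1, 0], 0.0), ([0, 1], 0.0), ([-1, 0], 1.0), ([0, -1], 1.0)]
sq2 = [([1, 0], -2.0), ([0, 1], 0.0), ([-1, 0], 3.0), ([0, -1], 1.0)]
shape = [sq1, sq2]
```

The union structure is what lets a single shape function represent disconnected or non-convex objects while each polytope stays a simple convex piece.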
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, and joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed-circuit television experiment are included.
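Rectangular-region probabilities of the kind the program computes follow from the bivariate CDF by inclusion-exclusion: P(a1 < X ≤ b1, a2 < Y ≤ b2) = F(b1,b2) − F(a1,b2) − F(b1,a2) + F(a1,a2). A sketch with SciPy standing in for the original Fortran routines:

```python
from scipy.stats import multivariate_normal

def rect_prob(a, b, mean, cov):
    """P(a[0] < X <= b[0], a[1] < Y <= b[1]) for a bivariate normal."""
    F = lambda x, y: multivariate_normal.cdf([x, y], mean=mean, cov=cov)
    return F(b[0], b[1]) - F(a[0], b[1]) - F(b[0], a[1]) + F(a[0], a[1])
```

For two independent standard normals over the square [-1, 1]², this gives (2Φ(1) − 1)² ≈ 0.466, a handy sanity check.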
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
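For a concrete instance of the zero-inflated binomial mixture likelihood: with occupancy probability ψ and a Beta mixing distribution for detection probability p, integrating over p gives a beta-binomial, and unoccupied sites contribute mass only at zero detections. A sketch (the symbols ψ, a, b and the toy numbers are illustrative, not from the article):

```python
import numpy as np
from scipy.stats import betabinom

def occupancy_loglik(y, J, psi, a, b):
    """Log-likelihood of detection counts y (out of J visits per site)
    under a zero-inflated beta-binomial site-occupancy model."""
    y = np.asarray(y)
    mix = psi * betabinom.pmf(y, J, a, b) + (1.0 - psi) * (y == 0)
    return np.sum(np.log(mix))
```

With ψ = 0.5 and p uniform on (0, 1) (a = b = 1), a site with zero detections in three visits has likelihood 0.5 · 1/4 + 0.5 = 0.625, illustrating how the zero count confounds "unoccupied" with "occupied but undetected".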
Computational Modelling and Simulation Fostering New Approaches in Learning Probability
ERIC Educational Resources Information Center
Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid
2006-01-01
Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the n × n correlation matrix of the χi and the standardized multivariate normal density…
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
A Quantum Probability Model of Causal Reasoning
Trueblood, Jennifer S.; Busemeyer, Jerome R.
2012-01-01
People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects thus proving to be a viable new candidate for modeling human judgment. PMID:22593747
Molecular clouds have power-law probability distribution functions (not log-normal)
NASA Astrophysics Data System (ADS)
Alves, Joao; Lombardi, Marco; Lada, Charles
2015-08-01
We investigate the shape of the probability distribution of column densities (PDF) in molecular clouds. Through the use of low-noise, extinction-calibrated Planck-Herschel emission data for eight molecular clouds, we demonstrate that, contrary to common belief, the PDFs of molecular clouds are not described well by log-normal functions, but are instead power laws with exponents close to two and with breaks between AK ≃ 0.1 and 0.2 mag, thus close to the CO self-shielding limit and not far from the transition between molecular and atomic gas. Additionally, we argue that the intrinsic functional form of the PDF cannot be securely determined below AK ≃ 0.1 mag, limiting our ability to investigate more complex models for the shape of the cloud PDF.
Tai An; Erickson, Beth; Li, X. Allen
2009-05-01
Purpose: The ability to predict normal tissue complication probability (NTCP) is essential for NTCP-based treatment planning. The purpose of this work is to estimate the Lyman NTCP model parameters for liver irradiation from published clinical data of different fractionation regimens. A new expression of normalized total dose (NTD) is proposed to convert NTCP data between different treatment schemes. Methods and Materials: The NTCP data of radiation-induced liver disease (RILD) from external beam radiation therapy for primary liver cancer patients were selected for analysis. The data were collected from 4 institutions for tumor sizes in the range of 8-10 cm. The dose per fraction ranged from 1.5 Gy to 6 Gy. A modified linear-quadratic model with two components corresponding to radiosensitive and radioresistant cells in the normal liver tissue was proposed to understand the new NTD formalism. Results: There are five parameters in the model: TD50, m, n, α/β and f. With the two parameters n and α/β fixed to be 1.0 and 2.0 Gy, respectively, the extracted parameters from the fitting are TD50(1) = 40.3 ± 8.4 Gy, m = 0.36 ± 0.09, f = 0.156 ± 0.074 Gy and TD50(1) = 23.9 ± 5.3 Gy, m = 0.41 ± 0.15, f = 0.0 ± 0.04 Gy for patients with liver cirrhosis scores of Child-Pugh A and Child-Pugh B, respectively. The fitting results showed that the liver cirrhosis score significantly affects the fractional dose dependence of NTD. Conclusion: The Lyman parameters generated presently and the new form of NTD may be used to predict NTCP for treatment planning of innovative liver irradiation with different fractionations, such as hypofractionated stereotactic body radiation therapy.
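For reference, the conventional LQ-based normalized total dose (EQD2) that the proposed NTD expression generalizes converts a regimen of total dose D delivered in fractions of size d into the isoeffective dose in 2-Gy fractions. The abstract's modified two-component form with the extra parameter f is not reproduced here; this is only the standard formula:

```python
def ntd(total_dose, dose_per_fraction, alpha_beta=2.0):
    """Normalized total dose (EQD2): the isoeffective dose delivered
    in 2 Gy fractions under the linear-quadratic model."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)
```

For example, 45 Gy at 3 Gy/fraction with α/β = 2 Gy maps to 56.25 Gy in 2-Gy equivalents, while a 2 Gy/fraction schedule is returned unchanged.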
PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties
Caron, D. S.; Browne, E.; Norman, E. B.
2009-08-21
The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
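As a rough illustration of what such a normalization involves (PABS's actual procedures are given in its own documentation and may differ), relative intensities can be scaled to sum to one, with first-order propagation of their uncertainties under an assumption of independent inputs:

```python
from math import sqrt

def normalize_probabilities(rel, sigma):
    """Scale relative emission probabilities to an absolute scale (sum = 1)
    and propagate 1-sigma uncertainties to first order.
    Illustrative sketch only, assuming independent relative intensities."""
    s = sum(rel)
    probs, errs = [], []
    for i, (ri, si) in enumerate(zip(rel, sigma)):
        probs.append(ri / s)
        # partials: d(p_i)/d(r_i) = (s - r_i)/s^2 ; d(p_i)/d(r_j) = -r_i/s^2
        var = ((s - ri) / s**2 * si) ** 2
        var += sum((ri / s**2 * sj) ** 2 for j, sj in enumerate(sigma) if j != i)
        errs.append(sqrt(var))
    return probs, errs
```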
Social Science and the Bayesian Probability Explanation Model
NASA Astrophysics Data System (ADS)
Yin, Jie; Zhao, Lei
2014-03-01
C. G. Hempel, one of the logical empiricists, built his probability explanation model on the empiricist view of probability; this model encountered many difficulties in scientific explanation that Hempel found hard to defend. Built on Bayesian probability theory and the subjectivist view of probability, the Bayesian probability model instead provides an approach to explanation grounded in subjective probability. On the one hand, this probability model establishes the epistemological status of the subject in social science; on the other hand, it provides a feasible explanation model for social scientific explanation, which has important methodological significance.
Jensen, Ingelise; Carl, Jesper; Lund, Bente; Larsen, Erik H.; Nielsen, Jane
2011-07-01
Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications.
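The Poisson TCP model with linear-quadratic cell kill used in such planning comparisons can be sketched as follows; the clonogen number and radiosensitivities below are illustrative assumptions, not the study's values:

```python
from math import exp

def poisson_tcp(n_fractions, dose_per_fraction, clonogens, alpha, beta):
    """Poisson TCP with LQ survival:
    SF = exp(-n*(alpha*d + beta*d^2)); TCP = exp(-N0 * SF)."""
    survival = exp(-n_fractions * (alpha * dose_per_fraction
                                   + beta * dose_per_fraction ** 2))
    return exp(-clonogens * survival)

# 82 Gy in 41 fractions, as in the plans above, with assumed radiobiology
tcp = poisson_tcp(41, 2.0, clonogens=1e7, alpha=0.3, beta=0.03)
```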
Model estimates hurricane wind speed probabilities
NASA Astrophysics Data System (ADS)
Murnane, Richard J.; Barton, Chris; Collins, Eric; Donnelly, Jeffrey; Elsner, James; Emanuel, Kerry; Ginis, Isaac; Howard, Susan; Landsea, Chris; Liu, Kam-biu; Malmquist, David; McKay, Megan; Michaels, Anthony; Nelson, Norm; O'Brien, James; Scott, David; Webb, Thompson, III
In the United States, intense hurricanes (category 3, 4, and 5 on the Saffir/Simpson scale) with winds greater than 50 m s⁻¹ have caused more damage than any other natural disaster [Pielke and Pielke, 1997]. Accurate estimates of wind speed exceedance probabilities (WSEP) due to intense hurricanes are therefore of great interest to (re)insurers, emergency planners, government officials, and populations in vulnerable coastal areas. The historical record of U.S. hurricane landfall is relatively complete only from about 1900, and most model estimates of WSEP are derived from this record. During the 1899-1998 period, only two category-5 and 16 category-4 hurricanes made landfall in the United States. The historical record therefore provides only a limited sample of the most intense hurricanes.
Datamining approaches for modeling tumor control probability
Naqa, Issam El; Deasy, Joseph O.; Mu, Yi; Huang, Ellen; Hope, Andrew J.; Lindsay, Patricia E.; Apte, Aditya; Alaly, James; Bradley, Jeffrey D.
2016-01-01
Background Tumor control probability (TCP) following radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Material and methods Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Results Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction, with rs = 0.68 on leave-one-out testing compared to logistic regression (rs = 0.4), Poisson-based TCP (rs = 0.33), and the cell kill equivalent uniform dose model (rs = 0.17). Conclusions The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications. PMID:20192878
Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L.
2014-01-01
Abstract Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls (Ambystoma mexicanum) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary‐housed males and group‐housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury probably explains these limb defects, and not abnormal development, because limbs with normal anatomy regenerated after performing rostral amputations. We infer that only 43% of AGSC larvae will present four anatomically normal looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury. PMID:25745564
Classical probability model for Bell inequality
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2014-04-01
We show that by taking into account the randomness of realization of experimental contexts it is possible to construct a common Kolmogorov space for data collected in these contexts, even though the contexts may be incompatible. We call such a construction "Kolmogorovization" of contextuality. This construction of a common probability space is applied to Bell's inequality. It is well known that its violation is a consequence of collecting statistical data in a few incompatible experiments. In experiments performed in quantum optics, contexts are determined by selections of pairs of angles (θi, θ'j) fixing orientations of polarization beam splitters. Contrary to common opinion, we show that statistical data corresponding to measurements of polarizations of photons in the singlet state, e.g., in the form of correlations, can be described in the classical probabilistic framework. The crucial point is that in constructing the common probability space one has to take into account not only the randomness of the source (as Bell did), but also the randomness of context realizations (in particular, realizations of pairs of angles (θi, θ'j)). One may (but need not) say that the randomness of "free will" has to be accounted for.
Regularized Finite Mixture Models for Probability Trajectories
ERIC Educational Resources Information Center
Shedden, Kerby; Zucker, Robert A.
2008-01-01
Finite mixture models are widely used in the analysis of growth trajectory data to discover subgroups of individuals exhibiting similar patterns of behavior over time. In practice, trajectories are usually modeled as polynomials, which may fail to capture important features of the longitudinal pattern. Focusing on dichotomous response measures, we…
Other probable cases in the subquark model
NASA Astrophysics Data System (ADS)
Li, Tie-Zhong
1982-05-01
Besides flavor, color, subcolor, generation, etc., there might be other, as yet unknown quantum numbers for subquarks, and the statistics that subquarks obey might not be of the Fermi type. Taking these factors into consideration, we re-study the Casalbuoni-Gatto model and obtain some different results.
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
This document replaces Cape Kennedy empirical wind component statistics which are presently being used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as an adequate statistical model to represent component winds at Cape Kennedy. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. Results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Cape Kennedy, Florida.
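Under the Gaussian model described here, a percentile table of this kind reduces to evaluating the normal quantile function; the mean and standard deviation below are hypothetical monthly component statistics, not values from the report:

```python
from statistics import NormalDist

def component_wind_percentiles(mean, sd, percentiles=(0.135, 50.0, 99.865)):
    """Component-wind values at the given percentiles under a Gaussian model.
    The 0.135 and 99.865 percentiles are the -3 sigma and +3 sigma points."""
    nd = NormalDist(mean, sd)
    return [nd.inv_cdf(p / 100.0) for p in percentiles]

# hypothetical monthly headwind statistics: mean 0 m/s, sd 10 m/s
table = component_wind_percentiles(0.0, 10.0)
```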
NASA Technical Reports Server (NTRS)
Falls, L. W.
1975-01-01
Vandenberg Air Force Base (AFB), California, wind component statistics are presented for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as a statistical model to represent component winds at Vandenberg AFB. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. The results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Vandenberg AFB.
Probability of Future Observations Exceeding One-Sided, Normal, Upper Tolerance Limits
Edwards, Timothy S.
2014-10-29
Normal tolerance limits are frequently used in dynamic environments specifications of aerospace systems as a method to account for aleatory variability in the environments. Upper tolerance limits, when used in this way, are computed from records of the environment and used to enforce conservatism in the specification by describing upper extreme values the environment may take in the future. Components and systems are designed to withstand these extreme loads to ensure they do not fail under normal use conditions. The degree of conservatism in the upper tolerance limits is controlled by specifying the coverage and confidence level (usually written in “coverage/confidence” form). In high-consequence systems it is common to specify tolerance limits at 95% or 99% coverage and confidence at the 50% or 90% level. Despite the ubiquity of upper tolerance limits in the aerospace community, analysts and decision-makers frequently misinterpret their meaning. The misinterpretation extends into the standards that govern much of the acceptance and qualification of commercial and government aerospace systems. As a result, the risk of a future observation of the environment exceeding the upper tolerance limit is sometimes significantly underestimated by decision makers. This note explains the meaning of upper tolerance limits and a related measure, the upper prediction limit. The objective of this work is to clarify the probability of exceeding these limits in flight so that decision-makers can better understand the risk associated with exceeding design and test levels during flight and balance the cost of design and development with that of mission failure.
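An upper tolerance limit of this kind is computed as x̄ + k·s, where k depends on sample size, coverage, and confidence. A sketch using the closed-form Natrella/NIST approximation for k (exact values require the noncentral t distribution):

```python
from math import sqrt
from statistics import NormalDist

def one_sided_tolerance_factor(n, coverage, confidence):
    """Approximate k for the one-sided normal upper tolerance limit
    x_bar + k*s (Natrella approximation; exact k comes from the
    noncentral t distribution)."""
    zp = NormalDist().inv_cdf(coverage)    # coverage quantile
    zg = NormalDist().inv_cdf(confidence)  # confidence quantile
    a = 1.0 - zg ** 2 / (2.0 * (n - 1))
    b = zp ** 2 - zg ** 2 / n
    return (zp + sqrt(zp ** 2 - a * b)) / a

# 95/95 limit from 10 environment records (exact tabulated k is about 2.911)
k = one_sided_tolerance_factor(10, 0.95, 0.95)
```

Note that at 50% confidence the factor collapses to the plain coverage quantile, which is one source of the misinterpretation the note discusses.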
Probability density function modeling for sub-powered interconnects
NASA Astrophysics Data System (ADS)
Pater, Flavius; Amaricǎi, Alexandru
2016-06-01
This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit of switching delay versus probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.
Normalization of Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.
2011-01-01
Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities, which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities, by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
Gendist: An R Package for Generated Probability Distribution Models
Abu Bakar, Shaiful Anuar; Nadarajah, Saralees; ABSL Kamarul Adzhar, Zahrul Azmir; Mohamed, Ibrahim
2016-01-01
In this paper, we introduce the R package gendist that computes the probability density function, the cumulative distribution function, the quantile function and generates random values for several generated probability distribution models including the mixture model, the composite model, the folded model, the skewed symmetric model and the arc tan model. These models are extensively used in the literature and the R functions provided here are flexible enough to accommodate various univariate distributions found in other R packages. We also show its applications in graphing, estimation, simulation and risk measurements. PMID:27272043
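Of the generated families listed, the mixture model is the simplest: its density is f(x) = Σᵢ wᵢ fᵢ(x). A minimal sketch in Python (illustrative only; it is not gendist's R interface):

```python
from statistics import NormalDist

def mixture_pdf(x, weights, pdfs):
    """Density of a finite mixture, f(x) = sum_i w_i * f_i(x),
    where weights sum to one and each pdfs[i] is a component density."""
    return sum(w * f(x) for w, f in zip(weights, pdfs))

# example: equal-weight mixture of N(0,1) and N(3,1) evaluated at x = 0
f0 = mixture_pdf(0.0, [0.5, 0.5], [NormalDist(0, 1).pdf, NormalDist(3, 1).pdf])
```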
Review of Literature for Model Assisted Probability of Detection
Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.; Anderson, Michael T.
2014-09-30
This is a draft technical letter report, prepared for an NRC client, documenting a literature review of model-assisted probability of detection (MAPOD) for potential application to nuclear power plant components to improve field NDE performance estimations.
Aggregate and Individual Replication Probability within an Explicit Model of the Research Process
ERIC Educational Resources Information Center
Miller, Jeff; Schwarz, Wolf
2011-01-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
The Normalization Model of Attention
Reynolds, John H.; Heeger, David J.
2009-01-01
Attention has been found to have a wide variety of effects on the responses of neurons in visual cortex. We describe a model of attention that exhibits each of these different forms of attentional modulation, depending on the stimulus conditions and the spread (or selectivity) of the attention field in the model. The model helps reconcile proposals that have been taken to represent alternative theories of attention. We argue that the variety and complexity of the results reported in the literature emerge from the variety of empirical protocols that were used, such that the results observed in any one experiment depended on the stimulus conditions and the subject’s attentional strategy, a notion that we define precisely in terms of the attention field in the model, but that has not typically been completely under experimental control. PMID:19186161
Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.
Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi
2015-10-01
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. PMID:26010201
Semenenko, Vladimir A.; Tarima, Sergey S.; Devisetty, Kiran; Pelizzari, Charles A.; Liauw, Stanley L.
2013-03-15
Purpose: To perform validation of risk predictions for late rectal toxicity (LRT) in prostate cancer obtained using a new approach to synthesize published normal tissue complication data. Methods and Materials: A published study survey was performed to identify the dose-response relationships for LRT derived from nonoverlapping patient populations. To avoid mixing models based on different symptoms, the emphasis was placed on rectal bleeding. The selected models were used to compute the risk estimates of grade 2+ and grade 3+ LRT for an independent validation cohort composed of 269 prostate cancer patients with known toxicity outcomes. Risk estimates from single studies were combined to produce consolidated risk estimates. An agreement between the actuarial toxicity incidence 3 years after radiation therapy completion and single-study or consolidated risk estimates was evaluated using the concordance correlation coefficient. Goodness of fit for the consolidated risk estimates was assessed using the Hosmer-Lemeshow test. Results: A total of 16 studies of grade 2+ and 5 studies of grade 3+ LRT met the inclusion criteria. The consolidated risk estimates of grade 2+ and 3+ LRT were constructed using 3 studies each. For grade 2+ LRT, the concordance correlation coefficient for the consolidated risk estimates was 0.537 compared with 0.431 for the best-fit single study. For grade 3+ LRT, the concordance correlation coefficient for the consolidated risk estimates was 0.477 compared with 0.448 for the best-fit single study. No evidence was found for a lack of fit for the consolidated risk estimates using the Hosmer-Lemeshow test (P=.531 and P=.397 for grade 2+ and 3+ LRT, respectively). Conclusions: In a large cohort of prostate cancer patients, selected sets of consolidated risk estimates were found to be more accurate predictors of LRT than risk estimates derived from any single study.
UV Multi-scatter Propagation Model of Point Probability Method
NASA Astrophysics Data System (ADS)
Lu, Bai; Zhensen, Wu; Haiying, Li
Based on the Monte Carlo multi-scatter propagation model, an improved geometric model is proposed, refined by using the point probability method. The multiple-scatter propagation models and the single-scatter propagation model are compared in terms of calculation time and relative error. The effects of complex weather, obstacles, and different transmitter and receiver heights are discussed. It is shown that although the single-scatter propagation model can be evaluated easily by standard numerical integration, it cannot describe the general non-line-of-sight propagation problem, whereas the improved point-probability multi-scatter Monte Carlo model may be applied to the more general case.
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. PMID:25363706
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Linearity of Quantum Probability Measure and Hardy's Model
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo; Oh, C. H.; Zhang, Chengjie
2014-01-01
We re-examine the d = 4 hidden-variables model for a system of two spin-1/2 particles in view of the concrete model of Hardy, who analyzed the criterion of entanglement without referring to inequality. The basis of our analysis is the linearity of the probability measure related to the Born probability interpretation, which excludes the noncontextual hidden-variables model in d ≥ 3. To be specific, we note the inconsistency of the noncontextual hidden-variables model in d = 4 with the linearity of the quantum mechanical probability measure in the sense ⟨ψ|a·σ ⊗ b·σ|ψ⟩ + ⟨ψ|a·σ ⊗ b′·σ|ψ⟩ = ⟨ψ|a·σ ⊗ (b + b′)·σ|ψ⟩ for noncollinear b and b′. It is then shown that Hardy's model in d = 4 does not lead to a unique mathematical expression in the demonstration of the discrepancy of local realism (the hidden-variables model) with entanglement, and thus his proof is incomplete. We identify the origin of this nonuniqueness with the nonuniqueness of translating quantum mechanical expressions into expressions in the hidden-variables model, which results from the failure of the above linearity of the probability measure. In contrast, if the linearity of the probability measure is strictly imposed, which is tantamount to asking that the noncontextual hidden-variables model in d = 4 give the Clauser-Horne-Shimony-Holt (CHSH) inequality |⟨B⟩| ≤ 2 uniquely, it is shown that the hidden-variables model can describe only separable quantum mechanical states; this conclusion is in perfect agreement with the so-called Gisin's theorem, which states that |⟨B⟩| ≤ 2 implies separable states.
Gap probability - Measurements and models of a pecan orchard
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, Yi
1992-01-01
Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs of 50° by 135° view angle made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels, crown and leaf. Crown-level parameters include the shape of the crown envelope and spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
Simulation modeling of the probability of magmatic disruption of the potential Yucca Mountain Site
Crowe, B.M.; Perry, F.V.; Valentine, G.A.; Wallmann, P.C.; Kossik, R.
1993-11-01
The first phase of risk simulation modeling was completed for the probability of magmatic disruption of a potential repository at Yucca Mountain. E1, the recurrence rate of volcanic events, is modeled using bounds from active basaltic volcanic fields and midpoint estimates of E1. The cumulative probability curves for E1 are generated by simulation modeling using a form of a triangular distribution. The 50% estimates are about 5 to 8 × 10⁻⁶ events yr⁻¹. The simulation modeling shows that the cumulative probability distribution for E1 is more sensitive to the probability bounds than to the midpoint estimates. E2, the disruption probability, is modeled through risk simulation using a normal distribution and midpoint estimates from multiple alternative stochastic and structural models. The 50% estimate of E2 is 4.3 × 10⁻³. The resulting probability of magmatic disruption of the potential Yucca Mountain site is 2.5 × 10⁻⁸ yr⁻¹. This median estimate decreases to 9.6 × 10⁻⁹ yr⁻¹ if E1 is modified for the structural models used to define E2. The Repository Integration Program was tested to compare releases of a simulated repository (without volcanic events) to releases from time histories which may include volcanic disruptive events. Results show that the performance modeling can be used for sensitivity studies of volcanic effects.
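The two-factor simulation can be sketched as a Monte Carlo product of E1 and E2 draws; the triangular bounds and normal spread below are illustrative placeholders, not the study's elicited values:

```python
import random

def disruption_samples(n_trials, seed=42):
    """Annual disruption probability = E1 (volcanic event recurrence rate,
    triangular distribution) * E2 (disruption probability given an event,
    normal distribution truncated at zero). Parameters are illustrative."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_trials):
        e1 = rng.triangular(1e-6, 1e-5, 5e-6)      # low, high, mode
        e2 = max(0.0, rng.gauss(4.3e-3, 1.0e-3))   # truncated normal
        out.append(e1 * e2)
    return out

samples = disruption_samples(20_000)
median = sorted(samples)[len(samples) // 2]
```

The cumulative probability curve reported in the study corresponds to the empirical distribution of such samples.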
Model-independent trend of α-preformation probability
NASA Astrophysics Data System (ADS)
Qian, YiBin; Ren, ZhongZhou
2013-08-01
The α-preformation probability is deduced directly from experimental α decay energies and half-lives in an analytical way without any adjusted parameters. Several other model-deduced results are used for comparison with the present study. The key role played by shell effects in the α-preformation process is indicated in all these cases. In detail, the α-preformation factors of different theoretical extractions are found to behave similarly for a given isotopic chain, implying a model-independent trend of the preformation probability of the α particle. In addition, the formation probability of the heavier particle in cluster radioactivity is also obtained, and this confirms the relationship between the cluster preformation factor and the product of the cluster and daughter proton numbers.
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
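A simplified sketch of the feature construction: clip a difference array to [-T, T] and tabulate empirical one-step transition probabilities (horizontal direction only here; the threshold and toy array are illustrative):

```python
def transition_probability_matrix(arr, threshold=3):
    """Horizontal one-step Markov transition probability matrix of a
    thresholded difference array. Values are clipped to
    [-threshold, threshold], giving a (2T+1) x (2T+1) matrix whose
    rows are empirical conditional probabilities."""
    size = 2 * threshold + 1

    def clip(v):
        return max(-threshold, min(threshold, v))

    counts = [[0] * size for _ in range(size)]
    for row in arr:
        for a, b in zip(row, row[1:]):
            counts[clip(a) + threshold][clip(b) + threshold] += 1
    # normalize each row of counts to probabilities
    tpm = []
    for r in counts:
        total = sum(r)
        tpm.append([c / total if total else 0.0 for c in r])
    return tpm
```

The flattened matrix entries would then serve as the SVM feature vector, one matrix per direction and color component.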
Investigation of an empirical probability measure based test for multivariate normality
Booker, J.M.; Johnson, M.E.; Beckman, R.J.
1984-01-01
Foutz (1980) derived a goodness-of-fit test for a hypothesis specifying a continuous, p-variate distribution. The test statistic is both distribution-free and independent of p. In adapting the Foutz test for multivariate normality, we consider using χ² and rescaled beta variates in constructing statistically equivalent blocks. The Foutz test is compared to other multivariate normality tests developed by Hawkins (1981) and Malkovich and Afifi (1973). The set of alternative distributions tested includes Pearson type II and type VII, Johnson translations, Plackett, and distributions arising from Khintchine's theorem. Univariate alternatives from the general class developed by Johnson et al. (1980) were also used. An empirical study confirms the independence of the test statistic from p even when parameters are estimated. In general, the Foutz test is less conservative under the null hypothesis but has poorer power under most alternatives than the other tests.
Modeling highway travel time distribution with conditional probability models
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling; Han, Lee
2014-01-01
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains a truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of day and days of the week. This research focuses on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel-time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
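Under an independence assumption, which the study relaxes via link-to-link conditional probabilities, the route travel-time distribution is the convolution of the link distributions. A minimal sketch with hypothetical discretised PMFs:

```python
import numpy as np

# Hypothetical travel-time PMFs (minutes) for two successive links
link1 = np.array([0.0, 0.0, 0.1, 0.3, 0.4, 0.2])   # P(T1 = 0..5 min)
link2 = np.array([0.0, 0.2, 0.5, 0.3])             # P(T2 = 0..3 min)

# If T1 and T2 were independent, the route PMF would be their convolution
route = np.convolve(link1, link2)                  # P(T1 + T2 = 0..8 min)
mean_route = (np.arange(route.size) * route).sum() # mean route travel time
```

The paper's key refinement is to replace the independence assumption with an empirical conditional distribution P(T2 | T1) between contiguous segments, since upstream and downstream speeds are correlated.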
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of the line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions, which are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution, which can address a wide range of skewness and kurtosis values (in addition to matching the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and performance (receiver operating characteristic curves) in clutter- and noise-limited scenarios. This is, to our knowledge, the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. PMID:22535093
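The Johnson SU variate is a sinh transform of a standard normal, which is what gives it the flexible skewness and kurtosis, and the normal limit, mentioned above. A brief sketch with illustrative shape parameters (not values fitted to radiative-transfer moments):

```python
import numpy as np

def johnson_su_sample(gamma, delta, xi=0.0, lam=1.0, n=200_000, seed=1):
    """Johnson S_U: X = xi + lam * sinh((Z - gamma) / delta), Z ~ N(0, 1)."""
    z = np.random.default_rng(seed).standard_normal(n)
    return xi + lam * np.sinh((z - gamma) / delta)

x = johnson_su_sample(gamma=-1.0, delta=1.5)        # right-skewed, heavy-tailed
skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3

# As delta grows, sinh becomes locally linear and S_U approaches a normal
x0 = johnson_su_sample(gamma=0.0, delta=50.0)
skew0 = ((x0 - x0.mean()) ** 3).mean() / x0.std() ** 3
```

Matching the four parameters (gamma, delta, xi, lam) to the first four computed moments of the radiance is the fitting step the abstract refers to.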
Distributed estimation and joint probabilities estimation by entropy model
NASA Astrophysics Data System (ADS)
Fassinut-Mombot, B.; Zribi, M.; Choquel, J. B.
2001-05-01
This paper proposes the use of the Entropy Model for distributed estimation systems. The Entropy Model is an entropic technique based on the minimization of conditional entropy, developed for the Multi-Source/Sensor Information Fusion (MSIF) problem. We address the problem of distributed estimation from independent observations involving multiple sources, i.e., the problem of estimating or selecting one of several identity declarations, or hypotheses, concerning an observed object. Two problems are considered in the Entropy Model. First, in order to fuse observations using the Entropy Model, it is necessary to know or estimate the conditional probabilities and, equivalently, the joint probabilities. A common practice when estimating probability distributions from data without a priori knowledge is to prefer distributions that are as uniform as possible, that is, that have maximal entropy. Next, the problem of combining (or "fusing") observations relating to identity hypotheses and selecting the most appropriate hypothesis about the object's identity is addressed. Much future work remains, but the results indicate that the Entropy Model is a promising technique for distributed estimation.
Fixation probability in a two-locus intersexual selection model.
Durand, Guillermo; Lessard, Sabin
2016-06-01
We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. PMID:27059474
A propagation model of computer virus with nonlinear vaccination probability
NASA Astrophysics Data System (ADS)
Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi
2014-01-01
This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
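A minimal sketch of this kind of compartment model, with a hypothetical saturating vaccination probability p(I) = p_max·I/(c + I); the paper's specific nonlinear form and parameter values are not reproduced here:

```python
# Forward-Euler integration of a simple susceptible-infected-vaccinated
# sketch; all rate values are illustrative.
beta, gamma = 0.4, 0.1            # infection and cure rates
p_max, c = 0.3, 0.2               # vaccination probability saturates in I
dt = 0.01

def step(S, I, V):
    p = p_max * I / (c + I)       # nonlinear vaccination probability
    dS = -beta * S * I + gamma * I - p * S
    dI = beta * S * I - gamma * I
    dV = p * S
    return S + dt * dS, I + dt * dI, V + dt * dV

S, I, V = 0.99, 0.01, 0.0
for _ in range(50_000):           # integrate to t = 500
    S, I, V = step(S, I, V)
```

With these illustrative rates the initial reproduction number beta/gamma = 4 triggers an outbreak, after which vaccination drains the susceptible pool and the infection dies out, illustrating the kind of globally stable virus-free regime the qualitative analysis identifies.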
Thorburn, J; Bryman, I; Hahlin, M
1992-01-01
The probability of an unclear very early pregnancy being a normal intrauterine pregnancy was estimated using a logistic model. Five diagnostic measures of prognostic value were identified in the model: (i) daily change in human chorionic gonadotrophin (HCG), (ii) results of transvaginal ultrasound, (iii) vaginal bleeding, (iv) serum progesterone level and (v) risk score for ectopic pregnancy. With the use of this model, the probability of a normal intrauterine pregnancy has been estimated as 96.7%. PMID:1551947
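A logistic model of this sort combines the diagnostic measures linearly and maps the score through the logistic function. The coefficients and inputs below are hypothetical placeholders, not the published model:

```python
import math

def p_normal_iup(features, coefs, intercept):
    """P(normal intrauterine pregnancy) from a logistic model."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs: daily HCG change, ultrasound finding, bleeding,
# serum progesterone, ectopic-pregnancy risk score (coded numerically)
p = p_normal_iup([1.8, 1.0, 0.0, 60.0, 1.0],
                 coefs=[0.9, 1.2, -1.5, 0.02, -0.8],
                 intercept=-2.0)
```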
A model to assess dust explosion occurrence probability.
Hassan, Junaid; Khan, Faisal; Amyotte, Paul; Ferdous, Refaul
2014-03-15
Dust handling poses a potential explosion hazard in many industrial facilities. The consequences of a dust explosion are often severe and similar to those of a gas explosion; however, its occurrence is conditional on the presence of five elements: combustible dust, an ignition source, an oxidant, mixing and confinement. Dust explosion researchers have conducted experiments to study the characteristics of these elements and generate data on explosibility. These experiments are often costly, but the generated data have significant scope for estimating the probability of a dust explosion occurrence. This paper attempts to use existing information (experimental data) to develop a predictive model to assess the probability of a dust explosion occurrence in a given environment. The proposed model considers six key parameters of a dust explosion: dust particle diameter (PD), minimum ignition energy (MIE), minimum explosible concentration (MEC), minimum ignition temperature (MIT), limiting oxygen concentration (LOC) and explosion pressure (Pmax). A conditional probabilistic approach has been developed and embedded in the proposed model to generate a nomograph for assessing dust explosion occurrence. The generated nomograph provides a quick assessment technique to map the occurrence probability of a dust explosion for a given environment defined by the six parameters. PMID:24486616
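The five-element requirement lends itself to a simple conditional-product sketch: if each element's presence is assessed separately, the occurrence probability is the product of the per-element likelihoods. The values below are hypothetical; in the paper these likelihoods are derived from the six measured parameters:

```python
# Hypothetical per-element likelihoods for a given plant environment
elements = {
    "combustible dust": 0.9,   # e.g. actual concentration vs. MEC
    "ignition source": 0.4,    # e.g. available energy vs. MIE / MIT
    "oxidant": 0.95,           # e.g. ambient oxygen vs. LOC
    "mixing": 0.7,
    "confinement": 0.8,
}

p_explosion = 1.0
for p in elements.values():    # all five elements must coincide
    p_explosion *= p
```

Because the elements must coincide, the combined probability is always bounded by the least likely element, which is what makes the nomograph a useful screening tool.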
Opinion dynamics model with weighted influence: Exit probability and dynamics
NASA Astrophysics Data System (ADS)
Biswas, Soham; Sinha, Suman; Sen, Parongama
2013-08-01
We introduce a stochastic model of binary opinion dynamics in which the opinions are determined by the size of the neighboring domains. The exit probability here shows a step-function behavior, indicating the existence of a separatrix distinguishing two different regions of the basin of attraction. This behavior, in one dimension, is in contrast to other well-known opinion dynamics models, where no such behavior has been observed so far. The coarsening study of the model also yields novel exponent values. A lower value of the persistence exponent is obtained in the present model, which involves stochastic dynamics, compared to that in a similar model with deterministic dynamics. This apparently counterintuitive result is justified using further analysis. Based on these results, it is concluded that the proposed model belongs to a unique dynamical class.
Quantum Probability -- A New Direction for Modeling in Cognitive Science
NASA Astrophysics Data System (ADS)
Roy, Sisir
2014-07-01
Human cognition and its appropriate modeling remain a puzzling issue in research. Cognition depends on how the brain behaves at a particular instant and identifies and responds to a signal among the myriad noises present in the surroundings (external noise) as well as in the neurons themselves (internal noise). Thus it is not surprising to assume that this functionality involves various uncertainties, possibly a mixture of aleatory and epistemic uncertainties. It is also possible that a complicated pathway consisting of both types of uncertainties in continuum plays a major role in human cognition. For more than 200 years mathematicians and philosophers have been using probability theory to describe human cognition. Recently, in several experiments with human subjects, violations of traditional probability theory have been clearly revealed in many cases. A literature survey clearly suggests that classical probability theory fails to model human cognition beyond a certain limit. While the Bayesian approach may seem a promising candidate for this problem, the complete success story of Bayesian methodology is yet to be written. The major problem seems to be the presence of epistemic uncertainty and its effect on cognition at any given time. Moreover, the stochasticity in the model arises due to the unknown path or trajectory (the definite state of mind at each time point) a person is following. To this end, a generalized version of probability theory borrowing ideas from quantum mechanics may be a plausible approach. A superposition state in quantum theory permits a person to be in an indefinite state at each point of time. Such an indefinite state allows all the states to have the potential to be expressed at each moment. Thus a superposition state appears better able to represent the uncertainty, ambiguity or conflict experienced by a person at any moment, demonstrating that mental states follow quantum mechanics during perception and
A Normalization Model of Multisensory Integration
Ohshiro, Tomokazu; Angelaki, Dora E.; DeAngelis, Gregory C.
2011-01-01
Responses of neurons that integrate multiple sensory inputs are traditionally characterized in terms of a set of empirical principles. However, a simple computational framework that accounts for these empirical features of multisensory integration has not been established. We propose that divisive normalization, acting at the stage of multisensory integration, can account for many of the empirical principles of multisensory integration exhibited by single neurons, such as the principle of inverse effectiveness and the spatial principle. This model, which employs a simple functional operation (normalization) for which there is considerable experimental support, also accounts for the recent observation that the mathematical rule by which multisensory neurons combine their inputs changes with cue reliability. The normalization model, which makes a strong testable prediction regarding cross-modal suppression, may therefore provide a simple unifying computational account of the key features of multisensory integration by neurons. PMID:21552274
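The core of the normalization model is a combined sensory drive divided by the summed activity of the population. A minimal sketch with illustrative parameter values (not the fitted model) reproduces the principle of inverse effectiveness:

```python
def response(d1, d2, I1, I2, sigma=0.1, n=2.0):
    """Divisive normalization of two sensory channels:
    d1, d2 are modality dominance weights; I1, I2 are cue intensities."""
    drive = d1 * I1 + d2 * I2
    pool = I1 + I2                       # net activity of the normalization pool
    return drive ** n / (sigma ** n + pool ** n)

# Inverse effectiveness: the bimodal/unimodal response ratio is larger
# for weak stimuli than for strong ones
weak_ratio = response(1, 1, 0.1, 0.1) / response(1, 1, 0.1, 0.0)
strong_ratio = response(1, 1, 10.0, 10.0) / response(1, 1, 10.0, 0.0)
```

For strong inputs the normalization pool saturates the response, so adding a second cue yields little enhancement; for weak inputs the semi-saturation constant sigma dominates and combining cues is super-additive.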
NASA Astrophysics Data System (ADS)
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series, based on maximum rank with minimum value of the test statistics, three statistical goodness-of-fit tests were employed: the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the chi-square test (χ²). The best-fit probability distribution was identified from the highest overall score obtained across the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
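The fit-and-score workflow can be sketched with scipy. The data here are a synthetic stand-in series and only the normal candidate is shown; Anderson-Darling and chi-square scoring follow the same pattern:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a 29-year annual maximum-daily-rainfall series (mm)
rng = np.random.default_rng(42)
mdr = rng.normal(loc=80.0, scale=25.0, size=29)

# Fit the candidate distribution, then score the fit with the K-S test
loc, scale = stats.norm.fit(mdr)
ks = stats.kstest(mdr, "norm", args=(loc, scale))

# Return levels: the T-year MDR has annual exceedance probability 1/T
levels = {T: stats.norm.ppf(1.0 - 1.0 / T, loc, scale)
          for T in (2, 5, 10, 20, 25)}
```

The 2-year return level of a fitted normal is just its mean (the median), and the levels grow monotonically with return period, which mirrors the 50-114 mm progression reported above.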
Improving Conceptual Models Using AEM Data and Probability Distributions
NASA Astrophysics Data System (ADS)
Davis, A. C.; Munday, T. J.; Christensen, N. B.
2012-12-01
With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion satisfies the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, thus the minimum model has all layer parameters distributed about their mean parameter values with well-defined variance. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" This we can calculate through use of the erf or erfc functions. The covariance values of the inversion become marginalised in the integration: only the
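The "all layers below a cut-off" probability can be sketched with the error function. This simplified version assumes independent Gaussian marginals (the study works with the full posterior covariance) and hypothetical layer statistics:

```python
import math

def p_below(cutoff, mean, std):
    """P(x <= cutoff) for a Gaussian marginal, via the error function."""
    return 0.5 * (1.0 + math.erf((cutoff - mean) / (std * math.sqrt(2.0))))

# Hypothetical layer resistivity means and posterior stds (ohm-m)
layers = [(30.0, 5.0), (45.0, 10.0), (20.0, 4.0)]

# Treating the marginals as independent (no cross-layer covariance),
# P(all layers <= cutoff) is the product of the marginal probabilities
p_all = math.prod(p_below(50.0, m, s) for m, s in layers)
```

With correlated layers the product is replaced by an integral of the multivariate Gaussian over the box (-inf, cutoff]^n, which is where the posterior covariance enters.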
Defining prior probabilities for hydrologic model structures in UK catchments
NASA Astrophysics Data System (ADS)
Clements, Michiel; Pianosi, Francesca; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Booij, Martijn
2014-05-01
The selection of a model structure is an essential part of the hydrological modelling process. Recently, flexible modelling frameworks have been proposed in which hybrid model structures can be obtained by mixing together components from a suite of existing hydrological models. When sufficient and reliable data are available, this framework can be successfully utilised to identify the most appropriate structure, and associated optimal parameters, for a given catchment by maximizing the different models' ability to reproduce the desired range of flow behaviour. In this study, we use a flexible modelling framework to address a rather different question: can the most appropriate model structure be inferred a priori (i.e. without using flow observations) from catchment characteristics like topography, geology, land use, and climate? Furthermore, and more generally, can we define a priori probabilities of different model structures as a function of catchment characteristics? To address these questions we propose a two-step methodology and demonstrate it by application to a national database of meteo-hydrological data and catchment characteristics for 89 catchments across the UK. In the first step, each catchment is associated with its most appropriate model structure. We consider six possible structures obtained by combining two soil moisture accounting components widely used in the UK (Penman and PDM) and three different flow routing modules (linear, parallel, leaky). We measure the suitability of a model structure by the probability of finding behavioural parameterizations for that model structure when applied to the catchment under study. In the second step, we use regression analysis to establish a relation between selected model structures and the catchment characteristics. Specifically, we apply Classification And Regression Trees (CART) and show that three catchment characteristics, the Base Flow Index, the Runoff Coefficient and the mean Drainage Path Slope, can be used
Predictions of Geospace Drivers By the Probability Distribution Function Model
NASA Astrophysics Data System (ADS)
Bussy-Virat, C.; Ridley, A. J.
2014-12-01
Geospace drivers like the solar wind speed, interplanetary magnetic field (IMF), and solar irradiance have a strong influence on the density of the thermosphere and the near-Earth space environment. This has important consequences for the drag on satellites in low orbit and therefore on their position. One of the basic problems with space weather prediction is that these drivers can only be measured about one hour before they affect the environment. In order to allow for adequate planning by some members of the commercial, military, or civilian communities, reliable long-term space weather forecasts are needed. This study presents a model for predicting geospace drivers up to five days in advance. The model uses the same general technique to predict the solar wind speed, the three components of the IMF, and the solar irradiance F10.7. For instance, it uses probability distribution functions (PDFs) to relate the current solar wind speed and slope to the future solar wind speed, as well as the current solar wind speed to the solar wind speed one solar rotation in the future. The PDF Model has been compared to other models for predictions of the speed. It has been found to be better than using the current solar wind speed (i.e., persistence), and better than the Wang-Sheeley-Arge Model for prediction horizons of 24 hours. Once the drivers are predicted, and the uncertainties on the drivers are specified, the density in the thermosphere can be derived using various models of the thermosphere, such as the Global Ionosphere Thermosphere Model. In addition, uncertainties on the densities can be estimated, based on ensembles of simulations. From the density and uncertainty predictions, satellite positions, as well as the uncertainty in those positions, can be estimated. These can assist operators in determining the probability of collisions between objects in low Earth orbit.
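The PDF idea can be sketched on synthetic stand-in data: build an empirical conditional distribution of the next speed given the current speed, then forecast with its expectation. This one-step sketch with a fabricated AR(1) series only illustrates the mechanics, not the model's multi-day horizon or its slope and solar-rotation conditioning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an hourly solar wind speed series (km/s):
# an AR(1) process with strong persistence around 400 km/s
n = 20_000
v = np.empty(n)
v[0] = 450.0
for i in range(1, n):
    v[i] = 400.0 + 0.97 * (v[i - 1] - 400.0) + rng.normal(0.0, 15.0)

# Empirical conditional PDF: P(speed at t+1 | speed at t) via a 2-D histogram
edges = np.linspace(v.min(), v.max(), 41)
H, _, _ = np.histogram2d(v[:-1], v[1:], bins=[edges, edges])
P = H / np.clip(H.sum(axis=1, keepdims=True), 1.0, None)

# Forecast: expectation of the conditional PDF for the current speed bin
centers = 0.5 * (edges[:-1] + edges[1:])
i_now = min(np.digitize(v[-1], edges) - 1, len(centers) - 1)
forecast = (P[i_now] * centers).sum()
```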
Estimating transition probabilities among everglades wetland communities using multistate models
Hotaling, A.S.; Martin, J.; Kitchens, W.M.
2009-01-01
In this study we were able to provide the first estimates of transition probabilities between wet prairie and slough vegetative communities in Water Conservation Area 3A (WCA3A) of the Florida Everglades and to identify the hydrologic variables that determine these transitions. These estimates can be used in management models aimed at restoring the proportions of wet prairie and slough habitats to historical levels in the Everglades. To determine what was driving the transitions between wet prairie and slough communities, we evaluated three hypotheses: seasonality, impoundment, and wet and dry year cycles, using likelihood-based multistate models to determine the main driver of wet prairie conversion in WCA3A. The most parsimonious model included the effect of wet and dry year cycles on vegetative community conversions. Several ecologists have noted wet prairie conversion in southern WCA3A, but these are the first estimates of transition probabilities among these community types. In addition to being useful for management of the Everglades, we believe that our framework can be used to address management questions in other ecosystems. © 2009 The Society of Wetland Scientists.
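At their simplest, transition probabilities of this kind are maximum-likelihood estimates from pooled year-to-year state pairs; the multistate models add detection probability and hydrologic covariates on top of this. A sketch with made-up plot histories:

```python
import numpy as np

# Hypothetical yearly community states for four monitoring plots:
# 0 = wet prairie, 1 = slough
plots = np.array([
    [0, 0, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
])

# Maximum-likelihood transition probabilities from pooled year-to-year pairs
counts = np.zeros((2, 2))
for row in plots:
    for a, b in zip(row[:-1], row[1:]):
        counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # row i: P(next state | state i)
```

Covariate effects (e.g. wet vs. dry years) are then modelled by letting each row of P depend on the year's hydrologic conditions and comparing candidate models by likelihood.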
Louwe, R. J. W.; Wendling, M.; Herk, M. B. van; Mijnheer, B. J.
2007-04-15
Irradiation of the heart is one of the major concerns during radiotherapy of breast cancer. Three-dimensional (3D) treatment planning would therefore be useful but cannot always be performed for left-sided breast treatments, because CT data may not be available. However, even if 3D dose calculations are available and an estimate of the normal tissue damage can be made, uncertainties in patient positioning may significantly influence the heart dose during treatment. Therefore, 3D reconstruction of the actual heart dose during breast cancer treatment using electronic portal imaging device (EPID) dosimetry has been investigated. A previously described method to reconstruct the dose in the patient from treatment portal images at the radiological midsurface was used in combination with a simple geometrical model of the irradiated heart volume to enable calculation of dose-volume histograms (DVHs), to independently verify this aspect of the treatment without using 3D data from a planning CT scan. To investigate the accuracy of our method, the DVHs obtained with full 3D treatment planning system (TPS) calculations and those obtained after resampling the TPS dose in the radiological midsurface were compared for fifteen breast cancer patients for whom CT data were available. In addition, EPID dosimetry as well as 3D dose calculations using our TPS, film dosimetry, and ionization chamber measurements were performed in an anthropomorphic phantom. It was found that the dose reconstructed using EPID dosimetry and the dose calculated with the TPS agreed within 1.5% in the lung/heart region. The dose-volume histograms obtained with EPID dosimetry were used to estimate the normal tissue complication probability (NTCP) for late excess cardiac mortality. Although the accuracy of these NTCP calculations might be limited due to the uncertainty in the NTCP model, in combination with our portal dosimetry approach it allows incorporation of the actual heart dose. For the anthropomorphic
Recent Advances in Model-Assisted Probability of Detection
NASA Technical Reports Server (NTRS)
Thompson, R. Bruce; Brasche, Lisa J.; Lindgren, Eric; Swindell, Paul; Winfree, William P.
2009-01-01
The increased role played by probability of detection (POD) in structural integrity programs, combined with the significant time and cost associated with the purely empirical determination of POD, provides motivation for alternate means to estimate this important metric of NDE techniques. One approach to make the process of POD estimation more efficient is to complement limited empirical experiments with information from physics-based models of the inspection process or controlled laboratory experiments. The Model-Assisted Probability of Detection (MAPOD) Working Group was formed by the Air Force Research Laboratory, the FAA Technical Center, and NASA to explore these possibilities. Since the 2004 inception of the MAPOD Working Group, 11 meetings have been held in conjunction with major NDE conferences. This paper will review the accomplishments of this group, which includes over 90 members from around the world. Included will be a discussion of strategies developed to combine physics-based and empirical understanding, draft protocols that have been developed to guide application of the strategies, and demonstrations that have been or are being carried out in a number of countries. The paper will conclude with a discussion of future directions, which will include documentation of benefits via case studies and development of formal protocols for engineering practice, as well as a number of specific technical issues.
Normal brain ageing: models and mechanisms
Toescu, Emil C
2005-01-01
Normal ageing is associated with a degree of decline in a number of cognitive functions. Apart from the issues raised by the current attempts to expand the lifespan, understanding the mechanisms and the detailed metabolic interactions involved in the process of normal neuronal ageing continues to be a challenge. One model, supported by a significant amount of experimental evidence, views cellular ageing as a metabolic state characterized by an altered function of the metabolic triad: mitochondria–reactive oxygen species (ROS)–intracellular Ca2+. The perturbation in the relationship between the members of this metabolic triad generates a state of decreased homeostatic reserve, in which aged neurons can maintain adequate function during normal activity, as demonstrated by the fact that normal ageing is not associated with widespread neuronal loss, but become increasingly vulnerable to the effects of excessive metabolic loads, usually associated with trauma, ischaemia or neurodegenerative processes. This review will concentrate on some of the evidence showing altered mitochondrial function with ageing and also discuss some of the functional consequences that would result from such events, such as alterations in mitochondrial Ca2+ homeostasis, ATP production and generation of ROS. PMID:16321805
Probability of detection models for eddy current NDE methods
Rajesh, S.N.
1993-04-30
The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low, and complex defect shapes can be handled very easily. The results are also operator independent.
Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, F. E.; Malamud, B. D.
2012-04-01
During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse-gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m². This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to have an average density of 1 landslide km⁻², i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen in a 'T'-shaped configuration: one road of 1 x 4000 cells (5 m x 20 km) joined by another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km² region, the number of road blocks per iteration, NBL, ranges from 0 to 7. The average blockage area for the 500 iterations (ĀBL) is about 3000 m²
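The Monte-Carlo scheme can be sketched for a single straight road: sample landslide areas from a shifted inverse-gamma, drop them at random positions, and count intersections. The distribution parameters below are illustrative stand-ins, and landslides are idealised as circles rather than grid-cell footprints:

```python
import numpy as np

rng = np.random.default_rng(7)

# Landslide areas from a three-parameter inverse-gamma density with a
# power-law tail and a small-area rollover; parameter values are
# illustrative, not the fitted inventory values.
rho, a, s = 1.4, 1.28e-3, -1.32e-4            # shape, scale (km^2), shift (km^2)

def sample_areas(n):
    areas = (s + a / rng.gamma(rho, size=n)) * 1e6   # shifted inverse-gamma, m^2
    return np.clip(areas, 1.0, None)          # guard the shifted tail

n_iter, n_ls = 500, 400                       # iterations, landslides per event
road_y, half_w = 10_000.0, 2.5                # 20 km road at y = 10 km, 5 m wide

blocks = []
for _ in range(n_iter):
    r = np.sqrt(sample_areas(n_ls) / np.pi)   # circular landslide radii (m)
    yc = rng.uniform(0.0, 20_000.0, size=n_ls)
    blocks.append(int(np.sum(np.abs(yc - road_y) < r + half_w)))
blocks = np.array(blocks)                     # N_BL per iteration
```

With parameters of this order, most iterations produce only a handful of blockages, consistent with the 0 to 7 range reported above.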
NASA Astrophysics Data System (ADS)
Montanari, A.
2006-12-01
This contribution introduces a statistically based approach for uncertainty assessment in hydrological modeling, in an optimality context. Indeed, in several real-world applications, the user needs to select the model that is deemed to be the best possible choice according to a given goodness-of-fit criterion. In this case, it is extremely important to assess the model uncertainty, intended as the range around the model output within which the measured hydrological variable is expected to fall with a given probability. This indication allows the user to quantify the risk associated with a decision that is based on the model response. The technique proposed here infers the probability distribution of the hydrological model error through a nonlinear multiple regression approach, depending on an arbitrary number of selected conditioning variables. These may include the current and previous model output as well as internal state variables of the model. The purpose is to indirectly relate the model error to the sources of uncertainty, through the conditioning variables. The method can be applied to any model of arbitrary complexity, including distributed approaches. The probability distribution of the model error is derived in the Gaussian space, through a meta-Gaussian approach. The normal quantile transform is applied in order to make the marginal probability distributions of the model error and the conditioning variables Gaussian. Then the above marginal probability distributions are related through the multivariate Gaussian distribution, whose parameters are estimated via multiple regression. Application of the inverse of the normal quantile transform allows the user to derive the confidence limits of the model output for an assigned significance level. The proposed technique is valid under statistical assumptions that are essentially those conditioning the validity of the multiple regression in the Gaussian space. Statistical tests
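The meta-Gaussian recipe (normal quantile transform, regression in Gaussian space, back-transform for confidence limits) can be sketched in a few lines. This is a toy with synthetic data and a single conditioning variable (the model output); the names, distributions, and 90% level are illustrative assumptions, not the paper's case study.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(0)

def nqt(x):
    """Normal quantile transform: map a sample to standard-normal
    scores via its empirical CDF (Weibull plotting positions)."""
    ranks = x.argsort().argsort() + 1.0
    return np.array([nd.inv_cdf(r / (len(x) + 1.0)) for r in ranks])

# Synthetic example: model output q_sim and a heteroscedastic error.
q_sim = rng.gamma(2.0, 50.0, 2000)
err = rng.normal(0.0, 0.1 * q_sim)              # error grows with flow

# Transform both variables to the Gaussian space.
z_err, z_sim = nqt(err), nqt(q_sim)

# Regression in Gaussian space (one conditioning variable for brevity):
# E[z_err | z_sim] = b * z_sim, residual standard deviation s.
b = np.sum(z_sim * z_err) / np.sum(z_sim ** 2)
s = np.std(z_err - b * z_sim)

def inv_nqt(z, sample):
    """Invert the empirical NQT by interpolating the sorted sample."""
    srt = np.sort(sample)
    p = np.array([nd.cdf(v) for v in z])
    return np.interp(p, np.arange(1, len(srt) + 1) / (len(srt) + 1.0), srt)

# 90% confidence limits of the model error, back in real space.
z_lo = b * z_sim + nd.inv_cdf(0.05) * s
z_hi = b * z_sim + nd.inv_cdf(0.95) * s
e_lo, e_hi = inv_nqt(z_lo, err), inv_nqt(z_hi, err)
coverage = np.mean((err >= e_lo) & (err <= e_hi))
print(f"empirical coverage of 90% limits: {coverage:.2f}")
```

On this synthetic data the empirical coverage of the back-transformed limits lands close to the nominal 90%, which is the basic sanity check the method should pass.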
Modeling pore corrosion in normally open gold-plated copper connectors.
Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert
2008-09-01
The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
Biomechanical modelling of normal pressure hydrocephalus.
Dutta-Roy, Tonmoy; Wittek, Adam; Miller, Karol
2008-07-19
This study investigates the mechanics of normal pressure hydrocephalus (NPH) growth using a computational approach. We created a generic 3-D brain mesh of a healthy human brain and modelled the brain parenchyma as single phase and biphasic continuum. In our model, hyperelastic constitutive law and finite deformation theory described deformations within the brain parenchyma. We used a value of 155.77 Pa for the shear modulus (mu) of the brain parenchyma. Additionally, in our model, contact boundary definitions constrained the brain outer surface inside the skull. We used transmantle pressure difference to load the model. Fully nonlinear, implicit finite element procedures in the time domain were used to obtain the deformations of the ventricles and the brain. To the best of our knowledge, this was the first 3-D, fully nonlinear model investigating NPH growth mechanics. Clinicians generally accept that at most 1 mm of Hg transmantle pressure difference (133.416 Pa) is associated with the condition of NPH. Our computations showed that transmantle pressure difference of 1 mm of Hg (133.416 Pa) did not produce NPH for either single phase or biphasic model of the brain parenchyma. A minimum transmantle pressure difference of 1.764 mm of Hg (235.44 Pa) was required to produce the clinical condition of NPH. This suggested that the hypothesis of a purely mechanical basis for NPH growth needs to be revised. We also showed that under equal transmantle pressure difference load, there were no significant differences between the computed ventricular volumes for biphasic and incompressible/nearly incompressible single phase model of the brain parenchyma. As a result, there was no major advantage gained by using a biphasic model for the brain parenchyma. We propose that for modelling NPH, nearly incompressible single phase model of the brain parenchyma was adequate. Single phase treatment of the brain parenchyma simplified the mathematical description of the NPH model and resulted in
Aerosol Behavior Log-Normal Distribution Model.
Energy Science and Technology Software Center (ESTSC)
2001-10-22
HAARM3, an acronym for Heterogeneous Aerosol Agglomeration Revised Model 3, is the third program in the HAARM series developed to predict the time-dependent behavior of radioactive aerosols under postulated LMFBR accident conditions. HAARM3 was developed to include mechanisms of aerosol growth and removal which had not been accounted for in the earlier models. In addition, experimental measurements obtained on sodium oxide aerosols have been incorporated in the code. As in HAARM2, containment gas temperature, pressure, and temperature gradients normal to interior surfaces are permitted to vary with time. The effects of reduced density on sodium oxide agglomerate behavior and of nonspherical shape of particles on aerosol behavior mechanisms are taken into account, and aerosol agglomeration due to turbulent air motion is considered. Also included is a capability to calculate aerosol concentration attenuation factors and to restart problems requiring long computing times.
Low-probability flood risk modeling for New York City.
Aerts, Jeroen C J H; Lin, Ning; Botzen, Wouter; Emanuel, Kerry; de Moel, Hans
2013-05-01
The devastating impact of Hurricane Sandy (2012) again showed that New York City (NYC) is one of the cities most vulnerable to coastal flooding in the world. The low-lying areas in NYC can be flooded by nor'easter storms and North Atlantic hurricanes. The few studies that have estimated potential flood damage for NYC base their damage estimates on only a single, or a few, possible flood events. The objective of this study is to assess the full distribution of hurricane flood risk in NYC. This is done by calculating potential flood damage with a flood damage model that uses many possible storms and surge heights as input. These storms are representative of the low-probability/high-impact flood hazard faced by the city. Exceedance probability-loss curves are constructed under different assumptions about the severity of flood damage. The estimated flood damage to buildings for NYC is between US$59 and 129 million/year. The damage caused by a 1/100-year storm surge is within a range of US$2 bn-5 bn, while this is between US$5 bn and 11 bn for a 1/500-year storm surge. An analysis of flood risk in each of the five boroughs of NYC finds that Brooklyn and Queens are the most vulnerable to flooding. This study examines several uncertainties in the various steps of the risk analysis, which resulted in variations in flood damage estimations. These uncertainties include: the interpolation of flood depths; the use of different flood damage curves; and the influence of the spectra of characteristics of the simulated hurricanes. PMID:23383711
A Probability Model of Accuracy in Deception Detection Experiments.
ERIC Educational Resources Information Center
Park, Hee Sun; Levine, Timothy R.
2001-01-01
Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…
Smits, Iris A M; Timmerman, Marieke E; Stegeman, Alwin
2016-05-01
Maximum likelihood estimation of the linear factor model for continuous items assumes normally distributed item scores. We consider deviations from normality by means of a skew-normally distributed factor model or a quadratic factor model. We show that the item distributions under a skew-normal factor model are equivalent to those under a quadratic model up to third-order moments. The reverse only holds if the quadratic loadings are equal to each other and within certain bounds. We illustrate that observed data which follow any skew-normal factor model can be so well approximated with the quadratic factor model that the models are empirically indistinguishable, and that the reverse does not hold in general. The choice between the two models to account for deviations from normality is illustrated by an empirical example from clinical psychology. PMID:26566696
Estimation of State Transition Probabilities: A Neural Network Model
NASA Astrophysics Data System (ADS)
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
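One simple way to realize such a prediction-error update (a sketch, not necessarily the authors' exact learning rule) is a delta rule that nudges the predicted next-state distribution toward the observed one-hot transition; the transition matrix and learning rate below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

N_STATES = 3
# Ground-truth transition matrix the learner must recover.
T_true = np.array([[0.70, 0.20, 0.10],
                   [0.10, 0.60, 0.30],
                   [0.25, 0.25, 0.50]])

# Start from a uniform guess; after each observed transition s -> s',
# move the predicted distribution for s toward the one-hot observation.
T_hat = np.full((N_STATES, N_STATES), 1.0 / N_STATES)
eta = 0.05                      # learning rate
s = 0
for _ in range(20_000):
    s_next = rng.choice(N_STATES, p=T_true[s])
    onehot = np.eye(N_STATES)[s_next]
    T_hat[s] += eta * (onehot - T_hat[s])   # state prediction error update
    s = s_next

print(np.round(T_hat, 2))
```

Because the one-hot target and the current row both sum to one, each update leaves the row normalized, and the estimate converges to the true probabilities up to noise of order sqrt(eta).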
Probability distributed time delays: integrating spatial effects into temporal models
2010-01-01
Background In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly stressed when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially-resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results We present a method that incorporates spatial information by means of tailored, probability distributed time-delays. These distributions can be obtained directly from a single in silico experiment or from a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational costs and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDE) that can be used in scenarios of high molecular concentrations and low noise propagation. Conclusions Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational costs. This is of particular importance given that time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially-resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity in time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated
Modelling probabilities of heavy precipitation by regional approaches
NASA Astrophysics Data System (ADS)
Gaal, L.; Kysely, J.
2009-09-01
Extreme precipitation events are associated with large negative consequences for human society, mainly as they may trigger floods and landslides. The recent series of flash floods in central Europe (affecting several isolated areas) on June 24-28, 2009, the worst one over several decades in the Czech Republic as to the number of persons killed and the extent of damage to buildings and infrastructure, is an example. Estimates of growth curves and design values (corresponding e.g. to 50-yr and 100-yr return periods) of precipitation amounts, together with their uncertainty, are important in hydrological modelling and other applications. The interest in high quantiles of precipitation distributions is also related to possible climate change effects, as climate model simulations tend to project increased severity of precipitation extremes in a warmer climate. The present study compares - in terms of Monte Carlo simulation experiments - several methods for modelling probabilities of precipitation extremes that make use of ‘regional approaches’: the estimation of distributions of extremes takes into account data in a ‘region’ (‘pooling group’), in which one may assume that the distributions at individual sites are identical apart from a site-specific scaling factor (the condition is referred to as ‘regional homogeneity’). In other words, all data in a region - often weighted in some way - are taken into account when estimating the probability distribution of extremes at a given site. The advantage is that sampling variations in the estimates of model parameters and high quantiles are to a large extent reduced compared to the single-site analysis. We focus on the ‘region-of-influence’ (ROI) method which is based on the identification of unique pooling groups (forming the database for the estimation) for each site under study. The similarity of sites is evaluated in terms of a set of site attributes related to the distributions of extremes. The issue of
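The variance-reduction argument for pooling can be illustrated with an index-flood style toy example: sites share a common growth curve up to a site-specific scale factor. The Gumbel distribution, the moment-based fit, and the sample sizes below are all illustrative assumptions, not the study's ROI procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

N_SITES, N_YEARS = 15, 40
# Regional homogeneity: identical growth curve (here Gumbel) at every
# site, differing only by a site-specific scaling factor.
site_index = rng.uniform(20.0, 80.0, N_SITES)
data = site_index[:, None] * rng.gumbel(1.0, 0.3, (N_SITES, N_YEARS))

def gumbel_quantile(sample, T):
    """Fit a Gumbel by the method of moments and return the
    T-year return level of the fitted distribution."""
    beta = sample.std(ddof=1) * np.sqrt(6.0) / np.pi
    mu = sample.mean() - 0.5772 * beta          # Euler-Mascheroni constant
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# At-site 100-yr estimate for site 0 (40 values only) ...
at_site = gumbel_quantile(data[0], 100)

# ... vs a regional estimate: scale out each site's index (its mean),
# pool all normalized data, fit once, then rescale to site 0.
norm = data / data.mean(axis=1, keepdims=True)
regional = data[0].mean() * gumbel_quantile(norm.ravel(), 100)
print(at_site, regional)
```

Both estimators target the same quantile; the pooled one uses 600 values instead of 40, so its sampling variance is far smaller, which is exactly the advantage the abstract describes.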
Probability Distribution Functions of freak-waves: nonlinear vs linear model
NASA Astrophysics Data System (ADS)
Kachulin, Dmitriy; Dyachenko, Alexander; Zakharov, Vladimir
2015-04-01
There is no doubt that estimating the probability of a freak wave appearing at the ocean surface has practical meaning. Among the different mechanisms for this phenomenon, linear dispersion and modulational instability are generally recognized. For the linear equation of water waves, Probability Distribution Functions (PDF) can be calculated analytically: the surface elevation follows the normal Gaussian distribution, or the Rayleigh distribution for absolute values of elevations. For nonlinear waves one can expect something different. In this report we consider and compare these two mechanisms for various levels of nonlinearity. We present results of numerical experiments on the calculation of Probability Distribution Functions for surface elevations of water waves in both nonlinear and linear models. Both models demonstrate a Rayleigh distribution of surface elevations; however, the dispersion of the PDF in the nonlinear case is much larger than in the linear case. This work was supported by the Grant "Wave turbulence: theory, numerical simulation, experiment" #14-22-00174 of the Russian Science Foundation. Numerical simulation was performed at the Informational Computational Center of the Novosibirsk State University.
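The linear-model baseline quoted above is easy to reproduce numerically: a narrow-band sum of independent harmonics yields a near-Gaussian surface elevation whose envelope follows the Rayleigh law. A small sketch (the spectrum, record length, and 2-sigma threshold are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear (Gaussian) sea surface: superpose many independent harmonics
# drawn from a narrow band of angular frequencies.
t = np.linspace(0.0, 20_000.0, 20_000)
n_modes = 200
phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)
freqs = rng.uniform(0.9, 1.1, n_modes)           # narrow-band spectrum
amp = 1.0 / np.sqrt(n_modes)

# In-phase and quadrature components of the analytic signal.
eta_c = (amp * np.cos(np.outer(t, freqs) + phases)).sum(axis=1)
eta_s = (amp * np.sin(np.outer(t, freqs) + phases)).sum(axis=1)

# The elevation eta_c is ~Gaussian; its envelope is Rayleigh distributed.
envelope = np.hypot(eta_c, eta_s)
sigma = np.sqrt(eta_c.var())

# Compare the empirical exceedance P(envelope > 2*sigma) with the
# Rayleigh prediction exp(-h^2 / (2 sigma^2)) at h = 2*sigma.
emp = float((envelope > 2.0 * sigma).mean())
theory = float(np.exp(-2.0))
print(emp, theory)
```

The empirical exceedance of the 2-sigma level matches exp(-2) ≈ 0.135 up to sampling noise, which is the linear-theory reference against which nonlinear freak-wave statistics are judged.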
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Marijuana odor perception: studies modeled from probable cause cases.
Doty, Richard L; Wudarski, Thomas; Marshall, David A; Hastings, Lloyd
2004-04-01
The 4th Amendment of the United States Constitution protects American citizens against unreasonable search and seizure without probable cause. Although law enforcement officials routinely rely solely on the sense of smell to justify probable cause when entering vehicles and dwellings to search for illicit drugs, the accuracy of their perception in this regard has rarely been questioned and, to our knowledge, never tested. In this paper, we present data from two empirical studies based upon actual legal cases in which the odor of marijuana was used as probable cause for search. In the first, we simulated a situation in which, during a routine traffic stop, the odor of packaged marijuana located in the trunk of an automobile was said to be detected through the driver's window. In the second, we investigated a report that marijuana odor was discernible from a considerable distance from the chimney effluence of diesel exhaust emanating from an illicit California grow room. Our findings suggest that the odor of marijuana was not reliably discernible by persons with an excellent sense of smell in either case. These studies are the first to examine the ability of humans to detect marijuana in simulated real-life situations encountered by law enforcement officials, and are particularly relevant to the issue of probable cause. PMID:15141780
ERIC Educational Resources Information Center
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…
Valve, normally open, titanium: Pyronetics Model 1425
NASA Technical Reports Server (NTRS)
Avalos, E.
1972-01-01
An operating test series was applied to two explosive actuated, normally open, titanium valves. There were no failures. Tests included: proof pressure and external leakage test, gross leak test, post actuation leakage test, and burst pressure test.
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
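For reference, a generic textbook implementation of the LKB NTCP calculation (DVH reduction to a generalized equivalent uniform dose, then a probit link) is sketched below. The DVH and parameter values are illustrative placeholders, not the paper's fitted spinal cord parameters or its simplified implementation.

```python
import numpy as np
from statistics import NormalDist

def lkb_ntcp(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    doses: bin doses (Gy); volumes: fractional volumes (sum to 1).
    The DVH is reduced to gEUD = (sum v_i * D_i^(1/n))^n, and
    NTCP = Phi(t) with t = (gEUD - TD50) / (m * TD50)."""
    geud = float(np.sum(volumes * doses ** (1.0 / n)) ** n)
    t = (geud - td50) / (m * td50)
    return NormalDist().cdf(t)

# Illustrative differential DVH and serial-organ-like parameters.
doses = np.array([5.0, 10.0, 15.0, 20.0])
vols = np.array([0.4, 0.3, 0.2, 0.1])
print(round(lkb_ntcp(doses, vols, n=0.05, m=0.175, td50=66.5), 4))
```

With a small volume parameter n, the gEUD is dominated by the hottest DVH bin (serial-organ behavior), and a uniform dose equal to TD50 yields NTCP = 0.5 by construction.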
Simplifying Probability Elicitation and Uncertainty Modeling in Bayesian Networks
Paulson, Patrick R; Carroll, Thomas E; Sivaraman, Chitra; Neorr, Peter A; Unwin, Stephen D; Hossain, Shamina S
2011-04-16
In this paper we contribute two methods that simplify the demands of knowledge elicitation for particular types of Bayesian networks. The first method simplifies the task of providing probabilities when the states that a random variable takes can be described by a new, fully ordered state set in which a state implies all the preceding states. The second method leverages the Dempster-Shafer theory of evidence to provide a way for the expert to express the degree of ignorance they feel about the estimates being provided.
Jakobi, Annika; Bandurska-Luque, Anna; Stützer, Kristin; Haase, Robert; Löck, Steffen; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniela; and others
2015-08-01
Purpose: The purpose of this study was to determine, by treatment plan comparison along with normal tissue complication probability (NTCP) modeling, whether a subpopulation of patients with head and neck squamous cell carcinoma (HNSCC) could be identified that would gain substantial benefit from proton therapy in terms of NTCP. Methods and Materials: For 45 HNSCC patients, intensity modulated radiation therapy (IMRT) was compared to intensity modulated proton therapy (IMPT). Physical dose distributions were evaluated as well as the resulting NTCP values, using modern models for acute mucositis, xerostomia, aspiration, dysphagia, laryngeal edema, and trismus. Patient subgroups were defined based on primary tumor location. Results: Generally, IMPT reduced the NTCP values while keeping similar target coverage for all patients. Subgroup analyses revealed a higher individual reduction of swallowing-related side effects by IMPT for patients with tumors in the upper head and neck area, whereas the risk reduction of acute mucositis was more pronounced in patients with tumors in the larynx region. More patients with tumors in the upper head and neck area had a reduction in NTCP of more than 10%. Conclusions: Subgrouping can help to identify patients who may benefit more than others from the use of IMPT and, thus, can be a useful tool for a preselection of patients in the clinic where there are limited PT resources. Because the individual benefit differs within a subgroup, the relative merits should additionally be evaluated by individual treatment plan comparisons.
NASA Astrophysics Data System (ADS)
Mõttus, Matti; Stenberg, Pauline; Rautiainen, Miina
2007-02-01
Photon recollision probability, or the probability by which a photon scattered from a phytoelement in the canopy will interact within the canopy again, has previously been shown to approximate well the fractions of radiation scattered and absorbed by homogeneous plant covers. To test the applicability of the recollision probability theory to more complicated canopy structures, a set of modeled stands was generated using allometric relations for Scots pine trees growing in central Finland. A hybrid geometric-optical model (FRT, or the Kuusk-Nilson model) was used to simulate the reflectance and transmittance of the modeled forests consisting of ellipsoidal tree crowns and, on the basis of the simulations, the recollision probability (p) was calculated for the canopies. As the recollision probability theory assumes energy conservation, a method to check and ensure energy conservation in the model was first developed. The method enabled matching the geometric-optical and two-stream submodels of the hybrid FRT model, and more importantly, allowed calculation of the recollision probability from model output. Next, to assess the effect of canopy structure on the recollision probability, the obtained p-values were compared to those calculated for structureless (homogeneous) canopies with similar effective LAI using a simple two-stream radiation transfer model. Canopy structure was shown to increase the recollision probability, implying that structured canopies absorb the radiation interacting with the canopy more efficiently, and also to change the escape probabilities for different scattering orders. Most importantly, the study demonstrated that the concept of recollision probability is coherent with physically based canopy reflectance models which use the classical radiative transfer theory. Furthermore, it was shown that as a first approximation, the recollision probability can be considered to be independent of wavelength. Finally, different algorithms for
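The bookkeeping behind the recollision probability has a convenient closed form: if a photon is scattered (rather than absorbed) with probability ω at each interaction, and a scattered photon interacts again with probability p, summing the geometric series over scattering orders gives an absorbed fraction of (1 − ω)/(1 − pω). A quick Monte Carlo check (the values of ω and p are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(11)

def absorbed_fraction_mc(omega, p, n_photons=200_000):
    """Track photon counts order by order: at each interaction a photon
    is absorbed with probability 1-omega; a scattered photon recollides
    within the canopy with probability p, otherwise it escapes."""
    absorbed = 0
    alive = n_photons                 # photons interacting at this order
    while alive > 0:
        n_abs = int(rng.binomial(alive, 1.0 - omega))
        scattered = alive - n_abs
        absorbed += n_abs
        alive = int(rng.binomial(scattered, p))   # recollide or escape
    return absorbed / n_photons

omega, p = 0.6, 0.55
mc = absorbed_fraction_mc(omega, p)
closed_form = (1.0 - omega) / (1.0 - p * omega)   # geometric-series sum
print(mc, closed_form)
```

Because ω depends on wavelength while p (to first approximation, as the abstract notes) does not, this single structural parameter carries the canopy-structure information across the spectrum.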
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 − k log p)^(−1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed. PMID:27303338
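The central formula of the abstract, w(p) = (1 − k log p)^(−1), is easy to probe numerically; k = 1 below is an arbitrary illustrative choice, not a value fitted in the study.

```python
import numpy as np

def w_hyperbolic(p, k):
    """Probability weighting function from hyperbolic discounting of a
    geometric waiting time: w(p) = 1 / (1 - k * log(p)).
    Note log(p) <= 0 for p in (0, 1], so the denominator is >= 1."""
    return 1.0 / (1.0 - k * np.log(p))

p = np.array([0.01, 0.1, 0.5, 0.9, 1.0])
vals = w_hyperbolic(p, k=1.0)
print(np.round(vals, 3))
```

The function satisfies w(1) = 1, is strictly increasing in p, and substantially overweights small probabilities (e.g. w(0.01) is far above 0.01), the qualitative signature expected of probability weighting functions.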
Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz
An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as geometrically accurate models, human occupant models, and advanced material models that include nonlinear stress-strain behavior and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2007-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as geometrically accurate models, human occupant models, and advanced material models that include nonlinear stress-strain behavior and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
Modeling Outcomes from Probability Tasks: Sixth Graders Reasoning Together
ERIC Educational Resources Information Center
Alston, Alice; Maher, Carolyn
2003-01-01
This report considers the reasoning of sixth grade students as they explore problem tasks concerning the fairness of dice games. The particular focus is the students' interactions, verbal and non-verbal, as they build and justify representations that extend their basic understanding of number combinations in order to model the outcome set of a…
Physical model assisted probability of detection in nondestructive evaluation
Li, M.; Meeker, W. Q.; Thompson, R. B.
2011-06-23
Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
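The empirical backbone of such analyses is the common "â versus a" signal-response model: log signal is regressed on log flaw size, and the probability of detection (POD) follows from the normal error model and a decision threshold. A minimal sketch with hypothetical regression parameters (in the paper, physics-based knowledge would inform or replace the purely empirical fit):

```python
import math

def normal_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pod(a, beta0, beta1, threshold, sigma):
    """Probability of detection for flaw size a under the 'a-hat versus a'
    model: log signal = beta0 + beta1*ln(a) + normal noise (sd sigma),
    with detection when the signal exceeds a decision threshold.
    All parameter values used below are hypothetical."""
    mu = beta0 + beta1 * math.log(a)
    return normal_cdf((mu - threshold) / sigma)

# POD is 50% where the mean response crosses the threshold, and rises with size.
print(pod(1.0, 0.0, 1.0, 0.0, 0.3))  # 0.5
print(pod(0.5, 0.0, 1.0, 0.0, 0.3) < pod(2.0, 0.0, 1.0, 0.0, 0.3))  # True
```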
Some aspects of statistical modeling of human-error probability
Prairie, R. R.
1982-01-01
Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several ongoing efforts in the US and elsewhere with the purpose of modeling human error so that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA event tree to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of his fault tree that could be affected by human error. He then asks the human factors (HF) analyst to perform an HRA on each such element.
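The quantification step behind an event tree is simple arithmetic: each accident sequence's probability is the product of its branch probabilities, with HRA supplying the human-error branches. A toy sketch with entirely hypothetical values and an independence assumption that real analyses would scrutinize:

```python
def sequence_probability(branch_probs):
    """Probability of one event-tree sequence: the product of its branch
    probabilities (assuming independent branches, a common first cut)."""
    p = 1.0
    for b in branch_probs:
        p *= b
    return p

# Hypothetical sequence: initiating event, operator fails to diagnose,
# operator fails to recover. Values are illustrative only.
init, diagnose_fail, recover_fail = 1e-2, 5e-2, 1e-1
core_damage = sequence_probability([init, diagnose_fail, recover_fail])
print(core_damage)  # ≈ 5e-05
```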
Physical Model Assisted Probability of Detection in Nondestructive Evaluation
NASA Astrophysics Data System (ADS)
Li, M.; Meeker, W. Q.; Thompson, R. B.
2011-06-01
Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
Probabilistic Independence Networks for Hidden Markov Probability Models
NASA Technical Reports Server (NTRS)
Smyth, Padhraic; Heckerman, David; Jordan, Michael I
1996-01-01
In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs.
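As a concrete instance of the chain-structured inference that PINs generalize, the forward algorithm computes the probability of an observation sequence by propagating and summing state weights along the chain. A minimal sketch with a hypothetical two-state model:

```python
def forward(obs, init, trans, emit):
    """Forward algorithm: P(observation sequence) for a discrete HMM.
    init[i], trans[i][j], emit[i][o] are plain nested lists."""
    alpha = [init[i] * emit[i][obs[0]] for i in range(len(init))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(len(init))) * emit[j][o]
            for j in range(len(init))
        ]
    return sum(alpha)

# Toy two-state model (all parameters hypothetical).
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
print(forward([0, 1, 0], init, trans, emit))
```

The same quantity is what a general PIN inference algorithm would return on the equivalent chain-structured network.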
A simulation model for estimating probabilities of defects in welds
Chapman, O.J.V.; Khaleel, M.A.; Simonen, F.A.
1996-12-01
In recent work for the US Nuclear Regulatory Commission in collaboration with Battelle Pacific Northwest National Laboratory, Rolls-Royce and Associates, Ltd., has adapted an existing model for piping welds to address welds in reactor pressure vessels. This paper describes the flaw estimation methodology as it applies to flaws in reactor pressure vessel welds (but not flaws in base metal or flaws associated with the cladding process). Details of the associated computer software (RR-PRODIGAL) are provided. The approach uses expert elicitation and mathematical modeling to simulate the steps in manufacturing a weld and the errors that lead to different types of weld defects. The defects that may initiate in weld beads include center cracks, lack of fusion, slag, pores with tails, and cracks in heat affected zones. Various welding processes are addressed including submerged metal arc welding. The model simulates the effects of both radiographic and dye penetrant surface inspections. Output from the simulation gives occurrence frequencies for defects as a function of both flaw size and flaw location (surface connected and buried flaws). Numerical results are presented to show the effects of submerged metal arc versus manual metal arc weld processes.
A Comparison Of The Mycin Model For Reasoning Under Uncertainty To A Probability Based Model
NASA Astrophysics Data System (ADS)
Neapolitan, Richard E.
1986-03-01
Rule-based expert systems are those in which a certain number of IF-THEN rules are assumed to hold. Based on the verity of some assertions, the rules deduce new conclusions. In many cases, neither the rules nor the assertions are known with certainty. The system must then be able to obtain a measure of partial belief in the conclusion based upon measures of partial belief in the assertions and the rule. A problem arises when two or more rules (items of evidence) argue for the same conclusion. As proven in , certain assumptions concerning the independence of the two items of evidence are necessary before the certainties can be combined. In the current paper, it is shown how the well known MYCIN model combines the certainties from two items of evidence. The validity of the model is then proven based on the model's assumptions of independence of evidence. The assumptions are that the evidence must be independent in the whole space, in the space of the conclusion, and in the space of the complement of the conclusion. Next a probability-based model is described and compared to the MYCIN model. It is proven that the probabilistic assumptions for this model are weaker (independence is necessary only in the space of the conclusion and the space of the complement of the conclusion), and therefore more appealing. An example is given to show how the added assumption in the MYCIN model is, in fact, the most restrictive assumption. It is also proven that, when two rules argue for the same conclusion, the combinatoric method in a MYCIN version of the probability-based model yields a higher combined certainty than that in the MYCIN model. It is finally concluded that the probability-based model, in light of the comparison, is the better choice.
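The MYCIN combining function discussed here is, for two positive certainty factors supporting the same conclusion, the standard parallel-combination rule; a sketch (the paper's probability-based alternative is not reproduced, only the MYCIN side):

```python
def mycin_combine(cf1, cf2):
    """MYCIN combining function for two positive certainty factors (in [0, 1])
    supporting the same conclusion: CF = CF1 + CF2 * (1 - CF1)."""
    return cf1 + cf2 * (1.0 - cf1)

print(mycin_combine(0.5, 0.5))  # 0.75
```

The rule is commutative and never exceeds 1, which is why it is convenient; the paper's point is that its validity rests on stronger independence assumptions than a probability-based combination requires.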
A simplified model for the assessment of the impact probability of fragments.
Gubinelli, Gianfilippo; Zanelli, Severino; Cozzani, Valerio
2004-12-31
A model was developed for the assessment of fragment impact probability on a target vessel, following the collapse and fragmentation of a primary vessel due to internal pressure. The model provides the probability of impact of a fragment with defined shape, mass and initial velocity on a target of a known shape and at a given position with respect to the source point. The model is based on the ballistic analysis of the fragment trajectory and on the determination of impact probabilities by the analysis of initial direction of fragment flight. The model was validated using available literature data. PMID:15601611
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a -Dst > 880 nT (greater than Carrington) but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
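The maximum-likelihood step can be sketched directly: log-normal MLEs are just the sample mean and standard deviation of the logged maxima, from which exceedance probabilities follow. Synthetic data stand in for the real -Dst series here, and all parameter values are illustrative:

```python
import math
import random
import statistics

random.seed(1)
# Synthetic stand-in for -Dst storm maxima (nT); the study fits observed
# 1957-2012 values instead.
maxima = [random.lognormvariate(math.log(150.0), 0.6) for _ in range(200)]

# Log-normal MLE: sample mean and (population) s.d. of the logs.
logs = [math.log(x) for x in maxima]
mu, sigma = statistics.fmean(logs), statistics.pstdev(logs)

def exceedance(x):
    """P(storm maximum > x) under the fitted log-normal."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Tail probability of a Carrington-class (> 850 nT) storm per sampled event.
print(exceedance(850.0))
```

Bootstrap confidence limits, as in the paper, would repeat the fit on resampled datasets and report quantiles of the resulting estimates.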
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a −Dst ≥ 880 nT (greater than Carrington) but with a wide 95% confidence interval of [490, 1187] nT.
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
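For a stationary renewal process with the date of the last event unknown, the probability of an event within the next T reduces to (1/μ)∫₀^T S(u) du, where S is the survival function of the recurrence distribution and μ its mean; the sketch below evaluates this numerically. The log-normal recurrence model and all parameter values are illustrative, not the paper's:

```python
import math

def lognormal_survival(u, median, shape):
    """S(u) = P(recurrence interval > u) for a log-normal distribution."""
    if u <= 0.0:
        return 1.0
    z = (math.log(u) - math.log(median)) / shape
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def renewal_prob(T, median, shape, n=2000):
    """P(event within T) when the date of the last event is unknown:
    for a stationary renewal process this is (1/mu) * integral_0^T S(u) du,
    evaluated here with the trapezoid rule."""
    mean = median * math.exp(shape ** 2 / 2.0)
    h = T / n
    s = 0.5 * (1.0 + lognormal_survival(T, median, shape))
    s += sum(lognormal_survival(k * h, median, shape) for k in range(1, n))
    return s * h / mean

# Illustrative values: mean recurrence ~100 yr, 30-yr forecast window.
median, shape = 88.25, 0.5
mean = median * math.exp(shape ** 2 / 2.0)
poisson = 1.0 - math.exp(-30.0 / mean)
renewal = renewal_prob(30.0, median, shape)
print(renewal > poisson)  # True: the renewal probability exceeds Poisson
```

This reproduces the qualitative finding quoted above: once the window is a sizable fraction of the mean recurrence interval, the renewal probability exceeds the Poisson approximation.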
Cold and hot cognition: quantum probability theory and realistic psychological modeling.
Corr, Philip J
2013-06-01
Typically, human decision making is emotionally "hot" and does not conform to "cold" classical probability (CP) theory. As quantum probability (QP) theory emphasises order, context, superposition states, and nonlinear dynamic effects, one of its major strengths may be its power to unify formal modeling and realistic psychological theory (e.g., information uncertainty, anxiety, and indecision, as seen in the Prisoner's Dilemma). PMID:23673029
ERIC Educational Resources Information Center
Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan
2014-01-01
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…
Emptiness Formation Probability of the Six-Vertex Model and the Sixth Painlevé Equation
NASA Astrophysics Data System (ADS)
Kitaev, A. V.; Pronko, A. G.
2016-07-01
We show that the emptiness formation probability of the six-vertex model with domain wall boundary conditions at its free-fermion point is a τ-function of the sixth Painlevé equation. Using this fact we derive asymptotics of the emptiness formation probability in the thermodynamic limit.
Discrete Latent Markov Models for Normally Distributed Response Data
ERIC Educational Resources Information Center
Schmittmann, Verena D.; Dolan, Conor V.; van der Maas, Han L. J.; Neale, Michael C.
2005-01-01
Van de Pol and Langeheine (1990) presented a general framework for Markov modeling of repeatedly measured discrete data. We discuss analogous single-indicator models for normally distributed responses. In contrast to discrete models, which have been studied extensively, analogous continuous response models have hardly been considered. These…
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
Application of the response probability density function technique to biodynamic models.
Hershey, R L; Higgins, T H
1978-01-01
A method has been developed, which we call the "response probability density function technique," which has applications in predicting the probability of injury in a wide range of biodynamic situations. The method, which was developed in connection with sonic boom damage prediction, utilized the probability density function of the excitation force and the probability density function of the sensitivity of the material being acted upon. The method is especially simple to use when both these probability density functions are lognormal. Studies thus far have shown that the stresses from sonic booms, as well as the strengths of glass and mortars, are distributed lognormally. Some biodynamic processes also have lognormal distributions and are, therefore, amenable to modeling by this technique. In particular, this paper discusses the application of the response probability density function technique to the analysis of the thoracic response to air blast and the prediction of skull fracture from head impact. PMID:623590
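When both the excitation (stress) and the sensitivity (strength) are lognormal, as the abstract notes, the technique reduces to a closed form: the log of the stress-to-strength ratio is normal, so the probability of damage is a single normal-CDF evaluation. A sketch with hypothetical log-scale parameters:

```python
import math

def failure_probability(mu_stress, sd_stress, mu_strength, sd_strength):
    """P(stress > strength) when both are log-normal; mu_*/sd_* are the mean
    and s.d. of the underlying logs. ln(stress) - ln(strength) is then
    normal with mean mu_stress - mu_strength and variance equal to the sum
    of variances, giving a closed form."""
    mu = mu_stress - mu_strength
    sd = math.hypot(sd_stress, sd_strength)
    return 0.5 * math.erfc(-mu / (sd * math.sqrt(2.0)))

# Equal log-means: failure probability is exactly 1/2, regardless of spread.
print(failure_probability(1.0, 0.3, 1.0, 0.4))  # 0.5
```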
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Simpson, Daniel R.; Song, William Y.; Moiseenko, Vitali; Rose, Brent S.; Yashar, Catheryn M.; Mundt, Arno J.; Mell, Loren K.
2012-05-01
Purpose: To test the hypothesis that increased bowel radiation dose is associated with acute gastrointestinal (GI) toxicity in cervical cancer patients undergoing concurrent chemotherapy and intensity-modulated radiation therapy (IMRT), using a previously derived normal tissue complication probability (NTCP) model. Methods: Fifty patients with Stage I-III cervical cancer undergoing IMRT and concurrent weekly cisplatin were analyzed. Acute GI toxicity was graded using the Radiation Therapy Oncology Group scale, excluding upper GI events. A logistic model was used to test correlations between acute GI toxicity and bowel dosimetric parameters. The primary objective was to test the association between Grade ≥2 GI toxicity and the volume of bowel receiving ≥45 Gy (V45) using the logistic model. Results: Twenty-three patients (46%) had Grade ≥2 GI toxicity. The mean (SD) V45 was 143 mL (99). The mean V45 values for patients with and without Grade ≥2 GI toxicity were 176 vs. 115 mL, respectively. Twenty patients (40%) had V45 >150 mL. The proportion of patients with Grade ≥2 GI toxicity with and without V45 >150 mL was 65% vs. 33% (p = 0.03). Logistic model parameter estimates V50 and γ were 161 mL (95% confidence interval [CI] 60-399) and 0.31 (95% CI 0.04-0.63), respectively. On multivariable logistic regression, increased V45 was associated with increased odds of Grade ≥2 GI toxicity (odds ratio 2.19 per 100 mL, 95% CI 1.04-4.63, p = 0.04). Conclusions: Our results support the hypothesis that increasing bowel V45 is correlated with increased GI toxicity in cervical cancer patients undergoing IMRT and concurrent cisplatin. Reducing bowel V45 could reduce the risk of Grade ≥2 GI toxicity by approximately 50% per 100 mL of bowel spared.
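A hedged sketch of a logistic NTCP curve using the reported point estimates V50 = 161 mL and γ = 0.31. The exact parameterization used in the study may differ; here γ is taken as the normalized slope at V50, a common convention:

```python
def ntcp_logistic(v, v50=161.0, gamma50=0.31):
    """Logistic NTCP as a function of bowel V45 (mL), in the common
    parameterization NTCP = 1 / (1 + (v50/v)^(4*gamma50)), where gamma50
    is the normalized slope of the curve at v = v50. v50 and gamma50
    default to the abstract's point estimates; the functional form is an
    assumption of this sketch."""
    return 1.0 / (1.0 + (v50 / v) ** (4.0 * gamma50))

print(ntcp_logistic(161.0))                         # 0.5 at v = v50
print(ntcp_logistic(115.0) < ntcp_logistic(176.0))  # True: risk rises with V45
```

The two volumes in the second line are the reported group means for patients without and with Grade ≥2 toxicity, so the model's monotonicity matches the observed contrast.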
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2000-01-01
We adapted a removal model to estimate detection probability during point count surveys. The model assumes one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently such as Winter Wren and Acadian Flycatcher had high detection probabilities (about 90%) and species that call infrequently such as Pileated Woodpecker had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
A Discrete SIRS Model with Kicked Loss of Immunity and Infection Probability
NASA Astrophysics Data System (ADS)
Paladini, F.; Renna, I.; Renna, L.
2011-03-01
A discrete-time deterministic epidemic model is proposed with the aim of reproducing the behaviour observed in the incidence of real infectious diseases, such as oscillations and irregularities. For this purpose we introduce, in a naïve discrete-time SIRS model, seasonal variability in the loss of immunity and in the infection probability, modelled by sequences of kicks. Restrictive assumptions are made on the parameters of the models, in order to guarantee that the transitions are determined by true probabilities, so that comparisons with stochastic discrete-time predictions can also be provided. Numerical simulations show that the characteristics of real infectious diseases can be adequately modeled.
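A naïve discrete-time SIRS map with a periodically kicked parameter can be sketched as follows. The transition probabilities, functional forms, and parameter values are all illustrative, not the paper's:

```python
import math

def sirs_step(s, i, r, beta, gamma, delta):
    """One step of a naive discrete-time SIRS map. Per-step probabilities:
    infection 1 - exp(-beta*i) for susceptibles, recovery gamma for
    infectives, loss of immunity delta for recovered (all in [0, 1])."""
    p_inf = 1.0 - math.exp(-beta * i)
    new_inf = s * p_inf
    new_rec = i * gamma
    new_loss = r * delta
    return s - new_inf + new_loss, i + new_inf - new_rec, r + new_rec - new_loss

def kicked(t, base, amp, period=52):
    """Seasonally 'kicked' parameter: a pulse added once per period."""
    return base + (amp if t % period == 0 else 0.0)

s, i, r = 0.99, 0.01, 0.0
for t in range(520):
    s, i, r = sirs_step(s, i, r, kicked(t, 0.5, 1.5), 0.2, 0.02)
print(round(s + i + r, 9))  # 1.0: the map conserves the population
```

Because each flow is subtracted from one compartment and added to another, the total population is conserved, mirroring the "true probabilities" constraint the abstract mentions.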
Skew-normal antedependence models for skewed longitudinal data
Chang, Shu-Ching; Zimmerman, Dale L.
2016-01-01
Antedependence models, also known as transition models, have proven to be useful for longitudinal data exhibiting serial correlation, especially when the variances and/or same-lag correlations are time-varying. Statistical inference procedures associated with normal antedependence models are well-developed and have many nice properties, but they are not appropriate for longitudinal data that exhibit considerable skewness. We propose two direct extensions of normal antedependence models to skew-normal antedependence models. The first is obtained by imposing antedependence on a multivariate skew-normal distribution, and the second is a sequential autoregressive model with skew-normal innovations. For both models, necessary and sufficient conditions for pth-order antedependence are established, and likelihood-based estimation and testing procedures for models satisfying those conditions are developed. The procedures are applied to simulated data and to real data from a study of cattle growth. PMID:27279663
General properties of different models used to predict normal tissue complications due to radiation
Kuperman, V. Y.
2008-11-15
In the current study the author analyzes general properties of three different models used to predict normal tissue complications due to radiation: (1) Surviving fraction of normal cells in the framework of the linear quadratic (LQ) equation for cell kill, (2) the Lyman-Kutcher-Burman (LKB) model for normal tissue complication probability (NTCP), and (3) generalized equivalent uniform dose (gEUD). For all considered cases the author assumes fixed average dose to an organ of interest. The author's goal is to establish whether maximizing dose uniformity in the irradiated normal tissues is radiobiologically beneficial. Assuming that NTCP increases with increasing overall cell kill, it is shown that NTCP in the LQ model is maximized for uniform dose. Conversely, NTCP in the LKB and gEUD models is always smaller for a uniform dose to a normal organ than that for a spatially varying dose if parameter n in these models is small (i.e., n<1). The derived conflicting properties of the considered models indicate the need for more studies before these models can be utilized clinically for plan evaluation and/or optimization of dose distributions. It is suggested that partial-volume irradiation can be used to establish the validity of the considered models.
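The gEUD and LKB quantities compared in the study can be sketched directly. The DVH, n, TD50, and m values below are illustrative, not fitted values:

```python
import math

def geud(doses, volumes, n):
    """Generalized EUD for a normalized DVH (volumes sum to 1):
    gEUD = (sum_i v_i * d_i^(1/n))^n, with a = 1/n the usual exponent."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, n, td50, m):
    """Lyman-Kutcher-Burman NTCP: normal CDF of t = (gEUD - TD50)/(m*TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Uniform dose: gEUD equals that dose for any n.
print(geud([40.0, 40.0], [0.5, 0.5], 0.1))  # ≈ 40.0
# Small n (serial behavior): the hottest subvolume dominates gEUD, so a
# nonuniform plan with the same mean dose scores worse than a uniform one.
print(geud([60.0, 20.0], [0.5, 0.5], 0.05) > geud([60.0, 20.0], [0.5, 0.5], 1.0))
```

The second comparison illustrates the paper's point: for n < 1 these models penalize hot spots, so a uniform dose yields a lower NTCP than a nonuniform dose with the same mean.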
Normal seasonal variations for atmospheric radon concentration: a sinusoidal model.
Hayashi, Koseki; Yasuoka, Yumi; Nagahama, Hiroyuki; Muto, Jun; Ishikawa, Tetsuo; Omori, Yasutaka; Suzuki, Toshiyuki; Homma, Yoshimi; Mukai, Takahiro
2015-01-01
Anomalous radon readings in air have been reported before earthquake activity. However, careful measurements of atmospheric radon concentrations during a normal period are required to identify anomalous variations in a precursor period. In this study, we obtained radon concentration data for 5 years (2003-2007) that can be considered a normal period and compared it with data from the precursory period of 2008 until March 2011, when the 2011 Tohoku-Oki Earthquake occurred. Then, we established a model for seasonal variation by fitting a sinusoidal model to the radon concentration data during the normal period, considering that the seasonal variation was affected by atmospheric turbulence. By determining the amplitude in the sinusoidal model, the normal variation of the radon concentration can be estimated. Thus, the results of this method can be applied to identify anomalous radon variations before an earthquake. PMID:25464051
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2002-01-01
Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
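The removal estimator can be sketched as a one-parameter grid-search MLE: q is the per-minute probability of remaining undetected, and the multinomial cell probabilities for the 3-, 2-, and 5-minute intervals follow directly from the interval lengths. The counts below are hypothetical:

```python
import math

def detection_mle(counts, intervals=(3.0, 2.0, 5.0)):
    """Removal-model MLE by grid search. A bird remains undetected for one
    minute with probability q, so it is first detected in interval j
    (length t, after `elapsed` minutes) with probability
    q^elapsed * (1 - q^t), conditioned on detection within the full count.
    Returns (q_hat, overall detection probability 1 - q^total)."""
    total = sum(intervals)
    best_q, best_ll = None, -math.inf
    for step in range(1, 1000):
        q = step / 1000.0
        denom = 1.0 - q ** total
        elapsed, ll = 0.0, 0.0
        for c, t in zip(counts, intervals):
            cell = q ** elapsed * (1.0 - q ** t) / denom
            ll += c * math.log(cell)
            elapsed += t
        if ll > best_ll:
            best_q, best_ll = q, ll
    return best_q, 1.0 - best_q ** total

# Hypothetical counts first detected in minutes 0-3, 3-5, and 5-10.
q_hat, p_detect = detection_mle([70, 15, 15])
print(round(p_detect, 2))
```

A quick sanity check: counts generated in exact proportion to the model's cells at q = 0.8 should return an estimate very close to 0.8.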
NASA Astrophysics Data System (ADS)
Pu, H. C.; Lin, C. H.
2016-05-01
To investigate the seismic behavior of crustal deformation, we deployed a dense seismic network in the Hsinchu area of northwestern Taiwan between 2004 and 2006. From the abundant local micro-earthquakes recorded by this network, we determined 274 focal mechanisms among ~1300 seismic events. Interestingly, the dominant energy of strike-slip and normal faulting mechanisms repeatedly alternated with each other within the two years. The strike-slip and normal faulting earthquakes were also accompanied, respectively, by surface slip along N60°E and by uplift obtained from the continuous GPS data. These phenomena probably resulted from slow uplift in the mid-crust beneath the northwestern Taiwan area. When deep slow uplift was active below 10 km depth along either the boundary fault or a blind fault, the push of the uplifting material would simultaneously produce normal faulting earthquakes at shallow depths (0-10 km) and slight surface uplift. When the deep slow uplift stopped, strike-slip faulting earthquakes would again dominate owing to the strong horizontal plate convergence across Taiwan. Since normal faulting earthquakes repeatedly dominated every 6 or 7 months between 2004 and 2006, we conclude that slow slip events in the mid-crust frequently released accumulated tectonic stress in the Hsinchu area.
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
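The zero-inflated Beta density at the heart of the abstract is a point mass at zero mixed with a continuous Beta component; a minimal sketch (all parameter names are ours, and the mixed-model and Bayesian layers are omitted):

```python
import math

def beta_pdf(y, a, b):
    """Beta(a, b) density on (0, 1)."""
    logB = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(y) + (b - 1) * math.log(1 - y) - logB)

def zib_density(y, pi0, a, b):
    """Zero-inflated Beta: probability mass pi0 at y == 0 (the 'effective zero'
    Pc observations), otherwise (1 - pi0) times a Beta(a, b) density."""
    if y == 0:
        return pi0
    return (1.0 - pi0) * beta_pdf(y, a, b)
```

A quick sanity check is that the atom plus the numerically integrated continuous part sums to one.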
A likelihood reformulation method in non-normal random effects models.
Liu, Lei; Yu, Zhangsheng
2008-07-20
In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing by a standard normal density, we reformulate the likelihood conditional on the non-normal random effects as one conditional on normal random effects. The Gaussian quadrature technique, conveniently implemented in SAS PROC NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding estimates similar to those of the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from a Clayton copula. Simulations and applications are presented to illustrate our method. PMID:18038445
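The reformulation trick is an identity: the marginal likelihood over a non-normal random effect b with density g equals the expectation, under a standard normal density phi, of f(y|b)g(b)/phi(b). The sketch below illustrates it numerically for a Poisson response with a Gamma random effect (our choice of example, not the paper's), using a plain midpoint rule in place of the Gauss–Hermite nodes and weights that NLMIXED would supply; the closed-form check is the negative binomial marginal of the Poisson–Gamma pair.

```python
import math

def poisson_pmf(y, mu):
    return math.exp(-mu + y * math.log(mu) - math.lgamma(y + 1))

def gamma_pdf(b, k, theta):
    if b <= 0.0:
        return 0.0
    return math.exp((k - 1) * math.log(b) - b / theta
                    - math.lgamma(k) - k * math.log(theta))

def normal_pdf(b):
    return math.exp(-0.5 * b * b) / math.sqrt(2 * math.pi)

def marginal_likelihood(y, k, theta, lo=-8.0, hi=8.0, n=4000):
    """L(y) = E_phi[ f(y|b) g(b) / phi(b) ]: quadrature against the standard
    normal density phi, exactly the reweighting the paper describes."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        b = lo + (i + 0.5) * h
        g = gamma_pdf(b, k, theta)
        if g == 0.0:
            continue                       # the non-normal density vanishes here
        r = poisson_pmf(y, b) * g / normal_pdf(b)   # reweighted integrand
        total += r * normal_pdf(b) * h              # expectation under phi
    return total
```

In production one would replace the midpoint sum with adaptive Gauss–Hermite nodes; the point here is only that the reweighted expectation reproduces the true marginal.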
Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice
NASA Astrophysics Data System (ADS)
Chen, Haiyan; Zhang, Fuji
2013-08-01
In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with the given boundary condition, a fact previously observed numerically by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990)], 10.1051/jphys:0199000510110107700, but not proved.
Series Expansion Method for Asymmetrical Percolation Models with Two Connection Probabilities
NASA Astrophysics Data System (ADS)
Inui, Norio; Komatsu, Genichi; Kameoka, Koichi
2000-01-01
In order to study the solvability of the percolation model based on Guttmann and Enting's conjecture, the power series for the percolation probability in the form ∑_n H_n(q) p^n is examined. Although the power series can in principle be obtained by calculating the inverse of the transfer matrix, it is very hard to obtain the inverse of a matrix containing many complex polynomials as elements. We introduce a new series expansion technique that does not require inverting the transfer matrix. Using the new procedure, we derive the series of the asymmetrical percolation probability, which includes the isotropic percolation probability as a special case.
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥ 6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥ 6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥ 6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
Ye, Ming; Neuman, Shlomo P.; Meyer, Philip D.; Pohlmann, Karl
2005-12-24
Previous application of Maximum Likelihood Bayesian Model Averaging (MLBMA, Neuman [2002, 2003]) to alternative variogram models of log air permeability data in fractured tuff has demonstrated its effectiveness in quantifying conceptual model uncertainty and enhancing predictive capability [Ye et al., 2004]. A question remained: how best to ascribe prior probabilities to competing models. In this paper we examine the extent to which lead statistics of posterior log permeability predictions are sensitive to the prior probabilities of seven corresponding variogram models. We then explore the feasibility of quantifying prior model probabilities by (a) maximizing Shannon's entropy H [Shannon, 1948] subject to constraints reflecting a single analyst's (or group of analysts') prior perception of how plausible each alternative model (or group of models) is relative to the others, and (b) selecting a posteriori the most likely among such maxima corresponding to the alternative prior perceptions of various analysts or groups of analysts. Another way to select among alternative sets of prior model probabilities, which however is not guaranteed to yield optimum predictive performance (though it did so in our example) and would therefore not be our preferred option, is a min-max approach according to which one selects a priori the set corresponding to the smallest value of maximum entropy. Whereas maximizing H subject to the prior perception of a single analyst (or group) maximizes the potential for further information gain through conditioning, selecting the smallest among such maxima gives preference to the most informed prior perception among those of several analysts (or groups). We use the same variogram models and log permeability data as Ye et al. [2004] to demonstrate that our proposed approach yields the least amount of posterior entropy (residual uncertainty after conditioning) and enhances predictive model performance as compared to (a) the non-informative neutral case in
Modeling and simulation of normal and hemiparetic gait
NASA Astrophysics Data System (ADS)
Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni
2015-09-01
Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper is focused on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. The constraint forces of the Rayleigh dissipation function, which account for the effect on the tissues during gait, are included. Depending on the value of the factor present in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulation; anthropometric data for children can also be used, provided the corresponding anthropometric tables are consulted. Validation of these models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal gait, especially hemiparetic gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
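For reference, the Lagrange equations with a Rayleigh dissipation function take the standard form below; the damping factors c_i are the tunable quantities the abstract says distinguish normal from hemiparetic gait (the notation, including the generalized forces Q_i, is ours).

```latex
% Euler-Lagrange equations with a Rayleigh dissipation function R,
% for generalized joint coordinates q_i and generalized forces Q_i.
% The damping factors c_i are the parameters varied in the model:
% one setting reproduces normal gait, another the hemiparetic pattern.
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i}
  = Q_i - \frac{\partial R}{\partial \dot{q}_i},
\qquad
R = \tfrac{1}{2}\sum_i c_i\,\dot{q}_i^{\,2}
```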
ERIC Educational Resources Information Center
So, Tak-Shing Harry; Peng, Chao-Ying Joanne
This study compared the accuracy of predicting two-group membership obtained from K-means clustering with those derived from linear probability modeling, linear discriminant function, and logistic regression under various data properties. Multivariate normally distributed populations were simulated based on combinations of population proportions,…
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Glosup, J.G.; Axelrod, M.C.
1994-08-12
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method. The problem involves a probability model for underwater noise due to distant shipping.
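The comparison the report describes can be sketched as: fit a single Gaussian and a two-component Gaussian mixture via EM, then compare AIC = -2*logL + 2k. This is a minimal illustration, not the report's code; the median-split initialization in particular is an illustrative shortcut.

```python
import math, random

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def fit_single_gaussian(xs):
    """MLE fit of one Gaussian; returns (log-likelihood, n_parameters)."""
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    ll = sum(math.log(norm_pdf(x, mu, sd)) for x in xs)
    return ll, 2

def fit_mixture_em(xs, iters=100):
    """EM fit of a two-component Gaussian mixture; returns (ll, n_parameters)."""
    xs_sorted = sorted(xs)
    n, m = len(xs), len(xs) // 2
    mu1 = sum(xs_sorted[:m]) / m          # crude median-split initialization
    mu2 = sum(xs_sorted[m:]) / (n - m)
    sd1 = sd2 = (max(xs) - min(xs)) / 4
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        resp = []
        for x in xs:
            a = w * norm_pdf(x, mu1, sd1)
            b = (1 - w) * norm_pdf(x, mu2, sd2)
            resp.append(a / (a + b))
        # M-step: weighted parameter updates
        s1 = sum(resp)
        s2 = n - s1
        mu1 = sum(r * x for r, x in zip(resp, xs)) / s1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / s2
        sd1 = max(math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, xs)) / s1), 1e-6)
        sd2 = max(math.sqrt(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, xs)) / s2), 1e-6)
        w = s1 / n
    ll = sum(math.log(w * norm_pdf(x, mu1, sd1) + (1 - w) * norm_pdf(x, mu2, sd2))
             for x in xs)
    return ll, 5

def aic(ll, k):
    return -2 * ll + 2 * k
```

On clearly bimodal data the mixture's AIC comes out lower despite its larger parameter penalty, which is the behavior the selection criterion is meant to exhibit.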
Suitable models for face geometry normalization in facial expression recognition
NASA Astrophysics Data System (ADS)
Sadeghi, Hamid; Raie, Abolghasem A.
2015-01-01
Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches. Furthermore, it is a crucial challenge in appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of each facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes the geometric variability of facial images; consequently, appearance feature extraction methods can be accurately utilized to represent facial images. Thus, several expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of an effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement compared to several state-of-the-art methods in facial expression recognition. Moreover, utilizing models of facial expressions with larger mouth and eye regions gives higher accuracy, owing to the importance of these regions in facial expression.
A neuronal model of vowel normalization and representation.
Sussman, H M
1986-05-01
A speculative neuronal model for vowel normalization and representation is offered. The neurophysiological basis for the premise is the "combination-sensitive" neuron recently documented in the auditory cortex of the mustached bat (N. Suga, W. E. O'Neill, K. Kujirai, and T. Manabe, 1983, Journal of Neurophysiology, 49, 1573-1627). These neurons are specialized to respond to either precise frequency, amplitude, or time differentials between specific harmonic components of the pulse-echo pair comprising the biosonar signal of the bat. Such multiple frequency comparisons lie at the heart of human vowel perception and categorization. A representative vowel normalization algorithm is used to illustrate the operational principles of the neuronal model in accomplishing both normalization and categorization in early infancy. The neurological precursors to a phonemic vocalic system are described based on the neurobiological events characterizing regressive neurogenesis. PMID:3013360
NASA Technical Reports Server (NTRS)
Deiwert, G. S.; Yoshikawa, K. K.
1975-01-01
A semiclassical model proposed by Pearson and Hansen (1974) for computing collision-induced transition probabilities in diatomic molecules is tested by the direct-simulation Monte Carlo method. Specifically, this model is described by point centers of repulsion for collision dynamics, and the resulting classical trajectories are used in conjunction with the Schroedinger equation for a rigid-rotator harmonic oscillator to compute the rotational energy transition probabilities necessary to evaluate the rotation-translation exchange phenomena. It is assumed that a single, average energy spacing exists between the initial state and possible final states for a given collision.
The Probability of the Collapse of the Thermohaline Circulation in an Intermediate Complexity Model
NASA Astrophysics Data System (ADS)
Challenor, P.; Hankin, R.; Marsh, R.
2005-12-01
If the thermohaline circulation were to collapse we could see very rapid climate changes, with North West Europe becoming much cooler and widespread impacts across the globe. The risk of such an event has two aspects: the first is the impact of a collapse in the circulation and the second is the probability that it will happen. In this paper we look at the latter problem. In particular we investigate the probability that the thermohaline circulation will collapse by the end of the century. To calculate the probability of thermohaline collapse we use a Monte Carlo method. We simulate from a climate model with uncertain parameters and estimate the probability from the number of times the model collapses relative to the number of runs. We use an intermediate complexity climate model, C-GOLDSTEIN, which includes a 3-d ocean, an energy balance atmosphere and, in the version we use, a parameterised carbon cycle. Although C-GOLDSTEIN runs quickly for a climate model, it is still too slow to allow the thousands of runs needed for the Monte Carlo calculations. We therefore build an emulator of the model. An emulator is a statistical approximation to the full climate model that gives an estimate of the model output and an uncertainty measure. We use a Gaussian process as our emulator. A limited number of model runs are used to build the emulator, which is then used for the simulations. We produce estimates of the probability of the collapse of the thermohaline circulation corresponding to the indicative SRES emission scenarios: A1, A1FI, A1T, A2, B1 and B2.
Fitting the distribution of dry and wet spells with alternative probability models
NASA Astrophysics Data System (ADS)
Deni, Sayang Mohd; Jemain, Abdul Aziz
2009-06-01
The development of rainfall occurrence models is of great importance not only for data-generation purposes, but also in providing informative resources for future advancements in water-related sectors, such as water resource management and the hydrological and agricultural sectors. Various probability models have been introduced by previous researchers to describe sequences of dry (wet) days. Building on these, the present study proposes three types of mixture distributions, namely, the mixture of two log series distributions (LSD), the mixture of the log series and Poisson distributions (MLPD), and the mixture of the log series and geometric distributions (MLGD), as alternative probability models to describe the distribution of dry (wet) spells in daily rainfall events. In order to test the performance of the proposed new models against the nine existing probability models, 54 data sets which had been published by several authors were reanalyzed in this study. In addition, new data sets of daily observations from six selected rainfall stations in Peninsular Malaysia for the period 1975-2004 were used. In determining the best-fitting distribution for the observed dry (wet) spells, a chi-square goodness-of-fit test was used. The results revealed that two of the newly proposed models, MLGD and MLPD, showed a better fit, successfully fitting the distribution of dry and wet spells for more than half of the data sets. However, existing models, such as the truncated negative binomial and the modified LSD, were also among the successful probability models for representing sequences of dry (wet) days in daily rainfall occurrence.
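The fitting-and-testing step described above can be sketched for the simplest spell-length model, the geometric distribution (one of the baselines such mixtures are compared against); the pooling rule and the toy data below are our own illustration, not from the study.

```python
def fit_geometric(spells):
    """MLE for a geometric spell-length model P(L = k) = (1-p)**(k-1) * p, k >= 1."""
    return len(spells) / sum(spells)

def chi_square_gof(spells, p, max_len=10):
    """Chi-square statistic comparing observed spell-length counts with
    geometric expectations; lengths >= max_len are pooled into one tail
    cell so expected counts stay away from zero."""
    n = len(spells)
    observed = [0] * (max_len + 1)
    for length in spells:
        observed[min(length, max_len)] += 1
    stat = 0.0
    for k in range(1, max_len + 1):
        if k < max_len:
            pk = (1 - p) ** (k - 1) * p
        else:
            pk = (1 - p) ** (max_len - 1)   # tail cell: P(L >= max_len)
        expected = n * pk
        stat += (observed[k] - expected) ** 2 / expected
    return stat
```

On spell lengths that actually follow a geometric law the statistic stays near zero; comparing such statistics across candidate distributions is the selection step the abstract describes.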
NASA Astrophysics Data System (ADS)
Smith, L. A.
2007-12-01
We question the relevance of climate-model based Bayesian (or other) probability statements for decision support and impact assessment on spatial scales less than continental and temporal averages less than seasonal. Scientific assessment of higher resolution space and time scale information is urgently needed, given the commercial availability of "products" at high spatiotemporal resolution, their provision by nationally funded agencies for use both in industry decision making and governmental policy support, and their presentation to the public as matters of fact. Specifically we seek to establish necessary conditions for probability forecasts (projections conditioned on a model structure and a forcing scenario) to be taken seriously as reflecting the probability of future real-world events. We illustrate how risk management can profitably employ imperfect models of complicated chaotic systems, following NASA's study of near-Earth PHOs (Potentially Hazardous Objects). Our climate models will never be perfect; nevertheless, the space and time scales on which they provide decision-support relevant information are expected to improve with the models themselves. Our aim is to establish a set of baselines of internal consistency; these are merely necessary conditions (not sufficient conditions) that physics based state-of-the-art models are expected to pass if their output is to be judged decision-support relevant. Probabilistic Similarity is proposed as one goal which can be obtained even when our models are not empirically adequate. In short, probabilistic similarity requires that, given inputs similar to today's empirical observations and observational uncertainties, we expect future models to produce similar forecast distributions. Expert opinion on the space and time scales on which we might reasonably expect probabilistic similarity may prove of much greater utility than expert elicitation of uncertainty in parameter values in a model that is not empirically
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
Coupled escape probability for an asymmetric spherical case: Modeling optically thick comets
Gersch, Alan M.; A'Hearn, Michael F.
2014-05-20
We have adapted Coupled Escape Probability, a new exact method of solving radiative transfer problems, for use in asymmetrical spherical situations. Our model is intended specifically for use in modeling optically thick cometary comae, although not limited to such use. This method enables the accurate modeling of comets' spectra even in the potentially optically thick regions nearest the nucleus, such as those seen in Deep Impact observations of 9P/Tempel 1 and EPOXI observations of 103P/Hartley 2.
NASA Technical Reports Server (NTRS)
Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry
2009-01-01
In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and therefore, recommend use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
Results from probability-based, simplified, off-shore Louisiana CSEM hydrocarbon reservoir modeling
NASA Astrophysics Data System (ADS)
Stalnaker, J. L.; Tinley, M.; Gueho, B.
2009-12-01
Perhaps the biggest impediment to the commercial application of controlled-source electromagnetic (CSEM) geophysics to marine hydrocarbon exploration is the inefficiency of modeling and data inversion. If an understanding of the typical (in a statistical sense) geometrical and electrical nature of a reservoir can be attained, then it is possible to derive therefrom a simplified yet accurate model of the electromagnetic interactions that produce a measured marine CSEM signal, leading ultimately to efficient modeling and inversion. We have compiled geometric and resistivity measurements from roughly 100 known, producing off-shore Louisiana Gulf of Mexico reservoirs. Recognizing that most reservoirs could be recreated roughly from a sectioned hemi-ellipsoid, we devised a unified, compact reservoir geometry description. Each reservoir was initially fit to the ellipsoid by eye, though we plan in the future to perform a more rigorous least-squares fit. We created, using kernel density estimation, initial probabilistic descriptions of reservoir parameter distributions, with the understanding that additional information would not fundamentally alter our results, but rather increase accuracy. From the probabilistic description, we designed an approximate model consisting of orthogonally oriented current segments distributed across the ellipsoid--enough to define the shape, yet few enough to be resolved during inversion. The moment and length of the currents are mapped to the geometry and resistivity of the ellipsoid. The probability density functions (pdfs) derived from the reservoir statistics serve as a workbench. We first use the pdfs in a Monte Carlo simulation designed to assess the detectability of off-shore Louisiana reservoirs using magnitude versus offset (MVO) anomalies. From the pdfs, many reservoir instances are generated (using rejection sampling) and each normalized MVO response is calculated. The response strength is summarized by numerically computing MVO power, and that
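The two statistical steps named above, a kernel density estimate of a reservoir parameter and rejection sampling of reservoir instances from it, can be sketched as follows for one dimension; the toy data, bandwidth, and envelope height are illustrative, and the real workflow is multivariate.

```python
import math, random

def gaussian_kde(data, bandwidth):
    """Return a one-dimensional Gaussian kernel density estimate as a callable."""
    n = len(data)
    def pdf(x):
        s = sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2) for d in data)
        return s / (n * bandwidth * math.sqrt(2 * math.pi))
    return pdf

def rejection_sample(pdf, lo, hi, pdf_max, rng, n_samples):
    """Draw n_samples from pdf on [lo, hi] by rejection against a uniform
    envelope of height pdf_max (which must bound pdf on the interval)."""
    out = []
    while len(out) < n_samples:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, pdf_max) <= pdf(x):
            out.append(x)
    return out
```

Each accepted draw plays the role of one simulated reservoir instance in the Monte Carlo detectability study; because the kernels are symmetric, the sample mean should track the mean of the compiled measurements.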
Durazzo, Timothy C; Korecka, Magdalena; Trojanowski, John Q; Weiner, Michael W; O' Hara, Ruth; Ashford, John W; Shaw, Leslie M
2016-07-25
Neurodegenerative diseases and chronic cigarette smoking are associated with increased cerebral oxidative stress (OxS). Elevated F2-isoprostane levels in biological fluid are a recognized marker of OxS. This study assessed the association of active cigarette smoking with F2-isoprostane concentrations in cognitively normal elders (CN) and in those with mild cognitive impairment (MCI) and probable Alzheimer's disease (AD). Smoking and non-smoking CN (n = 83), MCI (n = 164), and probable AD (n = 101) were compared on cerebrospinal fluid (CSF) iPF2α-III and 8,12-iso-iPF2α-VI F2-isoprostane concentrations. Associations between F2-isoprostane levels and hippocampal volumes were also evaluated. In CN and AD, smokers had higher iPF2α-III concentrations; overall, smoking AD showed the highest iPF2α-III concentration across groups. Smoking and non-smoking MCI did not differ on iPF2α-III concentration. No group differences were apparent in 8,12-iso-iPF2α-VI concentration, but across AD, higher 8,12-iso-iPF2α-VI levels were related to smaller left and total hippocampal volumes. Results indicate that active cigarette smoking in CN and probable AD is associated with increased central nervous system OxS. Further investigation of factors mediating/moderating the absence of smoking effects on CSF F2-isoprostane levels in MCI is warranted. In AD, increasing magnitude of OxS appeared to be related to smaller hippocampal volume. This study contributes additional novel information to the mounting body of evidence that cigarette smoking is associated with adverse effects on the human central nervous system across the lifespan. PMID:27472882
A model selection algorithm for a posteriori probability estimation with neural networks.
Arribas, Juan Ignacio; Cid-Sueiro, Jesús
2005-07-01
This paper proposes a novel algorithm to jointly determine the structure and the parameters of a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, so called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP) whose outputs can be understood as probabilities although results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes. PMID:16121722
Application of damping mechanism model and stacking fault probability in Fe-Mn alloy
Huang, S.K.; Wen, Y.H.; Li, N.; Teng, J.; Ding, S.; Xu, Y.G.
2008-06-15
In this paper, the damping mechanism model of Fe-Mn alloy was analyzed using dislocation theory. Moreover, as an important parameter in Fe-Mn based alloys, the effect of stacking fault probability on the damping capacity of Fe-19.35Mn alloy after deep-cooling or tensile deformation was also studied. The damping capacity was measured using a reversal torsion pendulum. The stacking fault probability of γ-austenite and ε-martensite was determined by means of X-ray diffraction (XRD) profile analysis. The microstructure was observed using a scanning electron microscope (SEM). The results indicated that as the strain amplitude increased above a critical value, the damping capacity of Fe-19.35Mn alloy increased rapidly, which could be explained using the breakaway model of Shockley partial dislocations. Deep-cooling and suitable tensile deformation could improve the damping capacity owing to the increase in the stacking fault probability of Fe-19.35Mn alloy.
Modelling detection probabilities to evaluate management and control tools for an invasive species
Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.
2010-01-01
For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days' duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice those of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By
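The quadratic size effect with a sex offset described above is naturally expressed as a logistic model on the logit scale; the sketch below is purely illustrative, with invented coefficients rather than the study's estimates.

```python
import math

def detection_probability(size, is_female,
                          b0=-4.0, b_size=0.4, b_size2=-0.02, b_female=-0.5):
    """Per-occasion detection probability with a quadratic size term and an
    additive sex offset on the logit scale. All coefficients are invented
    for illustration; the study estimated such effects from mark-recapture data."""
    logit = b0 + b_size * size + b_size2 * size ** 2
    if is_female:
        logit += b_female
    return 1.0 / (1.0 + math.exp(-logit))
```

With these made-up coefficients detection peaks at mid sizes (size = -b_size / (2 * b_size2) = 10 in these units) and females sit below males at every size, reproducing the qualitative pattern reported, including overall probabilities well under 0.2.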
Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.
2009-01-01
Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J
2015-12-01
The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, season, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by season, treatment type, precipitation and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone, since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs). PMID:26563233
ERIC Educational Resources Information Center
Calvert, Carol Elaine
2014-01-01
This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…
Blind Students' Learning of Probability through the Use of a Tactile Model
ERIC Educational Resources Information Center
Vita, Aida Carvalho; Kataoka, Verônica Yumi
2014-01-01
The objective of this paper is to discuss how blind students learn basic concepts of probability using the tactile model proposed by Vita (2012). Among the activities were part of the teaching sequence "Jefferson's Random Walk", in which students built a tree diagram (using plastic trays, foam cards, and toys), and pictograms in 3D…
Logit-normal mixed model for Indian Monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-03-01
Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
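As a sketch of the model structure described here (not the authors' fitted model), the logit-normal mixed model adds a normally distributed random station intercept to a logistic regression; the covariates and coefficient values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_stations, n_days = 20, 100
beta = np.array([0.5, -1.2, 0.8])          # intercept + two fixed effects
sigma_u = 0.6                              # random-intercept std deviation

# Station-level random intercepts u_j ~ N(0, sigma_u^2), as in a GLMM.
u = rng.normal(0.0, sigma_u, n_stations)
X = rng.normal(size=(n_stations, n_days, 2))   # hypothetical covariates

# Linear predictor and logit-normal success probability per station-day.
eta = beta[0] + X @ beta[1:] + u[:, None]
p = 1.0 / (1.0 + np.exp(-eta))             # P(extreme-rainfall indicator = 1)
y = rng.binomial(1, p)                     # simulated binary outcomes
```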
Schmidt, W.; Niemeyer, J. C.; Ciaraldi-Schoolmann, F.; Roepke, F. K.; Hillebrandt, W.
2010-02-20
The delayed detonation model describes the observational properties of the majority of Type Ia supernovae very well. Using numerical data from a three-dimensional deflagration model for Type Ia supernovae, the intermittency of the turbulent velocity field and its implications on the probability of a deflagration-to-detonation transition (DDT) are investigated. From structure functions of the turbulent velocity fluctuations, we determine intermittency parameters based on the log-normal and the log-Poisson models. The bulk of turbulence in the ash regions appears to be less intermittent than predicted by the standard log-normal model and the She-Leveque model. On the other hand, the analysis of the turbulent velocity fluctuations in the vicinity of the flame front by Roepke suggests a much higher probability of large velocity fluctuations on the grid scale in comparison to the log-normal intermittency model. Following Pan et al., we computed probability density functions for a DDT for the different distributions. The determination of the total number of regions at the flame surface in which DDTs can be triggered enables us to estimate the total number of events. Assuming that a DDT can occur in the stirred flame regime, as proposed by Woosley et al., the log-normal model would imply a delayed detonation between 0.7 and 0.8 s after the beginning of the deflagration phase for the multi-spot ignition scenario used in the simulation. However, the probability drops to virtually zero if a DDT is further constrained by the requirement that the turbulent velocity fluctuations reach about 500 km s⁻¹. Under this condition, delayed detonations are only possible if the distribution of the velocity fluctuations is not log-normal. It follows from our calculations that the distribution obtained by Roepke allows for multiple DDTs around 0.8 s after ignition at a transition density close to 1 × 10⁷ g cm⁻³.
Dong, Jing; Mahmassani, Hani S.
2011-01-01
This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that breakdown duration can be characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of each breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
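A minimal sketch of the simulation logic described here, with assumed functional forms: a logistic curve for breakdown probability versus flow rate, and a Weibull duration draw standing in for the paper's hazard model. All parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def breakdown_probability(q, q50=2000.0, scale=150.0):
    """Assumed logistic form: P(breakdown) rises with flow rate q (veh/h/lane)."""
    return 1.0 / (1.0 + np.exp(-(q - q50) / scale))

def sample_breakdown_duration(shape=1.5, scale_min=8.0, size=1):
    """Assumed Weibull duration model (minutes), a stand-in for the hazard model."""
    return scale_min * rng.weibull(shape, size)

# One simulation step: draw a breakdown at the current flow rate, and if it
# occurs, draw its duration.
q = 2150.0
duration = sample_breakdown_duration() if rng.random() < breakdown_probability(q) else None
```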
Internal Energy Exchange and Dissociation Probability in DSMC Molecular Collision Models
NASA Astrophysics Data System (ADS)
Chabut, E.
2008-12-01
The present work is related to the gas—gas collision models used in DSMC. It especially concerns the relaxation rates and the reactivity of diatomic molecules (but most of the models can be extended to polyatomic molecules). The Larsen-Borgnakke [1] model is often used in DSMC to describe how energies are redistributed during collisions. The literature provides considerable information about the links between the macroscopic collision number, the fraction of inelastic collisions, and the probability for a molecule to exchange energy in a specific mode during a collision. We then present the main relations capable of reproducing macroscopic relaxation rates. During collisions, the energy brought by the collision partners can be sufficient to generate a chemical reaction. The problem is first to determine an energetic condition for a possible reaction (which energy to consider and which threshold to compare it with) and second to calculate the reaction probabilities. Experimental results that highlight particular phenomena (vibration—dissociation coupling, for example) are often used to build a qualitative basis for the models; quantitatively, probabilities are determined such that they reproduce the macroscopic experimental rates reflected by the modified Arrhenius law. Some of the chemical models used in DSMC are presented, including the "TCE" [2-3], "EAE" [3], "ME" [4] and "VFD" [5] models.
Multistate modeling of habitat dynamics: Factors affecting Florida scrub transition probabilities
Breininger, D.R.; Nichols, J.D.; Duncan, B.W.; Stolen, Eric D.; Carter, G.M.; Hunt, D.K.; Drese, J.H.
2010-01-01
Many ecosystems are influenced by disturbances that create specific successional states and habitat structures that species need to persist. Estimating transition probabilities between habitat states and modeling the factors that influence such transitions have many applications for investigating and managing disturbance-prone ecosystems. We identify the correspondence between multistate capture-recapture models and Markov models of habitat dynamics. We exploit this correspondence by fitting and comparing competing models of different ecological covariates affecting habitat transition probabilities in Florida scrub and flatwoods, a habitat important to many unique plants and animals. We subdivided a large scrub and flatwoods ecosystem along central Florida's Atlantic coast into 10-ha grid cells, which approximated average territory size of the threatened Florida Scrub-Jay (Aphelocoma coerulescens), a management indicator species. We used 1.0-m resolution aerial imagery for 1994, 1999, and 2004 to classify grid cells into four habitat quality states that were directly related to Florida Scrub-Jay source-sink dynamics and management decision making. Results showed that static site features related to fire propagation (vegetation type, edges) and temporally varying disturbances (fires, mechanical cutting) best explained transition probabilities. Results indicated that much of the scrub and flatwoods ecosystem was resistant to moving from a degraded state to a desired state without mechanical cutting, an expensive restoration tool. We used habitat models parameterized with the estimated transition probabilities to investigate the consequences of alternative management scenarios on future habitat dynamics. We recommend this multistate modeling approach as being broadly applicable for studying ecosystem, land cover, or habitat dynamics. The approach provides maximum-likelihood estimates of transition parameters, including precision measures, and can be used to assess
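The Markov habitat-dynamics view used in this study can be illustrated by projecting a distribution of grid-cell states through a transition matrix; the four states mirror the habitat-quality classes described above, but the matrix values below are hypothetical, not the estimated Florida scrub probabilities:

```python
import numpy as np

# Hypothetical 5-year transition matrix among four habitat-quality states
# (rows: current state, columns: next state); values are illustrative only.
P = np.array([
    [0.80, 0.15, 0.05, 0.00],   # optimal
    [0.10, 0.70, 0.15, 0.05],   # slightly degraded
    [0.02, 0.08, 0.75, 0.15],   # degraded
    [0.00, 0.02, 0.08, 0.90],   # severely degraded
])

state = np.array([0.25, 0.25, 0.25, 0.25])  # initial distribution of cells

# Project the habitat composition over three 5-year transitions (15 years);
# management scenarios would be compared by modifying P and re-projecting.
for _ in range(3):
    state = state @ P
```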
Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis.
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong; Ginzburg, Lev; Berleant, Daniel J.; Ferson, Scott; Hajagos, Janos; Nelsen, Roger B.
2004-10-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng
2013-01-01
New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of, and current lack of, multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and using a bivariate definition of dust storms, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. PMID:22616629
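A copula-based joint return period of the kind described here can be sketched as follows; the Gumbel copula family and all parameter values are assumptions for illustration, not the fitted dust-storm model:

```python
import math

def gumbel_copula(u: float, v: float, theta: float) -> float:
    """Gumbel copula C(u, v); theta >= 1 controls upper-tail dependence
    (theta = 1 recovers independence, C(u, v) = u*v)."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period_and(u, v, theta, mu=1.0):
    """'AND' return period (in units of the mean inter-event time mu) of BOTH
    variables (e.g., max wind speed and duration) exceeding their marginal
    quantiles u and v: T = mu / P(U > u, V > v)."""
    p_joint_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_joint_exceed

# Example: marginal non-exceedance probability 0.9 for both variables
# (univariate return period 10), with moderate dependence.
T_and = joint_return_period_and(0.9, 0.9, theta=2.0)
```

With positive dependence, the joint "AND" return period falls between the univariate value (10) and the independence value (100), which is the point the abstract makes about bivariate versus univariate return periods.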
Impact of stray charge on interconnect wire via probability model of double-dot system
NASA Astrophysics Data System (ADS)
Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang
2016-02-01
The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. Compared with previous stray charge analyses utilizing the ICHA or full-basis calculation, the probability model substantially simplifies the required computation. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire in which a stray charge will cause the target cell to switch unsuccessfully. The failure is exhibited by two new states dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).
A Probability Model of Decompression Sickness at 4.3 Psia after Exercise Prebreathe
NASA Technical Reports Server (NTRS)
Conkin, Johnny; Gernhardt, Michael L.; Powell, Michael R.; Pollock, Neal
2004-01-01
Exercise prebreathe (PB) can reduce the risk of decompression sickness (DCS) on ascent to 4.3 psia when performed at the proper intensity and duration. Data are from seven tests. PB times ranged from 90 to 150 min. High-intensity, short-duration dual-cycle ergometry was done during the PB, either alone or combined with intermittent low-intensity exercise or periods of rest for the remainder of the PB. Nonambulating men and women performed light exercise from a semi-recumbent position at 4.3 psia for four hrs. The Research Model with age tested the hypothesis that the probability of DCS increases with advancing age. The NASA Model with gender tested the hypothesis that the probability of DCS is higher for women. Accounting for exercise and rest during PB with a variable half-time compartment for computed tissue N2 pressure advances our probability modeling of hypobaric DCS. Both models show that a small increase in exercise intensity during PB reduces the risk of DCS, and a larger increase in exercise intensity dramatically reduces risk. These models support the hypothesis that aerobic fitness is an important consideration for the risk of hypobaric DCS when exercise is performed during the PB.
Modelling convection-enhanced delivery in normal and oedematous brain.
Haar, P J; Chen, Z-J; Fatouros, P P; Gillies, G T; Corwin, F D; Broaddus, W C
2014-03-01
Convection-enhanced delivery (CED) could have clinical applications in the delivery of neuroprotective agents in brain injury states, such as ischaemic stroke. For CED to be safe and effective, a physician must have accurate knowledge of how concentration distributions will be affected by catheter location, flow rate and other similar parameters. In most clinical applications of CED, brain microstructures will be altered by pathological injury processes. Ischaemic stroke and other acute brain injury states are complicated by formation of cytotoxic oedema, in which cellular swelling decreases the fractional volume of the extracellular space (ECS). Such changes would be expected to significantly alter the distribution of neuroprotective agents delivered by CED. Quantitative characterization of these changes will help confirm this prediction and assist in efforts to model the distribution of therapeutic agents. Three-dimensional computational models based on a Nodal Point Integration (NPI) scheme were developed to model infusions in normal brain and brain with cytotoxic oedema. These models were compared to experimental data in which CED was studied in normal brain and in a middle cerebral artery (MCA) occlusion model of cytotoxic oedema. The computational models predicted concentration distributions with reasonable accuracy. PMID:24446800
Syntactic error modeling and scoring normalization in speech recognition
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex
1991-01-01
The objective was to develop the speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Research was performed in the following areas: (1) syntactic error modeling; (2) score normalization; and (3) phoneme error modeling. The study into the types of errors that a reader makes will provide the basis for creating tests which will approximate the use of the system in the real world. NASA-Johnson will develop this technology into a 'Literacy Tutor' in order to bring innovative concepts to the task of teaching adults to read.
Gray, David R
2010-12-01
As global trade increases so too does the probability of introduction of alien species to new locations. Estimating the probability of an alien species introduction and establishment following introduction is a necessary step in risk estimation (probability of an event times the consequences, in the currency of choice, of the event should it occur); risk estimation is a valuable tool for reducing the risk of biological invasion with limited resources. The Asian gypsy moth, Lymantria dispar (L.), is a pest species whose consequence of introduction and establishment in North America and New Zealand warrants over US$2 million per year in surveillance expenditure. This work describes the development of a two-dimensional phenology model (GLS-2d) that simulates insect development from source to destination and estimates: (1) the probability of introduction from the proportion of the source population that would achieve the next developmental stage at the destination and (2) the probability of establishment from the proportion of the introduced population that survives until a stable life cycle is reached at the destination. The effect of shipping schedule on the probabilities of introduction and establishment was examined by varying the departure date from 1 January to 25 December by weekly increments. The effect of port efficiency was examined by varying the length of time that invasion vectors (shipping containers and ship) were available for infection. The application of GLS-2d is demonstrated using three common marine trade routes (to Auckland, New Zealand, from Kobe, Japan, and to Vancouver, Canada, from Kobe and from Vladivostok, Russia). PMID:21265459
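The risk definition given in this abstract (probability of the event times its consequence, in the currency of choice) reduces to a one-line calculation once GLS-2d's two probabilities are in hand; all numbers below are invented for illustration:

```python
# Risk estimate as defined above: probability of the event times its
# consequence in the currency of choice. Values are illustrative only,
# not outputs of the GLS-2d model.
p_introduction = 0.02       # proportion of source population reaching next stage
p_establishment = 0.10      # proportion of introductions reaching a stable cycle
consequence_usd = 50e6      # hypothetical damage if establishment occurs

risk_usd = p_introduction * p_establishment * consequence_usd
```

Comparing such risk figures across departure dates or ports is what makes the shipping-schedule and port-efficiency analyses above actionable with limited surveillance resources.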
Determining the probability of arsenic in groundwater using a parsimonious model.
Lee, Jin-Jing; Jang, Cheng-Shin; Liu, Chen-Wuing; Liang, Ching-Ping; Wang, Sheng-Wei
2009-09-01
Spatial distributions of groundwater quality are commonly heterogeneous, varying with depth and location, which is important in assessing health and ecological risks. Owing to time and cost constraints, it is not practical or economical to measure arsenic everywhere. A predictive model is necessary to estimate the distribution of a specific pollutant in groundwater. This study developed a logistic regression (LR) model to predict residential well water quality in the Lanyang plain. Six hydrochemical parameters, pH, NO3(-)-N, NO2(-)-N, NH4(+)-N, Fe, and Mn, and a regional variable (binary type) were used to evaluate the probability of arsenic concentrations exceeding 10 microg/L in groundwater. The developed parsimonious LR model indicates that four parameters in the Lanyang plain aquifer (pH, NH4+, Fe(aq), and a component to account for regional heterogeneity) can accurately predict the probability of arsenic concentrations > or = 10 microg/L in groundwater. These parameters provide an explanation for release of arsenic by reductive dissolution of As-rich FeOOH in NH4+ containing groundwater. A comparison of LR and indicator kriging (IK) shows similar results in modeling the distributions of arsenic. LR can be applied to assess the probability of groundwater arsenic at sampled sites without arsenic concentration data a priori. However, arsenic sampling is still needed in arsenic-assessment stages in other areas, and the need for long-term monitoring and maintenance is not precluded. PMID:19764232
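The parsimonious LR model's output takes the standard logistic form; the coefficients below are placeholders for illustration only, not the fitted Lanyang-plain estimates:

```python
import math

def arsenic_exceedance_probability(ph, nh4, fe, region):
    """Logistic-regression form of the kind used by a parsimonious LR model.
    Coefficients are invented placeholders, NOT the fitted Lanyang values."""
    b0, b_ph, b_nh4, b_fe, b_region = -12.0, 1.2, 0.9, 0.5, 1.5
    eta = b0 + b_ph * ph + b_nh4 * nh4 + b_fe * fe + b_region * region
    return 1.0 / (1.0 + math.exp(-eta))   # P(arsenic >= threshold)

p = arsenic_exceedance_probability(ph=7.8, nh4=1.2, fe=2.0, region=1)
```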
NASA Astrophysics Data System (ADS)
Denzler, Stefan M.; Dacorogna, Michel M.; Muller, Ulrich A.; McNeil, Alexander J.
2005-05-01
Credit risk models like Moody's KMV are now well established in the market and give bond managers reliable default probabilities for individual firms. Until now it has been hard to relate those probabilities to the actual credit spreads observed on the market for corporate bonds. Inspired by the existence of scaling laws in financial markets that deviate from Gaussian behavior (Dacorogna et al. 2001; Di Matteo et al. 2005), we develop a model that quantitatively links those default probabilities to credit spreads (market prices). The main input quantities to this study are merely industry yield data of different times to maturity and expected default frequencies (EDFs) of Moody's KMV. The empirical results of this paper clearly indicate that the model can be used to calculate approximate credit spreads (market prices) from EDFs, independent of the time to maturity and the industry sector under consideration. Moreover, the model is effective in an out-of-sample setting: it produces consistent results on the European bond market, where data are scarce, and can be adequately used to approximate credit spreads at the corporate level.
Weighted least square estimates of the parameters of a model of survivorship probabilities.
Mitra, S
1987-06-01
"A weighted regression has been fitted to estimate the parameters of a model involving functions of survivorship probability and age. Earlier, the parameters were estimated by the method of ordinary least squares and the results were very encouraging. However, a multiple regression equation passing through the origin has been found appropriate for the present model from statistical consideration. Fortunately, this method, while methodologically more sophisticated, has a slight edge over the former as evidenced by the respective measures of reproducibility in the model and actual life tables selected for this study." PMID:12281212
Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change
NASA Astrophysics Data System (ADS)
Field, R.; Constantine, P.; Boslough, M.
2011-12-01
We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. A SSM is different than a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, the realization of a large number of independent model outputs from a SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, to assess these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for years 1990-1999 based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and make various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model. We
NASA Astrophysics Data System (ADS)
Smith, Leonard A.
2010-05-01
This contribution concerns "deep" or "second-order" uncertainty, such as the uncertainty in our probability forecasts themselves. It asks the question: "Is it rational to take (or offer) bets using model-based probabilities as if they were objective probabilities?" If not, what alternative approaches for determining odds, perhaps non-probabilistic odds, might prove useful in practice, given the fact we know our models are imperfect? We consider the case where the aim is to provide sustainable odds: not to produce a profit but merely to rationally expect to break even in the long run. In other words, to run a quantified risk of ruin that is relatively small. Thus the cooperative insurance schemes of coastal villages provide a more appropriate parallel than a casino. A "better" probability forecast would lead to lower premiums charged and less volatile fluctuations in the cash reserves of the village. Note that the Bayesian paradigm does not constrain one to interpret model distributions as subjective probabilities, unless one believes the model to be empirically adequate for the task at hand. In geophysics, this is rarely the case. When a probability forecast is interpreted as the objective probability of an event, the odds on that event can be easily computed as one divided by the probability of the event, and one need not favour taking either side of the wager. (Here we are using "odds-for" not "odds-to", the difference being whether or not the stake is returned; odds of one to one are equivalent to odds of two for one.) The critical question is how to compute sustainable odds based on information from imperfect models. We suggest that this breaks the symmetry between the odds-on an event and the odds-against it. While a probability distribution can always be translated into odds, interpreting the odds on a set of events might result in "implied-probabilities" that sum to more than one. And/or the set of odds may be incomplete, not covering all events. We ask
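The "odds-for" convention and the implied-probability point made here can be made concrete; the quoted odds below are invented:

```python
def odds_for(p: float) -> float:
    """'Odds-for' as defined above: the stake is returned, so
    odds = 1 / probability (a probability of 0.5 gives odds of 2 for 1)."""
    return 1.0 / p

# Implied probabilities recovered from quoted odds-for on a two-outcome event.
# Quotes that build in a margin make the implied probabilities sum to MORE
# than one, which is exactly the asymmetry the abstract describes.
quoted = {"event": 1.8, "complement": 2.1}      # hypothetical quoted odds-for
implied = {k: 1.0 / v for k, v in quoted.items()}
total_implied = sum(implied.values())           # exceeds 1 here
```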
NASA Technical Reports Server (NTRS)
Williford, W. O.; Hsieh, P.; Carter, M. C.
1974-01-01
A Bayesian analysis of the two discrete probability models, the negative binomial and the modified negative binomial distributions, which have been used to describe thunderstorm activity at Cape Kennedy, Florida, is presented. The Bayesian approach with beta prior distributions is compared to the classical approach which uses a moment method of estimation or a maximum-likelihood method. The accuracy and simplicity of the Bayesian method is demonstrated.
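The conjugate beta-prior update central to such an analysis has a simple closed form; the sketch below uses a generic negative binomial likelihood with known r, not necessarily the exact Cape Kennedy parameterization:

```python
# Conjugate beta update for the success parameter p of a negative binomial
# with known r, after observing failure counts x_1..x_n:
#   prior      p ~ Beta(a, b)
#   posterior  p ~ Beta(a + n*r, b + sum(x))
# This is a generic sketch; the thunderstorm data and priors are not shown here.
def beta_posterior(a, b, r, counts):
    n = len(counts)
    return a + n * r, b + sum(counts)

# Flat Beta(1, 1) prior, r = 2, and four hypothetical observed counts.
a_post, b_post = beta_posterior(a=1.0, b=1.0, r=2, counts=[0, 3, 1, 5])
posterior_mean = a_post / (a_post + b_post)
```

Unlike the moment or maximum-likelihood estimates mentioned above, the posterior here carries full uncertainty about p in closed form, which is the simplicity the abstract credits to the Bayesian approach.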
Experimental and numerical models of basement-detached normal faults
Islam, Q.T.; Lapointe, P.R.; Withjack, M.O.
1991-03-01
The ability to infer more accurately the type, timing, and location of folds and faults that develop during the evolution of large-scale geologic structures can help explorationists to interpret subsurface structures, generate new prospects, and better assess their risk factors. One type of structural setting that is important in many exploration plays is the basement-detached normal fault. Key questions regarding such structures are (1) what structures form, (2) where do the structures form, (3) when do the structures form, and (4) why do the structures form? Clay and finite element models were used to examine the influence of fault shape on the development of folds and faults in the hanging wall of basement-detached normal faults. The use of two independent methods helps to overcome each method's inherent limitations, providing additional corroboration for conclusions drawn from the modeling. Three fault geometries were modeled: a fault plane dipping uniformly at 45°; a fault plane that steepens from 30° to 45°; and a fault plane that shallows with depth from 45° to 30°. Results from both modeling approaches show that (1) antithetic faults form at fault bends where fault dip increases, (2) faults become progressively younger towards the footwall, (3) the zone(s) of high stress and faulting are stationary relative to the footwall, (4) anticlines with no closure form where faults shallow, and (5) closed anticlines form only above the point where faults steepen.
How to model a negligible probability under the WTO sanitary and phytosanitary agreement?
Powell, Mark R
2013-06-01
Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. PMID:22985254
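The two candidate distributions at issue have easily checked means, which is the arithmetic behind the abstract's conclusion that the bias is modest:

```python
# Mean of the two candidate models for a "negligible" probability:
# uniform on [0, 1e-6] versus triangular with min 0, mode 0, max 1e-6.
u_max = 1e-6
mean_uniform = u_max / 2.0                    # 5.0e-7, as the Panel observed
mean_triangular = (0.0 + 0.0 + u_max) / 3.0   # (min + mode + max) / 3 ~ 3.3e-7
```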
Meta-analysis of two-arm studies: Modeling the intervention effect from survival probabilities.
Combescure, C; Courvoisier, D S; Haller, G; Perneger, T V
2016-04-01
Pooling the hazard ratios is not always feasible in meta-analyses of two-arm survival studies, because the measure of the intervention effect is not systematically reported. An alternative approach proposed by Moodie et al. is to use the survival probabilities of the included studies, all collected at a single point in time: the intervention effect is then summarised as the pooled ratio of the logarithm of survival probabilities (which is an estimator of the hazard ratios when hazards are proportional). In this article, we propose a generalization of this method. By using survival probabilities at several points in time, this generalization allows a flexible modeling of the intervention over time. The method is applicable to partially proportional hazards models, with the advantage of not requiring the specification of the baseline survival. As in Moodie et al.'s method, the study-level factors modifying the survival functions can be ignored as long as they do not modify the intervention effect. The procedures of estimation are presented for fixed and random effects models. Two illustrative examples are presented. PMID:23267027
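The estimator described above follows from the proportional-hazards identity S_trt(t) = S_ctrl(t)^HR, so the ratio of log survival probabilities at any single time point recovers the hazard ratio. A small numerical check (the survival values are assumed for illustration):

```python
import math

# Under proportional hazards, S_trt(t) = S_ctrl(t) ** HR, so the ratio of
# log survival probabilities at any time t recovers the hazard ratio.
hr_true = 0.7              # assumed intervention effect (hazard ratio)
s_ctrl = 0.60              # control-arm survival probability at some time t
s_trt = s_ctrl ** hr_true  # implied intervention-arm survival at the same t

hr_est = math.log(s_trt) / math.log(s_ctrl)
print(round(hr_est, 6))  # 0.7
```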
Kurugol, Sila; Freiman, Moti; Afacan, Onur; Perez-Rossello, Jeannette M; Callahan, Michael J; Warfield, Simon K
2016-08-01
Quantitative diffusion-weighted MR imaging (DW-MRI) of the body enables characterization of the tissue microenvironment by measuring variations in the mobility of water molecules. The diffusion signal decay model parameters are increasingly used to evaluate various diseases of abdominal organs such as the liver and spleen. However, previous signal decay models (i.e., mono-exponential, bi-exponential intra-voxel incoherent motion (IVIM) and stretched exponential models) only provide insight into the average of the distribution of the signal decay rather than explicitly describe the entire range of diffusion scales. In this work, we propose a probability distribution model of incoherent motion that uses a mixture of Gamma distributions to fully characterize the multi-scale nature of diffusion within a voxel. Further, we improve the robustness of the distribution parameter estimates by integrating spatial homogeneity prior into the probability distribution model of incoherent motion (SPIM) and by using the fusion bootstrap solver (FBM) to estimate the model parameters. We evaluated the improvement in quantitative DW-MRI analysis achieved with the SPIM model in terms of accuracy, precision and reproducibility of parameter estimation in both simulated data and in 68 abdominal in-vivo DW-MRIs. Our results show that the SPIM model not only substantially reduced parameter estimation errors by up to 26%; it also significantly improved the robustness of the parameter estimates (paired Student's t-test, p < 0.0001) by reducing the coefficient of variation (CV) of estimated parameters compared to those produced by previous models. In addition, the SPIM model improves the parameter estimates reproducibility for both intra- (up to 47%) and inter-session (up to 30%) estimates compared to those generated by previous models. Thus, the SPIM model has the potential to improve accuracy, precision and robustness of quantitative abdominal DW-MRI analysis for clinical applications. PMID
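The abstract does not give the SPIM signal equation, but a standard consequence of Gamma-distributed diffusivities is that the voxel signal is the Laplace transform of the Gamma density, and a mixture gives a weighted sum. A hedged sketch (the component weights, shapes, and scales below are invented for illustration, not taken from the paper):

```python
# Signal decay implied by a mixture of Gamma-distributed diffusivities.
# For D ~ Gamma(shape=k, scale=theta), E[exp(-b*D)] = (1 + theta*b) ** (-k)
# (the Laplace transform of the Gamma density); a mixture is a weighted sum.
def gamma_mixture_signal(b, weights, shapes, scales):
    return sum(w * (1.0 + th * b) ** (-k)
               for w, k, th in zip(weights, shapes, scales))

# Illustrative two-component voxel (all parameter values are assumed):
s0 = gamma_mixture_signal(0.0, [0.7, 0.3], [2.0, 5.0], [1e-3, 2e-2])
print(s0)  # ~1.0 at b = 0, since the weights sum to one
```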
Normalized Texture Motifs and Their Application to Statistical Object Modeling
Newsam, S D
2004-03-09
A fundamental challenge in applying texture features to statistical object modeling is recognizing differently oriented spatial patterns. Rows of moored boats in remotely sensed images of harbors should be consistently labeled regardless of the orientation of the harbors, or of the boats within the harbors. This is not straightforward to do, however, when using anisotropic texture features to characterize the spatial patterns. Here we propose an elegant solution, termed normalized texture motifs, that uses a parametric statistical model to characterize the patterns regardless of their orientation. The models are learned in an unsupervised fashion from arbitrarily oriented training samples. The proposed approach is general enough to be used with a large category of orientation-selective texture features.
Neurophysiological model of the normal and abnormal human pupil
NASA Technical Reports Server (NTRS)
Krenz, W.; Robin, M.; Barez, S.; Stark, L.
1985-01-01
Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select the pupil condition (normal or an abnormality), a specific site along the neurological pathway (retina, hypothalamus, etc.), drug class input (barbiturate, narcotic, etc.), stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.
Physical models for the normal YORP and diurnal Yarkovsky effects
NASA Astrophysics Data System (ADS)
Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.
2016-06-01
We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152
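For context, the stabilized weights mentioned above are conventionally the cumulative product, over time points, of treatment probabilities from a covariate-free (numerator) model divided by those from a covariate-adjusted (denominator) model; either model may be logistic regression or an ensemble learner. A sketch for one subject, with assumed predicted probabilities:

```python
# Stabilized inverse probability weight for one subject over follow-up:
# the cumulative product, across time points, of P(A_t | treatment history)
# divided by P(A_t | treatment history and covariate history).
# The probabilities below are assumed model predictions, for illustration.
numerator_probs = [0.85, 0.80, 0.75]
denominator_probs = [0.90, 0.70, 0.60]

sw = 1.0
for num, den in zip(numerator_probs, denominator_probs):
    sw *= num / den

print(round(sw, 4))  # 1.3492
```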
NASA Astrophysics Data System (ADS)
Zhao, Tongtiegang; Wang, Q. J.; Bennett, James C.; Robertson, David E.; Shao, Quanxi; Zhao, Jianshi
2015-09-01
Uncertainty is inherent in streamflow forecasts and is an important determinant of the utility of forecasts for water resources management. However, predictions by deterministic models provide only single values without uncertainty attached. This study presents a method for using a Bayesian joint probability (BJP) model to post-process deterministic streamflow forecasts by quantifying predictive uncertainty. The BJP model comprises a log-sinh transformation that normalises hydrological data, and a bi-variate Gaussian distribution that characterises the dependence relationship. The parameters of the transformation and the distribution are estimated through Bayesian inference with a Markov chain Monte Carlo (MCMC) algorithm. The BJP model produces, from a raw deterministic forecast, an ensemble of values to represent forecast uncertainty. The model is applied to raw deterministic forecasts of inflows to the Three Gorges Reservoir in China as a case study. The heteroscedasticity and non-Gaussianity of forecast uncertainty are effectively addressed. The ensemble spread accounts for the forecast uncertainty and leads to considerable improvement in terms of the continuous ranked probability score. The forecasts become less accurate as lead time increases, and the ensemble spread provides reliable information on the forecast uncertainty. We conclude that the BJP model is a useful tool to quantify predictive uncertainty in post-processing deterministic streamflow forecasts.
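The log-sinh transformation referenced above is commonly written z = log(sinh(a + b·y))/b. Both this parameterisation and the parameter values below are assumptions for illustration (in the BJP model, a and b are estimated from the data):

```python
import math

# Log-sinh transformation used to normalise skewed hydrological data,
# and its inverse for back-transforming ensemble members.
# a and b are fitted in practice; the defaults here are assumed values.
def log_sinh(y, a=0.1, b=0.5):
    return math.log(math.sinh(a + b * y)) / b

def inv_log_sinh(z, a=0.1, b=0.5):
    return (math.asinh(math.exp(b * z)) - a) / b

y = 3.7
assert abs(inv_log_sinh(log_sinh(y)) - y) < 1e-9  # round-trip check
```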
Huang, Yangxin; Chen, Jiaqing; Yin, Ping
2014-07-17
It is common practice to analyze longitudinal data arising in medical studies using the various mixed-effects models in the literature. However, the following issues may stand out in longitudinal data analysis: (i) in clinical practice, the profile of each subject's response from a longitudinal study may follow a "broken stick"-like trajectory, indicating multiple phases of increase, decline, and/or stability in the response. Such multiple phases (with changepoints) may be an important indicator to help quantify treatment effects and improve the management of patient care. For estimating changepoints, the various mixed-effects models become a challenge due to the complicated structure of the model formulations; (ii) an assumption of a homogeneous population may unrealistically obscure important features of between-subject and within-subject variations; (iii) a normality assumption for model errors may not always give robust and reliable results, in particular if the data exhibit non-normality; and (iv) the response may be missing, and the missingness may be non-ignorable. In the literature, there has been considerable interest in accommodating heterogeneity, non-normality, or missingness in such models. However, there has been relatively little work concerning all of these features simultaneously. There is a need to fill this gap, as longitudinal data often have these characteristics. In this article, our objectives are to study the simultaneous impact of these data features by developing a Bayesian mixture modeling approach based on Finite Mixture of Changepoint (piecewise) Mixed-Effects (FMCME) models with skew distributions, allowing estimates of both model parameters and class membership probabilities at the population and individual levels. Simulation studies are conducted to assess the performance of the proposed method, and an AIDS clinical data example is analyzed to demonstrate the proposed methodologies and to compare the modeling results of potential mixture models.
Normality Index of Ventricular Contraction Based on a Statistical Model from FADS
Jiménez-Ángeles, Luis; Valdés-Cristerna, Raquel; Vallejo, Enrique; Bialostozky, David; Medina-Bañuelos, Verónica
2013-01-01
Radionuclide-based imaging is an alternative to evaluate ventricular function and synchrony and may be used as a tool for the identification of patients that could benefit from cardiac resynchronization therapy (CRT). In a previous work, we used Factor Analysis of Dynamic Structures (FADS) to analyze the contribution and spatial distribution of the 3 most significant factors (3-MSF) present in a dynamic series of equilibrium radionuclide angiography images. In this work, a probability density function model of the 3-MSF extracted from FADS for a control group is presented; an index, based on the likelihood between the control group's contraction model and a sample of normal subjects, is also proposed. This normality index was compared with those computed for two cardiopathic populations satisfying the clinical criteria to be considered candidates for CRT. The proposed normality index provides a measure, consistent with the phase analysis currently used in the clinical environment, that is sensitive enough to show contraction differences between normal and abnormal groups, which suggests that it can be related to the degree of severity of ventricular contraction dyssynchrony and therefore shows promise as a follow-up procedure for patients under CRT. PMID:23634177
Mathematical modeling of normal pharyngeal bolus transport: a preliminary study.
Chang, M W; Rosendall, B; Finlayson, B A
1998-07-01
Dysphagia (difficulty in swallowing) is a common clinical symptom associated with many diseases, such as stroke, multiple sclerosis, neuromuscular diseases, and cancer. Its complications include choking, aspiration, malnutrition, cachexia, and dehydration. The goal in dysphagia management is to provide adequate nutrition and hydration while minimizing the risk of choking and aspiration. It is important to advance the individual toward oral feeding in a timely manner to enhance the recovery of swallowing function and preserve the quality of life. Current clinical assessments of dysphagia are limited in providing adequate guidelines for oral feeding. Mathematical modeling of the fluid dynamics of pharyngeal bolus transport provides a unique opportunity for studying the physiology and pathophysiology of swallowing. Finite element analysis (FEA) is a special case of computational fluid dynamics (CFD). In CFD, the flow of a fluid in a space is modeled by covering the space with a grid and predicting how the fluid moves from grid point to grid point. FEA is capable of solving problems with complex geometries and free surfaces. A preliminary pharyngeal model has been constructed using FEA. This model incorporates literature-reported, normal, anatomical data with time-dependent pharyngeal/upper esophageal sphincter (UES) wall motion obtained from videofluorography (VFG). This time-dependent wall motion can be implemented as a moving boundary condition in the model. Clinical kinematic data can be digitized from VFG studies to construct and test the mathematical model. The preliminary model demonstrates the feasibility of modeling pharyngeal bolus transport, which, to our knowledge, has not been attempted before. This model also addresses the need and the potential for CFD in understanding the physiology and pathophysiology of the pharyngeal phase of swallowing. Improvements of the model are underway. Combining the model with individualized clinical data should potentially
A spatial model of bird abundance as adjusted for detection probability
Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.
2009-01-01
Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
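The second, detection-adjustment step can be sketched naively: divide the predicted count by the detection probability and the area effectively sampled. All numbers below are invented for illustration:

```python
# Naive version of the second step: convert a predicted count into a density
# by correcting for imperfect detection and the area effectively sampled.
predicted_count = 12.0    # birds predicted at a survey station (assumed)
p_detect = 0.6            # estimated detection probability (assumed)
effective_area_ha = 3.14  # area effectively sampled, in hectares (assumed)

density = predicted_count / (p_detect * effective_area_ha)
print(round(density, 2))  # birds per hectare
```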
A cellular automata model of traffic flow with variable probability of randomization
NASA Astrophysics Data System (ADS)
Zheng, Wei-Fan; Zhang, Ji-Ye
2015-05-01
Research on the stochastic behavior of traffic flow is important to understand the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of vehicles before him, and his decision-making process is related to the interactional potential. Compared with the traditional cellular automata model, the modeling is more suitable for the driver’s random decision-making process based on the vehicle and traffic situations in front of him in actual traffic. From the improved model, the fundamental diagram (flow-density relationship) is obtained, and the detailed high-density traffic phenomenon is reproduced through numerical simulation. Project supported by the National Natural Science Foundation of China (Grant Nos. 11172247, 61273021, 61373009, and 61100118).
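A minimal single-vehicle update in the Nagel-Schreckenberg style conveys the idea: the randomization probability becomes a function of the gap to the leading vehicle, standing in for the interactional potential (the functional form below is an assumption for illustration, not the paper's):

```python
# One Nagel-Schreckenberg-style update for a single vehicle, with the
# randomization probability depending on the gap to the car ahead.
def randomization_prob(gap, p_min=0.1, p_max=0.5, scale=5.0):
    # Smaller gap -> stronger interaction -> higher braking probability.
    return p_min + (p_max - p_min) * max(0.0, 1.0 - gap / scale)

def update_velocity(v, gap, v_max, rand01):
    v = min(v + 1, v_max)  # acceleration
    v = min(v, gap)        # deterministic braking (no collisions)
    if rand01 < randomization_prob(gap):
        v = max(v - 1, 0)  # random braking
    return v

print(update_velocity(3, 2, 5, rand01=0.99))  # gap-limited: velocity becomes 2
```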
Heglund, P.J.; Nichols, J.D.; Hines, J.E.; Sauer, J.; Fallon, J.; Fallon, F.
2001-01-01
Point counts are a controversial sampling method for bird populations because the counts are not censuses, and the proportion of birds missed during counting generally is not estimated. We applied a double-observer approach to estimate detection rates of birds from point counts in Maryland, USA, and tested whether detection rates differed between point counts conducted in field habitats and those conducted in wooded habitats. We conducted 2 analyses. The first analysis was based on 4 clusters of counts (routes) surveyed by a single pair of observers. A series of models was developed with differing assumptions about sources of variation in detection probabilities and fit using program SURVIV. The most appropriate model was selected using Akaike's Information Criterion. The second analysis was based on 13 routes (7 woods and 6 field routes) surveyed by various observers, in which average detection rates were estimated by route and compared using a t-test. In both analyses, little evidence existed for variation in detection probabilities in relation to habitat. Double-observer methods provide a reasonable means of estimating detection probabilities and testing critical assumptions needed for the analysis of point counts.
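One payoff of the double-observer design is that, once individual detection rates are estimated, the probability that at least one of two independent observers detects a bird follows directly:

```python
# Probability that a bird present at the point is detected by at least one of
# two independent observers with individual detection probabilities p1 and p2.
def combined_detection(p1, p2):
    return 1.0 - (1.0 - p1) * (1.0 - p2)

print(round(combined_detection(0.7, 0.7), 4))  # 0.91
```

This is a textbook identity for independent observers, not an estimator quoted from the paper; the study's models additionally let detection rates vary by observer role and habitat.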
Corrections to vibrational transition probabilities calculated from a three-dimensional model.
NASA Technical Reports Server (NTRS)
Stallcop, J. R.
1972-01-01
Corrections to the collision-induced vibration transition probability calculated by Hansen and Pearson from a three-dimensional semiclassical model are examined. These corrections come from the retention of higher order terms in the expansion of the interaction potential and the use of the actual value of the deflection angle in the calculation of the transition probability. It is found that the contribution to the transition cross section from previously neglected potential terms can be significant for short range potentials and for the large relative collision velocities encountered at high temperatures. The correction to the transition cross section obtained from the use of actual deflection angles will not be appreciable unless the change in the rotational quantum number is large.
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-01-01
Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob. PMID:27041353
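Qprob's exact densities are not specified in the abstract; the following is only an illustrative sketch of feature-based density scoring, using per-feature normal error densities whose spreads would in practice be learned from training data:

```python
import math

# Illustrative sketch (not Qprob's actual densities): score a model by summing
# the log-density of each feature's error under a per-feature normal
# distribution; smaller errors yield a higher (less negative) score.
def normal_logpdf(x, mu, sigma):
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

def quality_score(feature_errors, sigmas):
    return sum(normal_logpdf(e, 0.0, s) for e, s in zip(feature_errors, sigmas))

good = quality_score([0.01, 0.02], [0.1, 0.1])
bad = quality_score([0.30, 0.25], [0.1, 0.1])
assert good > bad  # smaller feature errors score higher
```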
NASA Astrophysics Data System (ADS)
Ellsworth, W. L.; Matthews, M. V.; Simpson, R. W.
2001-12-01
A statistical mechanical description of elastic rebound is used to study earthquake interaction and stress transfer effects in a point process model of earthquakes. The model is a Brownian Relaxation Oscillator (BRO) in which a random walk (standard Brownian motion) is added to a steady tectonic loading to produce a stochastic load state process. Rupture occurs in this model when the load state reaches a critical value. The load state is a random variable and may be described at any point in time by its probability density. Load state evolves toward the failure threshold due to tectonic loading (drift), and diffuses due to Brownian motion (noise) according to a diffusion equation. The Brownian perturbation process formally represents the sum total of all factors, aside from tectonic loading, that govern rupture. Physically, these factors may include effects of earthquakes external to the source, aseismic loading, interaction effects within the source itself, healing, pore pressure evolution, etc. After a sufficiently long time, load state always evolves to a steady state probability density that is independent of the initial condition and completely described by the drift rate and noise scale. Earthquake interaction and stress transfer effects are modeled by an instantaneous change in the load state. A negative step reduces the probability of failure, while a positive step may either immediately trigger rupture or increase the failure probability (hazard). When the load state is far from failure, the effects are well-approximated by "clock advances" that shift the unperturbed hazard down or up, as appropriate for the sign of the step. However, when the load state is advanced in the earthquake cycle, the response is a sharp, temporally localized decrease or increase in hazard. Recovery of the hazard is characteristically "Omori like" (~1/t), which can be understood in terms of equilibrium thermodynamical considerations since state evolution is diffusion with
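The load-state dynamics described above amount to drifted Brownian motion with an absorbing failure threshold. A minimal simulation sketch (the drift rate, noise scale, and threshold are assumed values, not the paper's):

```python
import random

# Minimal Brownian Relaxation Oscillator: load state drifts toward a failure
# threshold under tectonic loading plus Brownian noise; rupture resets it.
def simulate_rupture_times(drift=1.0, noise=0.3, threshold=1.0,
                           dt=1e-3, t_max=50.0, seed=42):
    rng = random.Random(seed)
    x, t, ruptures = 0.0, 0.0, []
    while t < t_max:
        # Euler-Maruyama step: deterministic loading plus Brownian increment.
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= threshold:
            ruptures.append(t)
            x = 0.0  # instantaneous stress drop back to the base state
    return ruptures

times = simulate_rupture_times()
print(len(times))  # roughly t_max * drift / threshold events, noise permitting
```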
Min, Yong-Ki; Lee, Dong-Yun; Park, Youn-Soo; Moon, Young-Wan; Lim, Seung-Jae; Lee, Young-Kyun; Choi, DooSeok
2015-01-01
Background: Recently, a Korean fracture risk assessment tool (FRAX) model has become available, but the large prospective cohort studies needed to validate the model are still lacking, and there has been little effort to evaluate its usefulness. This study evaluated the clinical usefulness of the Korean FRAX model, a fracture risk assessment tool developed by the World Health Organization, in Korea. Methods: In 405 postmenopausal women and 139 men with a proximal femoral fracture, 10-year predicted fracture probabilities calculated by the Korean FRAX model (a country-specific model) were compared with the probabilities calculated with the FRAX model for Japan, which has a similar ethnic background (a surrogate model). Results: The 10-year probabilities of major osteoporotic and hip fractures calculated by the Korean model were significantly lower than those calculated by the Japanese model in women and men. The fracture probabilities calculated by each model increased significantly with age in both sexes. In patients aged 70 or older, however, there was a significant difference between the two models. In addition, the Korean model led to lower probabilities for major osteoporotic fracture and hip fracture in women when BMD was excluded from the model than when it was included. Conclusions: The 10-year fracture probabilities calculated with FRAX models might differ between country-specific and surrogate models, and caution is needed when applying a surrogate model to a new population. A large prospective study is warranted to validate the country-specific Korean model in the general population. PMID:26389086
NASA Astrophysics Data System (ADS)
Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan
2016-05-01
Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, which include associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating method, a novel approach to obtain coupling parameters in the gluing sub-regions of a multi-scale FE model, and upon Probability Box (P-box) theory, which can provide lower and upper bounds for the purpose of quantifying and transmitting the uncertainty of structural parameters. The structural health monitoring data of Guanhe Bridge, a long-span composite cable-stayed bridge, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy, as the overlap ratio index of each modal frequency is over 89% without the average absolute value of relative errors, and the CDF of the normal distribution coincides well with the measured frequencies of Guanhe Bridge. The validated multi-scale FE model may be further used in structural damage prognosis and safety prognosis.
Modeling the probability of arsenic in groundwater in New England as a tool for exposure assessment
Ayotte, J.D.; Nolan, B.T.; Nuckols, J.R.; Cantor, K.P.; Robinson, G.R., Jr.; Baris, D.; Hayes, L.; Karagas, M.; Bress, W.; Silverman, D.T.; Lubin, J.H.
2006-01-01
We developed a process-based model to predict the probability of arsenic exceeding 5 μg/L in drinking water wells in New England bedrock aquifers. The model is being used for exposure assessment in an epidemiologic study of bladder cancer. One important study hypothesis that may explain increased bladder cancer risk is elevated concentrations of inorganic arsenic in drinking water. In eastern New England, 20-30% of private wells exceed the arsenic drinking water standard of 10 micrograms per liter. Our predictive model significantly improves the understanding of factors associated with arsenic contamination in New England. Specific rock types, high arsenic concentrations in stream sediments, geochemical factors related to areas of Pleistocene marine inundation and proximity to intrusive granitic plutons, and hydrologic and landscape variables relating to groundwater residence time increase the probability of arsenic occurrence in groundwater. Previous studies suggest that arsenic in bedrock groundwater may be partly from past arsenical pesticide use. Variables representing historic agricultural inputs do not improve the model, indicating that this source does not significantly contribute to current arsenic concentrations. Due to the complexity of the fractured bedrock aquifers in the region, well depth and related variables also are not significant predictors. © 2006 American Chemical Society.
Probability-based damage detection using model updating with efficient uncertainty propagation
NASA Astrophysics Data System (ADS)
Xu, Yalan; Qian, Yu; Chen, Jianjun; Song, Gangbing
2015-08-01
Model updating methods have received increasing attention in the damage detection of structures based on measured modal parameters. In this article, a probability-based damage detection procedure is presented, in which the random factor method for a non-homogeneous random field is developed and used as the forward propagation to analytically evaluate covariance matrices at each iteration step of stochastic model updating. An improved optimization algorithm is introduced to guarantee convergence and reduce the computational effort, in which the design variables are restricted to a search region by region truncation at each iteration step. The developed algorithm is illustrated by a simulated 25-bar planar truss structure, and the results have been compared and verified with those obtained from Monte Carlo simulation. In order to assess the influence of uncertainty sources on the results of model updating and damage detection of structures, a comparative study is also given under different cases of uncertainty, that is, structural uncertainty only, measurement uncertainty only, and a combination of the two. The simulation results show the proposed method can perform well in stochastic model updating and probability-based damage detection of structures with less computational effort.
A formalism to generate probability distributions for performance-assessment modeling
Kaplan, P.G.
1990-12-31
A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs.
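When a parameter has bounded support and only a mean and a variance are available, a beta distribution is a natural candidate, as in the abstract above. The sketch below uses simple moment matching rather than the paper's entropy-maximization formalism, and the interval and moments are hypothetical, not the Yucca Mountain parameter:

```python
def beta_params_from_moments(mean, var, lo=0.0, hi=1.0):
    """Moment-match a beta distribution on [lo, hi].

    Illustrates turning sparse constraints (a mean and a variance on a
    bounded interval) into a probability distribution; the paper's actual
    formalism instead maximizes Shannon entropy subject to constraints.
    """
    m = (mean - lo) / (hi - lo)          # rescale mean to [0, 1]
    v = var / (hi - lo) ** 2             # rescale variance
    if not (0.0 < m < 1.0) or v <= 0.0 or v >= m * (1.0 - m):
        raise ValueError("moments incompatible with a beta distribution")
    common = m * (1.0 - m) / v - 1.0
    alpha = m * common
    beta = (1.0 - m) * common
    return alpha, beta

# Hypothetical constraints on a model parameter bounded on [0, 10]:
a, b = beta_params_from_moments(mean=4.0, var=1.0, lo=0.0, hi=10.0)
```

The resulting beta(alpha, beta) on the rescaled interval reproduces the specified mean and variance exactly.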
Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
NASA Astrophysics Data System (ADS)
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models using the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF by maximum likelihood owing to computational problems, although CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, this may be impossible in the other direction. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
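The NIG fitting and Kolmogorov-Smirnov check described above can be sketched in Python with SciPy (the study itself used R; the synthetic `prices` below are stand-in data, not real exchange rates):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for daily closing prices (the study used 2008-2012 data).
rng = np.random.default_rng(0)
prices = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=100.0, scale=5.0,
                                size=500, random_state=rng)

# Maximum-likelihood fit of the NIG distribution (SciPy's parameterization:
# shape a controls tail heaviness, b controls asymmetry, plus loc and scale).
a, b, loc, scale = stats.norminvgauss.fit(prices)

# Goodness of fit assessed, as in the paper, with a Kolmogorov-Smirnov test.
ks = stats.kstest(prices, 'norminvgauss', args=(a, b, loc, scale))
```

As the abstract notes for JPY/CHF, maximum-likelihood estimation can fail numerically on some series; the fit above succeeds only because the data were drawn from the model.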
Empirical probability model of cold plasma environment in the Jovian magnetosphere
NASA Astrophysics Data System (ADS)
Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete
2015-04-01
We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although there exist many sophisticated radiation models treating energetic plasma (e.g. JOSE, GIRE, or Salammbo), only a limited number of simple models have been utilized for the cold plasma environment. By extending the existing cold plasma models toward the probability domain, we can predict the extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced by the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) or Bagenal and Delamere, 2011 (BD11). These models are scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model toward the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the specific percentile (e.g. 0.5 for the mean model). All the parameters in the model are described in mathematical forms; therefore the needed computational resources are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission, namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions toward the latitudinal direction are also included in the model to support the high latitude orbit of the JUICE spacecraft.
Modelling the probability of ionospheric irregularity occurrence over African low latitude region
NASA Astrophysics Data System (ADS)
Mungufeni, Patrick; Jurua, Edward; Bosco Habarulema, John; Anguma Katrini, Simon
2015-06-01
This study presents models of the geomagnetically quiet time probability of occurrence of ionospheric irregularities over the African low latitude region. GNSS-derived ionospheric total electron content data from Mbarara, Uganda (0.60°S, 30.74°E geographic, 10.22°S magnetic) and Libreville, Gabon (0.35°N, 9.68°E geographic, 8.05°S magnetic) during the period 2001-2012 were used. First, we established the rate of change of total electron content index (ROTI) value associated with background ionospheric irregularity over the region. This was done by analysing GNSS carrier phases at the L-band frequencies L1 and L2 with the aim of identifying cycle slip events associated with ionospheric irregularities. We identified a total of 699 cycle slip events at the two stations. The median ROTI value at the epochs of the cycle slip events was 0.54 TECU/min. The probability of occurrence of ionospheric irregularities associated with ROTI ≥ 0.5 TECU/min was then modelled by fitting cubic B-splines to the data. The model captured the diurnal, seasonal, and solar flux dependence patterns of the probability of occurrence of ionospheric irregularities. The model developed over Mbarara was validated with data over Mt. Baker, Uganda (0.35°N, 29.90°E geographic, 9.25°S magnetic), Kigali, Rwanda (1.94°S, 30.09°E geographic, 11.62°S magnetic), and Kampala, Uganda (0.34°N, 32.60°E geographic, 9.29°S magnetic). For the periods validated at Mt. Baker (approximately 137.64 km to the northwest), Kigali (approximately 162.42 km to the southwest), and Kampala (approximately 237.61 km to the northeast), the percentages of errors (the difference between the observed and the modelled probability of occurrence of ionospheric irregularity) less than 0.05 were 97.3, 89.4, and 81.3, respectively.
The basic reproduction number and the probability of extinction for a dynamic epidemic model.
Neal, Peter
2012-03-01
We consider the spread of an epidemic through a population divided into n sub-populations, in which individuals move between populations according to a Markov transition matrix Σ and infectives can only make infectious contacts with members of their current population. Expressions for the basic reproduction number, R₀, and the probability of extinction of the epidemic are derived. It is shown that, in contrast to contact distribution models, the distribution of the infectious period affects both the basic reproduction number and the probability of extinction of the epidemic in the limit as the total population size N→∞. The interactions between the infectious period distribution and the transition matrix Σ mean that it is not possible to draw general conclusions about the effects on R₀ and the probability of extinction. However, it is shown that for n=2, the basic reproduction number, R₀, is maximised by a constant length infectious period and is decreasing in ς, the speed of movement between the two populations. PMID:22269870
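The extinction probability in a branching-process approximation of an epidemic can be illustrated in the simplest single-population case, where it is the smallest non-negative root of q = G(q) for the offspring generating function G. This toy sketch assumes Poisson(R₀) offspring and ignores the paper's movement matrix Σ and infectious-period effects:

```python
import math

def extinction_probability(r0, tol=1e-12):
    """Smallest root of q = exp(R0*(q-1)): the extinction probability of a
    single-type branching process with Poisson(R0) offspring.

    For R0 <= 1 extinction is certain (q = 1); for R0 > 1 the extinction
    probability is the unique root in (0, 1). Fixed-point iteration from
    q = 0 converges monotonically to the smallest root.
    """
    q = 0.0
    while True:
        q_next = math.exp(r0 * (q - 1.0))
        if abs(q_next - q) < tol:
            return q_next
        q = q_next

q_sub = extinction_probability(0.8)    # R0 < 1: certain extinction
q_super = extinction_probability(2.0)  # R0 > 1: extinction prob. below 1
```

The full model replaces the scalar offspring law with a multi-type process whose offspring distribution depends on both Σ and the infectious-period distribution, which is why those quantities enter R₀ and the extinction probability.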
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
2015-07-01
We have been developing a quantitative diagnostic method for liver fibrosis using ultrasound images. In our previous study, we proposed a multi-Rayleigh model to express the probability density function of the echo amplitude from fibrotic liver, and a probability imaging method for tissue characteristics on the basis of the multi-Rayleigh model. In evaluations using the multi-Rayleigh model, we found that the modeling error of the multi-Rayleigh model was increased by the effect of nonspeckle signals. In this paper, we proposed a method of removing nonspeckle signals using the modeling error of the multi-Rayleigh model and evaluated the probability image of tissue characteristics after removing the nonspeckle signals. By removing nonspeckle signals, the modeling error of the multi-Rayleigh model was decreased, and a correct probability image of tissue characteristics was obtained. We conclude that the removal of nonspeckle signals is important for evaluating liver fibrosis quantitatively.
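A multi-Rayleigh amplitude model of the kind described above is a finite mixture of Rayleigh densities. A minimal sketch follows; the component weights and scales are illustrative, not values fitted to liver data:

```python
import numpy as np

def multi_rayleigh_pdf(x, weights, scales):
    """Probability density of echo amplitude as a weighted mixture of
    Rayleigh components (a multi-Rayleigh model in the paper's sense).

    weights must sum to 1; scales are the Rayleigh parameters of the
    components (e.g. normal vs. fibrotic tissue).
    """
    x = np.asarray(x, dtype=float)
    pdf = np.zeros_like(x)
    for w, s in zip(weights, scales):
        pdf += w * (x / s**2) * np.exp(-x**2 / (2 * s**2))
    return pdf

# Two-component example; the density should integrate to ~1 over [0, inf).
x = np.linspace(0, 50, 20001)
p = multi_rayleigh_pdf(x, weights=[0.7, 0.3], scales=[1.0, 4.0])
```

The modeling error the paper exploits is the residual between such a fitted mixture and the empirical amplitude histogram; nonspeckle signals inflate that residual.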
Spike Train Probability Models for Stimulus-Driven Leaky Integrate-and-Fire Neurons
Koyama, Shinsuke; Kass, Robert E.
2009-01-01
Mathematical models of neurons are widely used to improve understanding of neuronal spiking behavior. These models can produce artificial spike trains that resemble actual spike train data in important ways, but they are not very easy to apply to the analysis of spike train data. Instead, statistical methods based on point process models of spike trains provide a wide range of data-analytical techniques. Two simplified point process models have been introduced in the literature: the time-rescaled renewal process (TRRP) and the multiplicative inhomogeneous Markov interval (m-IMI) model. In this letter we investigate the extent to which the TRRP and m-IMI models are able to fit spike trains produced by stimulus-driven leaky integrate-and-fire (LIF) neurons. With a constant stimulus, the LIF spike train is a renewal process, and the m-IMI and TRRP models will describe accurately the LIF spike train variability. With a time-varying stimulus, the probability of spiking under all three of these models depends on both the experimental clock time relative to the stimulus and the time since the previous spike, but it does so differently for the LIF, m-IMI, and TRRP models. We assessed the distance between the LIF model and each of the two empirical models in the presence of a time-varying stimulus. We found that while lack of fit of a Poisson model to LIF spike train data can be evident even in small samples, the m-IMI and TRRP models tend to fit well, and much larger samples are required before there is statistical evidence of lack of fit of the m-IMI or TRRP models. We also found that when the mean of the stimulus varies across time, the m-IMI model provides a better fit to the LIF data than the TRRP, and when the variance of the stimulus varies across time, the TRRP provides the better fit. PMID:18336078
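The stimulus-driven LIF neuron underlying this comparison can be sketched with a simple Euler integration; with a constant stimulus the resulting spike train is a renewal process with deterministic inter-spike intervals. The parameter values below are illustrative, not those of the letter:

```python
import numpy as np

def lif_spike_times(stimulus, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Euler simulation of a leaky integrate-and-fire neuron driven by a
    (possibly time-varying) stimulus current.

    dv/dt = (-v + I(t)) / tau; a spike is emitted and v reset to v_reset
    when v crosses the threshold v_th.
    """
    v = v_reset
    spikes = []
    for i, current in enumerate(stimulus):
        v += dt * (-v + current) / tau
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
    return spikes

# Constant suprathreshold stimulus -> renewal process: equal inter-spike
# intervals (a noiseless special case; the letter's LIF is stochastic).
spikes = lif_spike_times(np.full(10000, 2.0))
isis = np.diff(spikes)
```

With a time-varying stimulus, spiking probability depends on both clock time and time since the last spike, which is exactly the regime in which the m-IMI and TRRP models differ.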
NASA Astrophysics Data System (ADS)
Koshinchanov, Georgy; Dimitrov, Dobri
2008-11-01
The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood warning procedures, etc. Those estimates are usually statistical estimates of the intensity of precipitation realized for a certain period of time (e.g. 5, 10 min, etc.) with different return periods (e.g. 20, 100 years, etc.). The traditional approach to evaluating the mentioned precipitation intensities is to process the pluviometer records and fit probability distributions to samples of intensities valid for certain locations or regions. Those estimates further become part of the state regulations to be used for various economic activities. Two problems occur with the mentioned approach: 1. Due to various factors the climate conditions have changed, and the precipitation intensity estimates need regular updates; 2. As far as the extremes of the probability distribution are of particular importance for practice, the methodology of the distribution fitting needs specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intense rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intense rainfalls and producing the relevant statistical estimates are: the method of maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration period; as above, but with separate modeling of the probability distribution for the middle and high probability quantiles; a method similar to the first one, but with an intensity threshold of 0.36 mm/min; another method proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over a territory, improved and adapted by S. Gerasimov for Bulgaria; the next method considers only the
Insight into Vent Opening Probability in Volcanic Calderas in the Light of a Sill Intrusion Model
NASA Astrophysics Data System (ADS)
Giudicepietro, Flora; Macedonio, G.; D'Auria, L.; Martini, M.
2016-05-01
The aim of this paper is to discuss a novel approach to providing insight into the probability of vent opening in calderas, using a dynamic model of sill intrusion. The evolution of the stress field is the main factor that controls the vent opening processes in volcanic calderas. On the basis of previous studies, we think that the intrusion of sills is one of the most common mechanisms governing caldera unrest. Therefore, we have investigated the spatial and temporal evolution of the stress field due to the emplacement of a sill at shallow depth to provide insight into vent opening probability. We carried out several numerical experiments using a physical model to assess the role of the magma properties (viscosity), host rock characteristics (Young's modulus and thickness), and dynamics of the intrusion process (mass flow rate) in controlling the stress field. Our experiments highlight that high magma viscosity produces larger stress values, while low magma viscosity leads to lower stresses and favors the radial spreading of the sill. Likewise, a high host-rock Young's modulus gives high stress intensity, whereas low values of Young's modulus produce a dramatic reduction of the stress associated with the intrusive process. The maximum intensity of tensile stress is concentrated at the front of the sill and propagates radially with it over time. In our simulations, we find that maximum values of tensile stress occur in ring-shaped areas with radii ranging between 350 m and 2500 m from the injection point, depending on the model parameters. The probability of vent opening is higher in these areas.
Faceted spurs at normal fault scarps: Insights from numerical modeling
NASA Astrophysics Data System (ADS)
Petit, C.; Gunnell, Y.; Gonga-Saholiariliva, N.; Meyer, B.; SéGuinot, J.
2009-05-01
We present a combined surface processes and tectonic model which allows us to determine the climatic and tectonic parameters that control the development of faceted spurs at normal fault scarps. Sensitivity tests to climatic parameter values are performed. For a given precipitation rate, when hillslope diffusion is high and channel bedrock is highly resistant to erosion, the scarp is smooth and undissected. When, instead, the bedrock is easily eroded and diffusion is limited, numerous channels develop and the scarp becomes deeply incised. Between these two end-member states, diffusion and incision compete to produce a range of scarp morphologies, including faceted spurs. The sensitivity tests allow us to determine a dimensionless ratio of erosion, f, for which faceted spurs can develop. This study demonstrates a strong dependence of facet slope angle on throw rate for throw rates between 0.4 and 0.7 mm/a. Facet height is also shown to be a linear function of fault throw rate. Model performance is tested on the Wasatch Fault, Utah, using topographic, geologic, and seismologic data. A Monte Carlo inversion on the topography of a portion of the Weber segment shows that the 5 Ma long development of this scarp has been dominated by a low effective precipitation rate (˜1.1 m/a) and a moderate diffusion coefficient (0.13 m2/a). Results demonstrate the ability of our model to estimate normal fault throw rates from the height of triangular facets and to retrieve the average long-term diffusion and incision parameters that prevailed during scarp evolution using an accurate 2-D misfit criterion.
Duffy, Stephen
2013-09-09
This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software algorithm currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.
NASA Astrophysics Data System (ADS)
Li, Qi-Lang; Wong, S. C.; Min, Jie; Tian, Shuo; Wang, Bing-Hong
2016-08-01
This study examines the cellular automata traffic flow model, which considers the heterogeneity of vehicle acceleration and the delay probability of vehicles. Computer simulations are used to identify three typical phases in the model: free-flow, synchronized flow, and wide moving traffic jam. In the synchronized flow region of the fundamental diagram, the low and high velocity vehicles compete with each other and play an important role in the evolution of the system. The analysis shows that there are two types of bistable phases. However, in the original Nagel and Schreckenberg cellular automata traffic model, there are only two kinds of traffic conditions, namely, free-flow and traffic jams. The synchronized flow phase and bistable phase have not been found.
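The baseline that the study extends, the original Nagel-Schreckenberg cellular automaton, can be sketched as follows. Road length, car count and the slowdown probability below are illustrative, and the heterogeneous accelerations and delay probabilities of the study's model are not included:

```python
import numpy as np

def nasch_step(pos, vel, road_len, v_max, p_slow, rng):
    """One parallel update of the Nagel-Schreckenberg cellular automaton on
    a circular road: accelerate, brake to the gap ahead, randomize (slow
    down with probability p_slow), then move."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos) % road_len - 1   # empty cells ahead
    vel = np.minimum(vel + 1, v_max)                 # accelerate
    vel = np.minimum(vel, gaps)                      # brake: no collisions
    vel = np.where((rng.random(len(vel)) < p_slow) & (vel > 0),
                   vel - 1, vel)                     # random slowdown
    pos = (pos + vel) % road_len                     # move
    return pos, vel

rng = np.random.default_rng(1)
road_len, n_cars = 100, 20
pos = np.sort(rng.choice(road_len, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(200):
    pos, vel = nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=rng)
```

Because velocity is capped at the gap ahead, cars never collide; varying the density of cars moves the system between the free-flow and jammed regimes mentioned above.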
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell for the large scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability that a cell is occupied is larger for some elongated cells. A phenomenological scale invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
NASA Astrophysics Data System (ADS)
Kondoh, Hiroshi; Matsushita, Mitsugu
1986-10-01
A diffusion-limited aggregation (DLA) model with anisotropic sticking probability P_s is computer-simulated on a two-dimensional square lattice. The cluster grows from a seed particle at the origin in the positive-y half-plane, with an absorption-type boundary along the x-axis. The cluster is found to grow anisotropically as R_∥ ~ N^(ν_∥) and R_⊥ ~ N^(ν_⊥), where R_⊥ and R_∥ are the radii of gyration of the cluster along the x- and y-axes, respectively, and N is the number of particles constituting the cluster. The two exponents are shown to become asymptotically ν_∥ = 2/3 and ν_⊥ = 1/3 whenever the sticking anisotropy exists. It is also found that the present model is fairly consistent with Hack's law of river networks, suggesting that it is a good candidate for a prototype model of the evolution of river networks.
Centrifuge modeling of buried continuous pipelines subjected to normal faulting
NASA Astrophysics Data System (ADS)
Moradi, Majid; Rojhani, Mahdi; Galandarzadeh, Abbas; Takada, Shiro
2013-03-01
Seismic ground faulting is the greatest hazard for continuous buried pipelines. Over the years, researchers have attempted to understand pipeline behavior mostly via numerical modeling such as the finite element method. The lack of well-documented field case histories of pipeline failure from seismic ground faulting and the cost and complicated facilities needed for full-scale experimental simulation mean that a centrifuge-based method to determine the behavior of pipelines subjected to faulting is best to verify numerical approaches. This paper presents results from three centrifuge tests designed to investigate continuous buried steel pipeline behavior subjected to normal faulting. The experimental setup and procedure are described and the recorded axial and bending strains induced in a pipeline are presented and compared to those obtained via analytical methods. The influence of factors such as faulting offset, burial depth and pipe diameter on the axial and bending strains of pipes and on ground soil failure and pipeline deformation patterns are also investigated. Finally, the tensile rupture of a pipeline due to normal faulting is investigated.
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Hughes, J. D.; Chen, J.; Dutta, D.; Vaze, J.
2014-12-01
Achieving predictive success is a major challenge in hydrological modelling. Predictive metrics indicate whether models and parameters are appropriate for impact assessment, design, planning and management, forecasting and underpinning policy. It is often found that very different parameter sets and model structures are equally acceptable system representations (commonly described as equifinality). Furthermore, parameters that produce the best goodness of fit during a calibration period may often yield poor results outside of that period. A calibration method is presented that uses a recursive Bayesian filter to estimate the probability of consistent performance of parameter sets in different sub-periods. The result is a probability distribution for each specified performance interval. This generic method utilises more information within time-series data than is typically used for calibration, and could be adopted for different types of time-series modelling applications. Where conventional calibration methods implicitly identify the best performing parameterisations on average, the new method looks at the consistency of performance during sub-periods. The proposed calibration method, therefore, can be used to avoid heavy weighting toward rare periods of good agreement. The method is trialled in a conceptual river system model, the Australian Water Resources Assessments River (AWRA-R) model, in the Murray-Darling Basin, Australia. The new method is tested via cross-validation and results are compared to a traditional split-sample calibration/validation to evaluate the new technique's ability to predict daily streamflow. The results showed that the new calibration method could produce parameterisations that performed better in validation periods than optimum calibration parameter sets. The method shows ability to improve on predictive performance and provide more realistic flux terms compared to traditional split-sample calibration methods.
NASA Astrophysics Data System (ADS)
James, P.
2011-12-01
With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment, and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but they still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are more inclined to rely on well-known methods than to utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance
Predicting Mortality in Low-Income Country ICUs: The Rwanda Mortality Probability Model (R-MPM)
Kiviri, Willy; Fowler, Robert A.; Mueller, Ariel; Novack, Victor; Banner-Goodspeed, Valerie M.; Weinkauf, Julia L.; Talmor, Daniel S.; Twagirumugabe, Theogene
2016-01-01
Introduction Intensive Care Unit (ICU) risk prediction models are used to compare outcomes for quality improvement initiatives, benchmarking, and research. While such models provide robust tools in high-income countries, an ICU risk prediction model has not been validated in a low-income country where ICU population characteristics are different from those in high-income countries, and where laboratory-based patient data are often unavailable. We sought to validate the Mortality Probability Admission Model, version III (MPM0-III) in two public ICUs in Rwanda and to develop a new Rwanda Mortality Probability Model (R-MPM) for use in low-income countries. Methods We prospectively collected data on all adult patients admitted to Rwanda’s two public ICUs between August 19, 2013 and October 6, 2014. We described demographic and presenting characteristics and outcomes. We assessed the discrimination and calibration of the MPM0-III model. Using stepwise selection, we developed a new logistic model for risk prediction, the R-MPM, and used bootstrapping techniques to test for optimism in the model. Results Among 427 consecutive adults, the median age was 34 (IQR 25–47) years and mortality was 48.7%. Mechanical ventilation was initiated for 85.3%, and 41.9% received vasopressors. The MPM0-III predicted mortality with area under the receiver operating characteristic curve of 0.72 and Hosmer-Lemeshow chi-square statistic p = 0.024. We developed a new model using five variables: age, suspected or confirmed infection within 24 hours of ICU admission, hypotension or shock as a reason for ICU admission, Glasgow Coma Scale score at ICU admission, and heart rate at ICU admission. Using these five variables, the R-MPM predicted outcomes with area under the ROC curve of 0.81 with 95% confidence interval of (0.77, 0.86), and Hosmer-Lemeshow chi-square statistic p = 0.154. Conclusions The MPM0-III has modest ability to predict mortality in a population of Rwandan ICU patients. The R
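A five-variable logistic risk model of the R-MPM's general form can be sketched as below. The coefficients are placeholders chosen for illustration and are NOT the published R-MPM values:

```python
import numpy as np

def predicted_mortality(age, infection, shock, gcs, heart_rate, coef):
    """Logistic risk model over the five R-MPM predictor variables.

    The coefficient values passed in are illustrative placeholders; the
    study's fitted coefficients are not reproduced here.
    """
    z = (coef["intercept"]
         + coef["age"] * age
         + coef["infection"] * infection     # 0/1: infection within 24 h
         + coef["shock"] * shock             # 0/1: hypotension or shock
         + coef["gcs"] * gcs                 # Glasgow Coma Scale at admission
         + coef["heart_rate"] * heart_rate)  # heart rate at admission
    return 1.0 / (1.0 + np.exp(-z))          # logistic link

# Hypothetical coefficients (NOT the published values):
coef = {"intercept": -1.0, "age": 0.02, "infection": 0.8,
        "shock": 0.9, "gcs": -0.15, "heart_rate": 0.01}
p = predicted_mortality(age=34, infection=1, shock=0, gcs=10,
                        heart_rate=110, coef=coef)
```

Using only bedside variables like these, rather than laboratory values, is what makes such a model usable where laboratory-based patient data are unavailable.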
Modeling Normal Shock Velocity Curvature Relation for Heterogeneous Explosives
NASA Astrophysics Data System (ADS)
Yoo, Sunhee; Crochet, Michael; Pemberton, Steve
2015-06-01
The normal shock velocity and curvature relation, Dn(κ), on a detonation shock surface is an important quantity to measure in order to understand the shock strength exerted against the material interface between a main explosive charge and the case of an explosive munition. The Dn(κ) relation is considered an intrinsic property of an explosive and can be experimentally deduced by rate stick tests at various charge diameters. However, experimental measurements of the Dn(κ) relation for heterogeneous explosives such as PBXN-111 are challenging due to the non-smoothness and asymmetry usually observed in the experimental streak records of explosion fronts. Among the many possibilities, the asymmetric character may be attributed to the heterogeneity of the explosives, a hypothesis which raises two questions: (1) is there any simple hydrodynamic model that can explain such an asymmetric shock evolution, and (2) what statistics can be derived for the asymmetry using simulations with defined structural heterogeneity in the unreacted explosive? Saenz, Taylor and Stewart studied constitutive models for the derivation of the Dn(κ) relation for porous 'homogeneous' explosives and carried out simulations in a spherical coordinate frame. In this paper, we extend their model to account for heterogeneity and present shock evolutions in heterogeneous explosives using 2-D hydrodynamic simulations with some statistical examination. (96TW-2015-0004)
A radiation damage repair model for normal tissues
NASA Astrophysics Data System (ADS)
Partridge, Mike
2008-07-01
A cellular Monte Carlo model describing radiation damage and repair in normal epithelial tissues is presented. The deliberately simplified model includes cell cycling, cell motility and radiation damage response (cell cycle arrest and cell death) only. Results demonstrate that the model produces a stable equilibrium system for mean cell cycle times in the range 24-96 h. Simulated irradiation of these stable equilibrium systems produced a range of responses that are shown to be consistent with experimental and clinical observation, including (i) re-epithelialization of radiation-induced lesions by a mixture of cell migration into the wound and repopulation at the periphery; (ii) observed radiosensitivity that is quantitatively consistent both with the rate of induction of irreparable DNA lesions and, independently, with the observed acute oral and pharyngeal mucosal reactions to radiotherapy; (iii) an observed time between irradiation and maximum toxicity that is consistent with experimental data for skin; (iv) quantitatively accurate predictions of low-dose hyper-radiosensitivity; (v) Gompertzian repopulation for very small lesions (~2000 cells) and (vi) a linear rate of re-epithelialization of 5-10 µm/h for large lesions (>15 000 cells).
SAR amplitude probability density function estimation based on a generalized Gaussian model.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B
2006-06-01
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268
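The empirical side of the method of log-cumulants (MoLC) mentioned above reduces to computing cumulants of the log-transformed amplitudes; matching them to the Mellin-derived closed-form expressions of the GG-based model is the paper's contribution and is not reproduced here. A sketch using synthetic Rayleigh amplitudes as stand-in SAR data:

```python
import numpy as np

def log_cumulants(samples):
    """First two log-cumulants for positive amplitude data:
    kappa1 = E[ln X] (sample mean of logs),
    kappa2 = Var[ln X] (sample variance of logs).

    In MoLC these empirical values are equated to closed-form functions of
    the model parameters, obtained via the Mellin transform, and the
    resulting equations are solved for the parameter estimates.
    """
    logs = np.log(np.asarray(samples, dtype=float))
    return logs.mean(), logs.var()

# Synthetic Rayleigh-distributed amplitudes as stand-in SAR data:
rng = np.random.default_rng(2)
amp = rng.rayleigh(scale=1.0, size=10000)
k1, k2 = log_cumulants(amp)
```

For unit-scale Rayleigh data the theoretical values are κ₁ = (ln 2 − γ)/2 ≈ 0.058 and κ₂ = π²/24 ≈ 0.411, so the empirical log-cumulants give a direct consistency check.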
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often result from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distributions using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest, the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient R package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities.
Sato, Tatsuhiko; Hamada, Nobuyuki
2014-01-01
We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641
Fakir, Hatim; Hlatky, Lynn; Li, Huamin; Sachs, Rainer
2013-12-15
Purpose: Optimal treatment planning for fractionated external beam radiation therapy requires inputs from radiobiology based on recent thinking about the “five Rs” (repopulation, radiosensitivity, reoxygenation, redistribution, and repair). The need is especially acute for the newer, often individualized, protocols made feasible by progress in image guided radiation therapy and dose conformity. Current stochastic tumor control probability (TCP) models incorporating tumor repopulation effects consider “stem-like cancer cells” (SLCC) to be independent, but the authors here propose that SLCC-SLCC interactions may be significant. The authors present a new stochastic TCP model for repopulating SLCC interacting within microenvironmental niches. Our approach is meant mainly for comparing similar protocols. It aims at practical generalizations of previous mathematical models. Methods: The authors consider protocols with complete sublethal damage repair between fractions. The authors use customized open-source software and recent mathematical approaches from stochastic process theory for calculating the time-dependent SLCC number and thereby estimating SLCC eradication probabilities. As specific numerical examples, the authors consider predicted TCP results for a 2 Gy per fraction, 60 Gy protocol compared to 64 Gy protocols involving early or late boosts in a limited volume to some fractions. Results: In sample calculations with linear quadratic parameters α = 0.3 per Gy, α/β = 10 Gy, boosting is predicted to raise TCP from a dismal 14.5% observed in some older protocols for advanced NSCLC to above 70%. This prediction is robust as regards: (a) the assumed values of parameters other than α and (b) the choice of models for intraniche SLCC-SLCC interactions. However, α = 0.03 per Gy leads to a prediction of almost no improvement when boosting. Conclusions: The predicted efficacy of moderate boosts depends sensitively on α. Presumably, the larger values of α are
Ellis, Andrew M.; Yang, Shengfu
2007-09-15
A theoretical model has been developed to describe the probability of charge transfer from helium cations to dopant molecules inside helium nanodroplets following electron-impact ionization. The location of the initial charge site inside helium nanodroplets subject to electron impact has been investigated and is found to play an important role in understanding the ionization of dopants inside helium droplets. The model is consistent with a charge migration process in small helium droplets that is strongly directed by intermolecular forces originating from the dopant, whereas for large droplets (tens of thousands of helium atoms and larger) the charge migration increasingly takes on the character of a random walk. This suggests a clear droplet size limit for the use of electron-impact mass spectrometry for detecting molecules in helium droplets.
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models
Stein, Richard R.; Marks, Debora S.; Sander, Chris
2015-01-01
Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene–gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866
Models for the probability densities of the turbulent plasma flux in magnetized plasmas
NASA Astrophysics Data System (ADS)
Bergsaker, A. S.; Fredriksen, Å; Pécseli, H. L.; Trulsen, J. K.
2015-10-01
Observations of turbulent transport in magnetized plasmas indicate that plasma losses can be due to coherent structures or bursts of plasma rather than a classical random walk or diffusion process. A model for synthetic data based on coherent plasma flux events is proposed, where all basic properties can be obtained analytically in terms of a few control parameters. One basic parameter in the present case is the density of burst events in a long time-record, together with parameters in a model of the individual pulse shapes and the statistical distribution of these parameters. The model and its extensions give the probability density of the plasma flux. An interesting property of the model is a prediction of a near-parabolic relation between skewness and kurtosis of the statistical flux distribution for a wide range of parameters. The model is generalized by allowing for an additive random noise component. When this noise dominates the signal we can find a transition to standard results for Gaussian random noise. Applications of the model are illustrated by data from the toroidal Blaamann plasma.
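The skewness-kurtosis relation mentioned above can be made concrete in one special case: for exponential pulse shapes with exponentially distributed amplitudes, the stationary flux amplitude follows a Gamma distribution whose shape parameter equals the burst density. A minimal sketch under that assumption (the parameter name `burst_density` is illustrative, not the paper's notation):

```python
def flux_skewness_kurtosis(burst_density):
    """Skewness S and kurtosis K of a Gamma-distributed flux whose shape
    parameter is the burst density (pulse rate times pulse duration).
    For Gamma(shape=g): S = 2/sqrt(g) and K = 3 + 6/g, which implies the
    near-parabolic relation K = 3 + (3/2) * S**2."""
    s = 2.0 / burst_density ** 0.5
    k = 3.0 + 6.0 / burst_density
    return s, k
```

As the burst density grows the signal approaches Gaussian noise (S → 0, K → 3), consistent with the model's noise-dominated limit.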
On the thresholds, probability densities, and critical exponents of Bak-Sneppen-like models
NASA Astrophysics Data System (ADS)
Garcia, Guilherme J. M.; Dickman, Ronald
2004-10-01
We report a simple method to accurately determine the threshold and the exponent ν of the Bak-Sneppen (BS) model and also investigate the BS universality class. For the random-neighbor version of the BS model, we find the threshold x* = 0.33332(3), in agreement with the exact result x* = 1/3 given by mean-field theory. For the one-dimensional original model, we find x* = 0.6672(2), in good agreement with the results reported in the literature; for the anisotropic BS model we obtain x* = 0.7240(1). We study the finite-size effect x*(L) - x*(L→∞) ∝ L^(-ν), observed in a system with L sites, and find ν = 1.00(1) for the random-neighbor version, ν = 1.40(1) for the original model, and ν = 1.58(1) for the anisotropic case. Finally, we discuss the effect of defining the extremal site as the one which minimizes a general function f(x), instead of simply f(x) = x as in the original updating rule. We emphasize that models with extremal dynamics have singular stationary probability distributions p(x). Our simulations indicate the existence of two symmetry-based universality classes.
NASA Technical Reports Server (NTRS)
Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.
1996-01-01
Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean submodel in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing submodels were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
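For readers unfamiliar with the mixing sub-models compared here, one step of a modified-Curl mixer acting on an ensemble of notional particles can be sketched as follows (a simplified illustration, not the study's implementation):

```python
import random

def modified_curl_step(phi, n_pairs, rng):
    """One modified-Curl mixing step on a particle scalar ensemble phi:
    random pairs relax partially toward their pair mean, with the mixing
    extent drawn uniformly on [0, 1] for each pair."""
    for _ in range(n_pairs):
        p = rng.randrange(len(phi))
        q = rng.randrange(len(phi))
        a = rng.random()                 # random mixing extent
        m = 0.5 * (phi[p] + phi[q])      # pair mean
        phi[p] += a * (m - phi[p])
        phi[q] += a * (m - phi[q])
    return phi
```

Each pair exchange conserves the ensemble mean while shrinking the scalar variance, which is the defining behavior that distinguishes pairwise mixers from the simple relaxation-to-mean sub-model.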
NASA Astrophysics Data System (ADS)
Kriegler, E.; Held, H.; Zickfeld, K.
2003-04-01
Climate forecasting with simple models has been hampered, among other things, by the difficulty of performing an accurate assessment of probabilities for crucial model parameters. Expert elicitations and Bayesian updating of non-informative priors have been used to determine such probabilities. Both methods hinge on the specification of a precise probability, be it an aggregate of different expert assessments or a particular choice of prior distribution. It is unclear how such a choice should be made. Imprecise probability models can be used to circumvent the problem. The question arises how imprecise probabilities for the model parameters can be processed to predict the model output. We propose a method to process imprecise probabilities in simple models, which is based on a special type of imprecise probability theory, the Dempster-Shafer theory of evidence (DST). We show for the example of climate sensitivity how a multitude of expert elicitations is compressed into a lower-upper-probability model that can be quantified in terms of DST. This information, together with estimates of radiative forcing, is projected onto future temperature change, which in turn is used to force a simple model of the thermohaline circulation. An algorithm to compute the uncertainty in the overturning strength from the uncertainty in the temperature forcing is introduced. It can be shown that the DST specification of the forcing leads to a DST-type uncertainty in the model output. This information is used to present the resulting lower-upper probability model for the overturning strength in a more intuitive way.
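Projecting a lower-upper probability model through a monotone map can be illustrated in a few lines. The interval mass assignment below is hypothetical, not the elicited climate-sensitivity values from the paper:

```python
def project(masses, f):
    """Push a Dempster-Shafer mass assignment over focal intervals through a
    monotone increasing map f; each focal interval [a, b] maps to [f(a), f(b)]."""
    return [((f(a), f(b)), m) for (a, b), m in masses]

def belief(masses, lo, hi):
    """Lower probability that the quantity lies in [lo, hi]:
    total mass of focal intervals entirely contained in [lo, hi]."""
    return sum(m for (a, b), m in masses if lo <= a and b <= hi)

def plausibility(masses, lo, hi):
    """Upper probability: total mass of focal intervals meeting [lo, hi]."""
    return sum(m for (a, b), m in masses if a <= hi and b >= lo)
```

The belief/plausibility pair brackets the (unknown) precise probability, which is exactly the lower-upper structure the abstract describes propagating onto future temperature change.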
Schindler, Dirk; Grebhan, Karin; Albrecht, Axel; Schönborn, Jochen
2009-11-01
The wind damage probability (P(DAM)) in the forests of the federal state of Baden-Wuerttemberg (Southwestern Germany) was calculated using weights-of-evidence (WofE) methodology and a logistic regression model (LRM) after the winter storm 'Lothar' in December 1999. A geographic information system (GIS) was used for the area-wide spatial prediction and mapping of P(DAM). The combination of the six evidential themes forest type, soil type, geology, soil moisture, soil acidification, and the 'Lothar' maximum gust field predicted wind damage best and was used to map P(DAM) in a 50 x 50 m resolution grid. GIS software was utilised to produce probability maps, which allowed the identification of areas of low, moderate, and high P(DAM) across the study area. The highest P(DAM) values were calculated for coniferous forest growing on acidic, fresh to moist soils on bunter sandstone formations, provided that the 'Lothar' maximum gust speed exceeded 35 m s⁻¹ in the areas in question. One of the most significant benefits associated with the results of this study is that, for the first time, there is a GIS-based area-wide quantification of P(DAM) in the forests of Southwestern Germany. In combination with the experience and expert knowledge of local foresters, the probability maps produced can be used as an important tool for decision support with respect to future silvicultural activities aimed at reducing wind damage. One limitation of the P(DAM) predictions is that they are based on only one major storm event. At the moment it is not possible to relate storm event intensity to the amount of wind damage in forests due to the lack of comprehensive long-term tree and stand damage data across the study area. PMID:19562383
Analytical expression for the exit probability of the q -voter model in one dimension
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Galam, Serge
2015-07-01
We present in this paper an approximation that is able to give an analytical expression for the exit probability of the q-voter model in one dimension. This expression gives a better fit for the more recent data about simulations in large networks [A. M. Timpanaro and C. P. C. do Prado, Phys. Rev. E 89, 052808 (2014), 10.1103/PhysRevE.89.052808] and as such departs from the expression ρ^q/[ρ^q + (1-ρ)^q] found in papers that investigated small networks only [R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007; P. Przybyła et al., Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117; F. Slanina et al., Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006]. The approximation consists in assuming a large separation between the time scales at which active groups of agents convince inactive ones and the time taken in the competition between active groups. Some interesting findings are that for q = 2 we still have ρ^2/[ρ^2 + (1-ρ)^2] as the exit probability, and for q > 2 we can obtain a lower-order approximation of the form ρ^s/[ρ^s + (1-ρ)^s], with s varying from q for low values of q to q - 1/2 for large values of q. As such, this work can also be seen as a deduction of why the exit probability ρ^q/[ρ^q + (1-ρ)^q] gives a good fit, without relying on mean-field arguments or on the assumption that only the first step is nondeterministic, as q and q - 1/2 give very similar results when q → ∞.
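The exit-probability form ρ^s/[ρ^s + (1-ρ)^s] is straightforward to evaluate; a small sketch (not from the paper):

```python
def exit_probability(rho, s):
    """Exit-probability ansatz E(rho) = rho^s / (rho^s + (1 - rho)^s):
    the chance that an initial fraction rho of agents holding one opinion
    drives the system to consensus on that opinion."""
    return rho ** s / (rho ** s + (1.0 - rho) ** s)
```

For any exponent s the form respects the symmetry E(ρ) + E(1-ρ) = 1 with E(1/2) = 1/2, and larger s sharpens the transition around ρ = 1/2.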
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km³ that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as calibration targets. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models for simulating fluid flow in fractured geological media.
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since lithium-ion batteries are assembled into packs in large numbers and are complex electrochemical devices, their monitoring and safety are key concerns for the application of battery technology. An accurate estimation of battery remaining capacity is crucial for optimization of vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is employed in combination with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
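The voltage-prediction part of such an estimator can be sketched with a first-order Thevenin (1-RC) equivalent circuit; the paper's model is higher-order and coupled to an electrochemical model, so this is only a simplified stand-in, and the parameter values in the test are made up:

```python
import math

def terminal_voltage(ocv, currents, dt, r0, r1, c1):
    """Discrete-time 1-RC Thevenin model: predicted terminal voltage per
    step for a fixed open-circuit voltage ocv, with series resistance r0
    and one RC polarization branch (r1, c1). Discharge current positive."""
    a = math.exp(-dt / (r1 * c1))       # RC branch decay factor per step
    u1 = 0.0                            # polarization voltage
    out = []
    for i in currents:
        u1 = a * u1 + r1 * (1.0 - a) * i
        out.append(ocv - r0 * i - u1)
    return out
```

In a full estimator the OCV would itself be a function of SOC, and the model states would be corrected against measured voltage (e.g. by a Kalman-type filter).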
An investigation of a quantum probability model for the constructive effect of affective evaluation.
White, Lee C; Barqué-Duran, Albert; Pothos, Emmanuel M
2016-01-13
The idea that choices can have a constructive effect has received a great deal of empirical support. The act of choosing appears to influence subsequent preferences for the options available. Recent research has proposed a cognitive model based on quantum probability (QP), which suggests that whether or not a participant provides an affective evaluation for a positively or negatively valenced stimulus can also be constructive and so, for example, influence the affective evaluation of a second oppositely valenced stimulus. However, there are some outstanding methodological questions in relation to this previous research. This paper reports the results of three experiments designed to resolve these questions. Experiment 1, using a binary response format, provides partial support for the interaction predicted by the QP model; and Experiment 2, which controls for the length of time participants have to respond, fully supports the QP model. Finally, Experiment 3 sought to determine whether the key effect can generalize beyond affective judgements about visual stimuli. Using judgements about the trustworthiness of well-known people, the predictions of the QP model were confirmed. Together, these three experiments provide further support for the QP model of the constructive effect of simple evaluations. PMID:26621993
NASA Astrophysics Data System (ADS)
Adeloye, Adebayo J.; Soundharajan, Bankaru-Swamy; Musto, Jagarkhin N.; Chiamsathit, Chuthamat
2015-10-01
This study has carried out an assessment of Phien generalised storage-yield-probability (S-Y-P) models using recorded runoff data of six global rivers that were carefully selected such that they satisfy the criteria specified for the models. Using stochastic hydrology, 2000 replicates of the historic records were generated and used to drive the sequent peak algorithm (SPA) for estimating the capacity of hypothetical reservoirs at the respective sites. The resulting ensembles of reservoir capacity estimates were then analysed to determine the mean, standard deviation and quantiles, which were then compared with corresponding estimates produced by the Phien models. The results showed that the Phien models produced a mix of significant under- and over-predictions of the mean and standard deviation of capacity, with under-prediction occurring as the level of development decreases. On the other hand, consistent over-prediction was obtained at full regulation for all the rivers analysed. The biases in the reservoir capacity quantiles were equally high, implying that the limitations of the Phien models affect the entire distribution function of reservoir capacity. Because of the very high values of these errors, it is recommended that the Phien relationships be avoided for reservoir planning.
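The sequent peak algorithm used to size the hypothetical reservoirs is itself short; a minimal sketch for a constant demand (a variable demand sequence would replace the scalar):

```python
def sequent_peak_capacity(inflows, demand):
    """Sequent peak algorithm: the reservoir capacity required to meet a
    constant demand given a sequence of inflows (same units per period).
    Tracks the running cumulative deficit; its maximum is the capacity."""
    deficit = 0.0
    capacity = 0.0
    for q in inflows:
        deficit = max(0.0, deficit + demand - q)   # shortfall carried forward
        capacity = max(capacity, deficit)
    return capacity
```

Driving this function with each stochastic replicate of the runoff record yields the ensemble of capacity estimates whose mean, standard deviation, and quantiles the study compares against the Phien models.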
Photometric redshifts and quasar probabilities from a single, data-driven generative model
Bovy, Jo; Hogg, David W.; Weaver, Benjamin A.; Myers, Adam D.; Hennawi, Joseph F.; McMahon, Richard G.; Schiminovich, David; Sheldon, Erin S.; Brinkmann, Jon; Schneider, Donald P.
2012-04-10
We describe a technique for simultaneously classifying and estimating the redshift of quasars. It can separate quasars from stars in arbitrary redshift ranges, estimate full posterior distribution functions for the redshift, and naturally incorporate flux uncertainties, missing data, and multi-wavelength photometry. We build models of quasars in flux-redshift space by applying the extreme deconvolution technique to estimate the underlying density. By integrating this density over redshift, one can obtain quasar flux densities in different redshift ranges. This approach allows for efficient, consistent, and fast classification and photometric redshift estimation. This is achieved by combining the speed obtained by choosing simple analytical forms as the basis of our density model with the flexibility of non-parametric models through the use of many simple components with many parameters. We show that this technique is competitive with the best photometric quasar classification techniques (which are limited to fixed, broad redshift ranges and high signal-to-noise ratio data) and with the best photometric redshift techniques when applied to broadband optical data. We demonstrate that the inclusion of UV and NIR data significantly improves photometric quasar-star separation and essentially resolves all of the redshift degeneracies for quasars inherent to the ugriz filter system, even when the included data have a low signal-to-noise ratio. For quasars spectroscopically confirmed by the SDSS, 84% and 97% of the objects with Galaxy Evolution Explorer UV and UKIDSS NIR data have photometric redshifts within 0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about a factor of three improvement over ugriz-only photometric redshifts. Our code to calculate quasar probabilities and redshift probability distributions is publicly available.
McClure, Meredith L.; Burdett, Christopher L.; Farnsworth, Matthew L.; Lutman, Mark W.; Theobald, David M.; Riggs, Philip D.; Grear, Daniel A.; Miller, Ryan S.
2015-01-01
Wild pigs (Sus scrofa), also known as wild swine, feral pigs, or feral hogs, are one of the most widespread and successful invasive species around the world. Wild pigs have been linked to extensive and costly agricultural damage and present a serious threat to plant and animal communities due to their rooting behavior and omnivorous diet. We modeled the current distribution of wild pigs in the United States to better understand the physiological and ecological factors that may determine their invasive potential and to guide future study and eradication efforts. Using national-scale wild pig occurrence data reported between 1982 and 2012 by wildlife management professionals, we estimated the probability of wild pig occurrence across the United States using a logistic discrimination function and environmental covariates hypothesized to influence the distribution of the species. Our results suggest the distribution of wild pigs in the U.S. was most strongly limited by cold temperatures and availability of water, and that they were most likely to occur where potential home ranges had higher habitat heterogeneity, providing access to multiple key resources including water, forage, and cover. High probability of occurrence was also associated with frequent high temperatures, up to a high threshold. However, this pattern is driven by pigs’ historic distribution in warm climates of the southern U.S. Further study of pigs’ ability to persist in cold northern climates is needed to better understand whether low temperatures actually limit their distribution. Our model highlights areas at risk of invasion as those with habitat conditions similar to those found in pigs’ current range that are also near current populations. This study provides a macro-scale approach to generalist species distribution modeling that is applicable to other generalist and invasive species. PMID:26267266
LaBudde, Robert A; Harnly, James M
2012-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given. PMID:22468371
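A minimal sketch of the probability of identification (POI) as a proportion of replicate binary results, with an approximate Wilson score interval; the replicate data are invented for illustration.

```python
def probability_of_identification(results):
    """POI: fraction of replicate binary results (1 = Identified)."""
    return sum(results) / len(results)

def poi_wilson_interval(results, z=1.96):
    """Approximate 95% Wilson score interval for the POI."""
    n = len(results)
    p = probability_of_identification(results)
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5) / denom
    return centre - half, centre + half

replicates = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # 10 replicate identifications
poi = probability_of_identification(replicates)  # 0.8
```

The Wilson interval is one common choice for binomial proportions at small replicate counts; the harmonization the abstract describes concerns exactly such interval reporting for binary methods.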
Fixation probability and the crossing time in the Wright-Fisher multiple alleles model
NASA Astrophysics Data System (ADS)
Gill, Wonpyong
2009-08-01
The fixation probability and crossing time in the Wright-Fisher multiple alleles model, which describes a finite haploid population, were calculated by switching on an asymmetric sharply-peaked landscape with a positive asymmetric parameter, r, such that the reversal allele of the optimal allele has higher fitness than the optimal allele. The fixation probability, which was evaluated as the ratio of the first arrival time at the reversal allele to the origination time, was double the selective advantage of the reversal allele compared with the optimal allele in the strong selection region, where the fitness parameter, k, is much larger than the critical fitness parameter, kc. The crossing time in a finite population for r>0 and k
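The fixation probability in a Wright-Fisher model can be illustrated with a two-allele Monte Carlo simulation (a simplification of the multiple-alleles model in the abstract); for weak selection the estimate should approach the classical ~2s diffusion result, consistent with the "double the selective advantage" behavior described above.

```python
import random

def fixation_probability(N, s, trials=1000, seed=1):
    """Monte Carlo estimate of the fixation probability of a single
    mutant copy with selective advantage s in a Wright-Fisher
    population of haploid size N (two-allele simplification)."""
    random.seed(seed)
    fixed = 0
    for _ in range(trials):
        count = 1  # start with one mutant copy
        while 0 < count < N:
            # selection-weighted sampling probability for the mutant
            p = count * (1 + s) / (count * (1 + s) + (N - count))
            count = sum(1 for _ in range(N) if random.random() < p)
        fixed += (count == N)
    return fixed / trials

p_fix = fixation_probability(N=50, s=0.1)
```

For N=50 and s=0.1 the diffusion approximation (1 - e^(-2s)) / (1 - e^(-2Ns)) gives about 0.18, so the Monte Carlo estimate should land nearby.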
NASA Astrophysics Data System (ADS)
Musho, Matthew K.; Kozak, John J.
1984-10-01
A method is presented for calculating exactly the relative width (σ²)^(1/2)/
TRANSITION PROBABILITIES FOR STUDENT-TEACHER POPULATION GROWTH MODEL (DYNAMOD II).
ERIC Educational Resources Information Center
ZINTER, JUDITH R.
THIS NOTE PRESENTS THE TRANSITION PROBABILITIES CURRENTLY IN USE IN DYNAMOD II. THE ESTIMATING PROCEDURES USED TO DERIVE THESE PROBABILITIES WERE DISCUSSED IN THESE RELATED DOCUMENTS--EA 001 016, EA 001 017, EA 001 018, AND EA 001 063. THE TRANSITION PROBABILITIES FOR FOUR SEX-RACE GROUPS ARE SHOWN ALONG WITH THE DONOR-RECEIVER CODES TO WHICH THEY…
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to the failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ⁄2, and is approximately 2⁄μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
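The BPT density and its hazard function can be sketched directly from the definitions above; the numerical survivor function below is a simple Riemann sum, adequate for illustration.

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean mu
    and aperiodicity alpha."""
    return math.sqrt(mu / (2 * math.pi * alpha ** 2 * t ** 3)) * \
        math.exp(-(t - mu) ** 2 / (2 * alpha ** 2 * mu * t))

def bpt_hazard(t, mu, alpha, n=20000):
    """Hazard h(t) = f(t) / S(t); the survivor function S is obtained
    by a simple Riemann sum of the density over (0, t]."""
    step = t / n
    F = sum(bpt_pdf((i + 1) * step, mu, alpha) for i in range(n)) * step
    return bpt_pdf(t, mu, alpha) / (1.0 - F)

# With alpha = 0.5 the hazard should exceed the mean rate 1/mu for
# t > mu/2 and level off near 2/mu, as stated in the abstract.
h_late = bpt_hazard(2.0, mu=1.0, alpha=0.5)
```

The asymptotic hazard of this distribution is 1/(2 α² μ), which equals 2/μ exactly when α = 0.5, matching the abstract's claim.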
Kukla, G.; Gavin, J.
1994-05-01
This report was prepared at the Lamont-Doherty Geological Observatory of Columbia University at Palisades, New York, under subcontract to Pacific Northwest Laboratory (PNL). It is part of a larger project of global climate studies which supports site characterization work required for the selection of a potential high-level nuclear waste repository, and forms part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work under the PASS Program is currently focusing on the proposed site at Yucca Mountain, Nevada, and is under the overall direction of the Yucca Mountain Project Office, US Department of Energy, Las Vegas, Nevada. The final results of the PNL project will provide input to global atmospheric models designed to test specific climate scenarios, which will be used in the site-specific modeling work of others. The primary purpose of the databases compiled and of the astronomic predictive models is to aid in the estimation of the probabilities of future climate states. The results will be used by two other teams working on the global climate study under contract to PNL, located at the University of Maine in Orono, Maine, and the Applied Research Corporation in College Station, Texas. This report presents the results of the third year's work on the global climate change models and the databases describing past climates.
3D model retrieval using probability density-based shape descriptors.
Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis
2009-06-01
We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories. PMID:19372614
Modeling Longitudinal Data Containing Non-Normal Within Subject Errors
NASA Technical Reports Server (NTRS)
Feiveson, Alan; Glenn, Nancy L.
2013-01-01
The mission of the National Aeronautics and Space Administration's (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and result from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes and possibly treatments are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) by Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, then presents statistical evidence that verifies which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including applications in the sciences, technology, engineering and mathematics fields.
Multivariate Models for Normal and Binary Responses in Intervention Studies
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen
2016-01-01
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
Cella, Laura; Palma, Giuseppe; Deasy, Joseph O.; Oh, Jung Hun; Liuzzi, Raffaele; D’Avino, Vittoria; Conson, Manuel; Pugliese, Novella; Picardi, Marco; Salvatore, Marco; Pacelli, Roberto
2014-01-01
Purpose The purpose of this study is to compare different normal tissue complication probability (NTCP) models for predicting radiation-induced valve dysfunction (RVD) following thoracic irradiation. Methods All patients from our institutional Hodgkin lymphoma survivors database with analyzable datasets were included (n = 90). All patients were treated with three-dimensional conformal radiotherapy with a median total dose of 32 Gy. The cardiac toxicity profile was available for each patient. Heart and lung dose-volume histograms (DVHs) were extracted and both organs were considered for Lyman-Kutcher-Burman (LKB) and Relative Seriality (RS) NTCP model fitting using maximum likelihood estimation. Bootstrap refitting was used to test the robustness of the model fit. Model performance was estimated using the area under the receiver operating characteristic curve (AUC). Results Using only heart-DVHs, parameter estimates were, for the LKB model: D50 = 32.8 Gy, n = 0.16 and m = 0.67; and for the RS model: D50 = 32.4 Gy, s = 0.99 and γ = 0.42. AUC values were 0.67 for LKB and 0.66 for RS, respectively. Similar performance was obtained for models using only lung-DVHs (LKB: D50 = 33.2 Gy, n = 0.01, m = 0.19, AUC = 0.68; RS: D50 = 24.4 Gy, s = 0.99, γ = 2.12, AUC = 0.66). Bootstrap results showed that the parameter fits for lung-LKB were extremely robust. A combined heart-lung LKB model was also tested and showed a minor improvement (AUC = 0.70). However, the best performance was obtained using the previously determined multivariate regression model including maximum heart dose, with increasing risk for larger heart and smaller lung volumes (AUC = 0.82). Conclusions The risk of radiation-induced valvular disease cannot be modeled using NTCP models based only on the heart dose-volume distribution. A predictive model with improved performance can be obtained, but it requires the inclusion of heart and lung volume terms.
NASA Astrophysics Data System (ADS)
Kim, Shaun Sang Ho; Hughes, Justin Douglas; Chen, Jie; Dutta, Dushmanta; Vaze, Jai
2015-11-01
A calibration method is presented that uses a sub-period resampling method to estimate probability distributions of performance for different parameter sets. Where conventional calibration methods implicitly identify the best performing parameterisations on average, the new method looks at the consistency of performance during sub-periods. The method is implemented with the conceptual river reach algorithms within the Australian Water Resources Assessments River (AWRA-R) model in the Murray-Darling Basin, Australia. The new method is tested for 192 reaches in a cross-validation scheme and results are compared to a traditional split-sample calibration-validation implementation. This is done to evaluate the new technique's ability to predict daily streamflow outside the calibration period. The new calibration method produced parameterisations that performed better in validation periods than optimum calibration parameter sets for 103 reaches and produced the same parameterisations for 35 reaches. The method showed a statistically significant improvement in predictive performance and potentially provides more rational flux terms than traditional split-sample calibration methods. Particular strengths of the proposed calibration method are that it avoids extra weighting towards rare periods of good agreement and that it prevents compensating biases through time. The method can be used as a diagnostic tool to evaluate the stochasticity of modelled systems and to determine suitable model structures for different time-series models. Although the method is demonstrated using a hydrological model, it is not limited to the field of hydrology and could be adopted for many different time-series modelling applications.
The k-sample problem in a multi-state model and testing transition probability matrices.
Tattar, Prabhanjan N; Vaman, H J
2014-07-01
The choice of multi-state models is natural in analysis of survival data, e.g., when the subjects in a study pass through different states like 'healthy', 'in a state of remission', 'relapse' or 'dead' in a health-related quality of life study. Competing risks is another common instance of the use of multi-state models. Statistical inference for such event history data can be carried out by assuming a stochastic process model. Under such a setting, comparison of the event history data generated by two different treatments calls for testing equality of the corresponding transition probability matrices. The present paper proposes a solution to this class of problems by assuming a non-homogeneous Markov process to describe the transitions among the health states. A class of test statistics is derived for comparison of [Formula: see text] treatments by using a 'weight process'. This class, in particular, yields generalisations of the log-rank, Gehan, Peto-Peto and Harrington-Fleming tests. For an intrinsic comparison of the treatments, the 'leave-one-out' jackknife method is employed for identifying influential observations. The proposed methods are then used to develop Kolmogorov-Smirnov type supremum tests corresponding to the various extended tests. To demonstrate the usefulness of the test procedures developed, a simulation study was carried out and an application to the Trial V data provided by the International Breast Cancer Study Group is discussed. PMID:23722306
Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2
MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.
1999-11-01
This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model--exponential or Weibull--is fit.
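The threshold-shifted exponential fit mentioned above can be sketched with its closed-form MLE (the scale is simply the mean excess over the threshold); the data here are synthetic, not from the FITS databases.

```python
import random

def fit_shifted_exponential(data, x_low):
    """MLE fit of a shifted exponential to values above a lower-bound
    threshold x_low: F(x) = 1 - exp(-(x - x_low) / beta), x >= x_low.
    The MLE of the scale beta is the mean excess over the threshold."""
    exceed = [x - x_low for x in data if x >= x_low]
    return sum(exceed) / len(exceed)

random.seed(0)
sample = [random.expovariate(0.5) for _ in range(5000)]  # true scale 2.0
beta_hat = fit_shifted_exponential(sample, x_low=1.0)
```

Because the exponential is memoryless, the fitted scale above any threshold should recover the true scale (here 2.0), which is a convenient sanity check for threshold-shifted fitting code.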
Towards smart prosthetic hand: Adaptive probability-based skeletal muscle fatigue model.
Kumar, Parmod; Sebastian, Anish; Potluri, Chandrasekhar; Urfer, Alex; Naidu, D; Schoen, Marco P
2010-01-01
Skeletal muscle force can be estimated using surface electromyographic (sEMG) signals. Usually, the surface location for the sensors is near the respective muscle motor unit points. Skeletal muscles generate a spatial EMG signal, which causes cross talk between different sEMG signal sensors. In this study, an array of three sEMG sensors is used to capture the information of muscle dynamics in terms of sEMG signals. The recorded sEMG signals are filtered using nonlinear half-Gaussian Bayesian filters with optimized parameters, and the muscle force signal using a Chebyshev type-II filter. The filter optimization is accomplished using Genetic Algorithms. Three discrete-time state-space muscle fatigue models are obtained using system identification and modal transformation for three sets of sensors for a single motor unit. The outputs of these three muscle fatigue models are fused with a probabilistic Kullback Information Criterion (KIC) for model selection. The final fused output is estimated with an adaptive probability of KIC, which provides improved force estimates. PMID:21095927
Stacey, W.M.
1992-12-01
A new computational model for neutral particle transport in the outer regions of a diverted tokamak plasma chamber is presented. The model is based on the calculation of transmission and escape probabilities using first-flight integral transport theory and the balancing of fluxes across the surfaces bounding the various regions. The geometrical complexity of the problem is included in precomputed probabilities which depend only on the mean free path of the region.
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-07-01
The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.
NASA Astrophysics Data System (ADS)
Lee, T. S.; Yoon, S.; Jeong, C.
2012-12-01
The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and parameter estimation techniques. A number of PDMs have been developed to describe the probability distribution of the hydrological variables. For each of the developed PDMs, estimated parameters are provided based on alternative estimation techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear function of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than the other methods. However, the ML technique is more laborious than the other methods because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. In the meantime, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear and stochastic, dynamic, nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu searches, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search so that intricate derivative information is unnecessary. Therefore, the meta-heuristic approaches have been shown to be a useful strategy to solve optimization problems in hydrology. A number of studies focus on using meta-heuristic approaches for estimation of hydrological variables with parameter estimation of PDMs. Applied meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as a harmony search (HS). The performance of the HS is compared to the
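A minimal harmony search sketch, using the commonly cited parameters (harmony memory size, memory considering rate, pitch adjusting rate, bandwidth); the quadratic objective is a stand-in for a PDM likelihood, not the study's actual objective.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=42):
    """Minimal harmony search: minimize objective over box bounds.
    hms: harmony memory size, hmcr: harmony memory considering rate,
    par: pitch adjusting rate, bw: pitch bandwidth (fraction of range)."""
    random.seed(seed)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                x = random.choice(memory)[d]          # recall from memory
                if random.random() < par:
                    x += random.uniform(-bw, bw) * (hi - lo)  # pitch adjust
                x = min(max(x, lo), hi)
            else:
                x = random.uniform(lo, hi)            # random improvisation
            new.append(x)
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                         # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy use: recover two parameters of a quadratic surrogate objective.
sol, val = harmony_search(lambda p: (p[0] - 3) ** 2 + (p[1] - 0.5) ** 2,
                          bounds=[(0, 10), (0, 2)])
```

As the abstract notes, such a stochastic search needs no derivative information, which is exactly why it suits likelihood surfaces of PDMs whose gradients are awkward to derive.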
Kausar, A S M Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan
2014-01-01
Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving the goal of optimum coverage, which is a key part of designing wireless networks. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included to make the ray tracing technique more accurate. Here, the rough surface scattering is represented by microfacets, for which it becomes possible to compute the scattering field in all possible directions. New optimization techniques, like dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and for making the ray tracing technique more efficient. In conjunction with the ray tracing technique, a probability-based coverage optimization algorithm is combined with the ray tracing technique to make a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects using the DQS technique and by decreasing the ray-object intersection time using the COF technique. On the other hand, the coverage optimization algorithm is based on probability theory, which finds the minimum number of transmitters and their corresponding positions in order to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve on those of existing algorithms. For the verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by experimental results. PMID:25202733
Arterberry, Martha E.; Bornstein, Marc H.; Haynes, O. Maurice
2012-01-01
Two analytical procedures for identifying young children as categorizers, the Monte Carlo Simulation and the Probability Estimate Model, were compared. Using a sequential touching method, children age 12, 18, 24, and 30 months were given seven object sets representing different levels of categorical classification. From their touching performance, the probability that children were categorizing was then determined independently using Monte Carlo Simulation and the Probability Estimate Model. The two analytical procedures resulted in different percentages of children being classified as categorizers. Results using the Monte Carlo Simulation were more consistent with group-level analyses than results using the Probability Estimate Model. These findings recommend using the Monte Carlo Simulation for determining individual categorizer classification. PMID:21402410
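The Monte Carlo approach to classifying categorizers can be illustrated with a run-length statistic on a sequential touching record; the touch sequence and the statistic below are hypothetical simplifications, not the exact procedure used in the study.

```python
import random

def mean_run_length(seq):
    """Mean length of runs of consecutive touches to the same category."""
    runs, length = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return sum(runs) / len(runs)

def monte_carlo_p(observed_seq, trials=5000, seed=3):
    """Probability that a random touch order yields a mean run length
    at least as large as the observed one (permutation test)."""
    random.seed(seed)
    stat = mean_run_length(observed_seq)
    pool = list(observed_seq)
    hits = 0
    for _ in range(trials):
        random.shuffle(pool)
        hits += mean_run_length(pool) >= stat
    return hits / trials

# Hypothetical touch sequence over two object categories, A and B
seq = list("AAAABBBBAABB")
p = monte_carlo_p(seq)
```

A small p suggests the child's touches cluster by category more than chance order would produce, which is the logic behind classifying an individual child as a categorizer.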
NASA Astrophysics Data System (ADS)
Xu, L.; Schull, M. A.; Samanta, A.; Myneni, R. B.; Knyazikhin, Y.
2010-12-01
The concept of canopy spectral invariants expresses the observation that simple algebraic combinations of leaf and canopy spectral reflectance become wavelength independent and determine two canopy structure specific variables - the recollision and escape probabilities. These variables specify an accurate relationship between the spectral response of a vegetation canopy to incident solar radiation at the leaf and the canopy scale. They are sensitive to important structural features of the canopy such as forest cover, tree density, leaf area index, crown geometry, forest type and stand age. The canopy spectral invariant behavior is a very strong effect clearly seen in optical remote sensing data. The relative simplicity of retrieving the spectral invariants, however, is accompanied by considerable difficulties in their interpretation due to the lack of models for these parameters. We use the stochastic radiative transfer equation to relate the spectral invariants to the 3D canopy structure. The stochastic radiative transfer model treats the vegetation canopy as a stochastic medium. It expresses the 3D spatial correlation with the use of the pair correlation function, which plays a key role in measuring the spatial correlation of the 3D canopy structure over a wide range of scales. Data analysis from a simulated single bush to the comprehensive forest canopy is presented for both the passive and active (lidar) remote sensing domains.
Yu Meiling; Xu Mingmei; Liu Lianshou; Liu Zhengyou
2009-12-15
The quantitative dependence of the quark-gluon plasma (QGP) formation probability (P_QGP) on the centrality of Au-Au collisions is studied using a bond percolation model. P_QGP versus the maximum distance S_max for a bond to form is calculated from the model for various nuclei, and the P_QGP at different centralities of Au-Au collisions for a given S_max are obtained therefrom. The experimental data of the nuclear modification factor R_AA(p_T) for the most central Au-Au collisions at √(s_NN) = 200 and 130 GeV are utilized to transform S_max to √(s_NN). The P_QGP for different centralities of Au-Au collisions at these two energies are thus obtained, which is useful for correctly understanding the centrality dependence of the experimental data.
Model assisted probability of detection for a guided waves based SHM technique
NASA Astrophysics Data System (ADS)
Memmolo, V.; Ricci, F.; Maio, L.; Boffa, N. D.; Monaco, E.
2016-04-01
Guided wave (GW) Structural Health Monitoring (SHM) allows the health of aerostructures to be assessed thanks to its great sensitivity to the appearance of delaminations and/or debondings. Due to the several complexities affecting wave propagation in composites, an efficient GW SHM system requires effective quantification associated with a rigorous statistical evaluation procedure. The Probability of Detection (POD) approach is a commonly accepted measurement method to quantify NDI results, and it can be effectively extended to an SHM context. However, it requires a very complex setup arrangement and many coupons. When a rigorous correlation with measurements is adopted, Model Assisted POD (MAPOD) is an efficient alternative to classic methods. This paper is concerned with the identification of small emerging delaminations in composite structural components. An ultrasonic GW tomography focused on impact damage detection in composite plate-like structures, recently developed by the authors, is investigated, providing the basis for a more complex MAPOD analysis. Experimental tests carried out on a typical wing composite structure demonstrated the effectiveness of the modeling approach in detecting damage with the tomographic algorithm. Environmental disturbances, which affect signal waveforms and consequently damage detection, are considered by simulating mathematical noise in the modeling stage. A statistical method is used for an effective decision-making procedure. A Damage Index approach is implemented as the metric to interpret the signals collected from a distributed sensor network, and a subsequent graphic interpolation is carried out to reconstruct the damage appearance. A model validation and first reliability assessment results are provided, in view of performance system quantification and its optimization.
NASA Astrophysics Data System (ADS)
Mazas, Franck; Hamm, Luc; Kergadallan, Xavier
2013-04-01
In France, the storm Xynthia of February 27-28th, 2010 reminded engineers and stakeholders of the necessity for an accurate estimation of extreme sea levels for risk assessment in coastal areas. Traditionally, two main approaches exist for the statistical extrapolation of extreme sea levels: the direct approach performs a direct extrapolation on the sea level data, while the indirect approach carries out a separate analysis of the deterministic component (astronomical tide) and the stochastic component (meteorological residual, or surge). When the tidal component is large compared with the surge one, the latter approach is known to perform better. In this approach, the statistical extrapolation is performed on the surge component, then the distribution of extreme sea levels is obtained by convolution of the tide and surge distributions. This model is often referred to as the Joint Probability Method. Different models from univariate extreme value theory have been applied in the past for extrapolating extreme surges, in particular the Annual Maxima Method (AMM) and the r-largest method. In this presentation, we apply the Peaks-Over-Threshold (POT) approach for declustering extreme surge events, coupled with the Poisson-GPD model for fitting extreme surge peaks. This methodology allows a sound estimation of both the lower and upper tails of the stochastic distribution, including the estimation of the uncertainties associated with the fit by computing confidence intervals. After convolution with the tide signal, the model yields the distribution for the whole range of possible sea level values. Particular attention is paid to the necessary distinction between sea level values observed at a regular time step, such as hourly, and sea level events, such as those occurring during a storm. Extremal indices for both surges and levels are thus introduced. This methodology will be illustrated with a case study at Brest, France.
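A sketch of the POT step described above: a simple declustering pass followed by a method-of-moments GPD fit to the threshold excesses (the Poisson-GPD approach is normally fit by maximum likelihood; moments are used here only to keep the example self-contained). The surge series is synthetic.

```python
import random
import statistics

def decluster(series, threshold, gap=3):
    """Peaks-over-threshold declustering: keep one peak per cluster,
    where clusters end after `gap` consecutive sub-threshold values."""
    peaks, cluster, below = [], [], 0
    for x in series:
        if x > threshold:
            cluster.append(x)
            below = 0
        else:
            below += 1
            if cluster and below >= gap:
                peaks.append(max(cluster))
                cluster = []
    if cluster:
        peaks.append(max(cluster))
    return peaks

def gpd_fit_moments(excesses):
    """Method-of-moments estimates of the GPD shape (xi) and scale
    (sigma) for threshold excesses."""
    m = statistics.mean(excesses)
    v = statistics.variance(excesses)
    xi = 0.5 * (1 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1)
    return xi, sigma

random.seed(7)
surge = [random.expovariate(1.0) for _ in range(20000)]  # synthetic residuals
peaks = decluster(surge, threshold=2.0)
xi, sigma = gpd_fit_moments([pk - 2.0 for pk in peaks])
```

On exponential data the excesses above any threshold are again exponential, i.e. GPD with shape near zero, which gives a simple check of the fitting step before applying it to real surge residuals.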
A Mechanistic Beta-Binomial Probability Model for mRNA Sequencing Data
Smith, Gregory R.; Birtwistle, Marc R.
2016-01-01
A main application for mRNA sequencing (mRNAseq) is determining lists of differentially-expressed genes (DEGs) between two or more conditions. Several software packages exist to produce DEGs from mRNAseq data, but they typically yield different DEGs, sometimes markedly so. The underlying probability model used to describe mRNAseq data is central to deriving DEGs, and not surprisingly most softwares use different models and assumptions to analyze mRNAseq data. Here, we propose a mechanistic justification to model mRNAseq as a binomial process, with data from technical replicates given by a binomial distribution, and data from biological replicates well-described by a beta-binomial distribution. We demonstrate good agreement of this model with two large datasets. We show that an emergent feature of the beta-binomial distribution, given parameter regimes typical for mRNAseq experiments, is the well-known quadratic polynomial scaling of variance with the mean. The so-called dispersion parameter controls this scaling, and our analysis suggests that the dispersion parameter is a continually decreasing function of the mean, as opposed to current approaches that impose an asymptotic value to the dispersion parameter at moderate mean read counts. We show how this leads to current approaches overestimating variance for moderately to highly expressed genes, which inflates false negative rates. Describing mRNAseq data with a beta-binomial distribution thus may be preferred since its parameters are relatable to the mechanistic underpinnings of the technique and may improve the consistency of DEG analysis across softwares, particularly for moderately to highly expressed genes. PMID:27326762
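The beta-binomial model and its overdispersion relative to the binomial can be illustrated directly; parameter values are arbitrary.

```python
import math

def beta_fn(x, y):
    """Beta function via log-gamma for numerical stability."""
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

def beta_binomial_pmf(k, n, a, b):
    """P(K = k) for n trials with a Beta(a, b) prior on the per-trial
    success probability (overdispersed relative to the binomial)."""
    return math.comb(n, k) * beta_fn(k + a, n - k + b) / beta_fn(a, b)

n, a, b = 20, 2.0, 5.0
pmf = [beta_binomial_pmf(k, n, a, b) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))
```

The variance n·p·(1-p)·(a+b+n)/(a+b+1), with p = a/(a+b), always exceeds the binomial variance n·p·(1-p); this extra-binomial term is what produces the quadratic variance-mean scaling discussed in the abstract.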
Lancet, D; Sadovsky, E; Seidemann, E
1993-04-15
A generalized phenomenological model is presented for stereospecific recognition between biological receptors and their ligands. We ask what is the distribution of binding constants psi(K) between an arbitrary ligand and members of a large receptor repertoire, such as immunoglobulins or olfactory receptors. For binding surfaces with B potential subsite and S different types of subsite configurations, the number of successful elementary interactions obeys a binomial distribution. The discrete probability function psi(K) is then derived with assumptions on alpha, the free energy contribution per elementary interaction. The functional form of psi(K) may be universal, although the parameter values could vary for different ligand types. An estimate of the parameter values of psi(K) for iodovanillin, an analog of odorants and immunological haptens, is obtained by equilibrium dialysis experiments with nonimmune antibodies. Based on a simple relationship, predicted by the model, between the size of a receptor repertoire and its average maximal affinity toward an arbitrary ligand, the size of the olfactory receptor repertoire (Nolf) is calculated as 300-1000, in very good agreement with recent molecular biological studies. A very similar estimate, Nolf = 500, is independently derived by relating a theoretical distribution of maxima for psi(K) with published human olfactory threshold variations. The present model also has implications to the question of olfactory coding and to the analysis of specific anosmias, genetic deficits in perceiving particular odorants. More generally, the proposed model provides a better understanding of ligand specificity in biological receptors and could help in understanding their evolution. PMID:8475121
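A sketch of the model's central construction: the number of successful elementary interactions b out of B subsites is binomial, and each contributes a free energy increment α, giving K = exp(α·b); the parameter values below are illustrative, not fitted to the iodovanillin data.

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability of k successes in n trials."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def psi_K(B, p_match, alpha):
    """Discrete distribution of binding constants psi(K): b successful
    subsite interactions out of B is binomial, and each contributes a
    free energy alpha, so K = exp(alpha * b).
    Returns a list of (K, probability) pairs."""
    return [(math.exp(alpha * b), binom_pmf(b, B, p_match))
            for b in range(B + 1)]

dist = psi_K(B=10, p_match=0.3, alpha=1.2)
```

Because K grows exponentially in b while the binomial weights fall off, the resulting psi(K) has a long upper tail, which is what links repertoire size to the expected maximal affinity in the abstract's argument.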
A Mechanistic Beta-Binomial Probability Model for mRNA Sequencing Data.
Smith, Gregory R; Birtwistle, Marc R
2016-01-01
A main application of mRNA sequencing (mRNAseq) is determining lists of differentially-expressed genes (DEGs) between two or more conditions. Several software packages exist to produce DEGs from mRNAseq data, but they typically yield different DEGs, sometimes markedly so. The underlying probability model used to describe mRNAseq data is central to deriving DEGs, and not surprisingly most packages use different models and assumptions to analyze mRNAseq data. Here, we propose a mechanistic justification to model mRNAseq as a binomial process, with data from technical replicates given by a binomial distribution, and data from biological replicates well-described by a beta-binomial distribution. We demonstrate good agreement of this model with two large datasets. We show that an emergent feature of the beta-binomial distribution, given parameter regimes typical for mRNAseq experiments, is the well-known quadratic polynomial scaling of variance with the mean. The so-called dispersion parameter controls this scaling, and our analysis suggests that the dispersion parameter is a continually decreasing function of the mean, as opposed to current approaches that impose an asymptotic value on the dispersion parameter at moderate mean read counts. We show how this leads current approaches to overestimate variance for moderately to highly expressed genes, which inflates false negative rates. Describing mRNAseq data with a beta-binomial distribution thus may be preferred, since its parameters are relatable to the mechanistic underpinnings of the technique and may improve the consistency of DEG analysis across software packages, particularly for moderately to highly expressed genes. PMID:27326762
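The quadratic mean-variance scaling this abstract describes can be checked directly against the beta-binomial distribution. The sketch below uses SciPy with illustrative parameters (n, a, b are not taken from the paper's datasets): with p = a/(a+b) and overdispersion rho = 1/(a+b+1), the variance is the binomial value times a factor linear in n, i.e. quadratic in the mean.

```python
import numpy as np
from scipy.stats import betabinom

# Illustrative check: for X ~ BetaBinomial(n, a, b),
#   Var[X] = n*p*(1-p) * (1 + (n-1)*rho),
# i.e. the binomial variance inflated by a dispersion term that grows with n.
n, a, b = 10_000, 2.0, 200.0
p, rho = a / (a + b), 1.0 / (a + b + 1.0)

mean = betabinom.mean(n, a, b)
var = betabinom.var(n, a, b)
assert np.isclose(mean, n * p)
assert np.isclose(var, n * p * (1 - p) * (1 + (n - 1) * rho))
# For rho > 0 the variance exceeds the pure-binomial value n*p*(1-p):
assert var > n * p * (1 - p)
```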
A generic probability based model to derive regional patterns of crops in time and space
NASA Astrophysics Data System (ADS)
Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan
2015-04-01
Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influencing soil erosion, and contributing substantially to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to site conditions, economic boundary settings, and the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and a given year depends on (a) the suitability of the land for the cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and (b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g. a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated on the basis of the given statistics reported by the joint EU/CAPRI database. The assessment is given at NUTS2 level using per cent bias as a measure, with a threshold of 15% as the minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error prone as crops that occupy only minor parts of the unit. However, roughly 40% still show an absolute per cent bias above the 15% threshold. This
An EEG-Based Fuzzy Probability Model for Early Diagnosis of Alzheimer's Disease.
Chiang, Hsiu-Sen; Pao, Shun-Chi
2016-05-01
Alzheimer's disease is a degenerative brain disease that results in cardinal memory deterioration and significant cognitive impairments. The early treatment of Alzheimer's disease can significantly reduce deterioration. Early diagnosis is difficult, and early symptoms are frequently overlooked. While much of the literature focuses on disease detection, the use of electroencephalography (EEG) in Alzheimer's diagnosis has received relatively little attention. This study combines the fuzzy and associative Petri net methodologies to develop a model for the effective and objective detection of Alzheimer's disease. Differences in EEG patterns between normal subjects and Alzheimer patients are used to establish prediction criteria for Alzheimer's disease, potentially providing physicians with a reference for early diagnosis, allowing for early action to delay the disease progression. PMID:27059738
Predictions of the solar wind speed by the probability distribution function model
NASA Astrophysics Data System (ADS)
Bussy-Virat, C. D.; Ridley, A. J.
2014-06-01
The near-Earth space environment is strongly driven by the solar wind and interplanetary magnetic field. This study presents a model for predicting the solar wind speed up to 5 days in advance. Probability distribution functions (PDFs) were created that relate the current solar wind speed and slope to the future solar wind speed, as well as the solar wind speed to the solar wind speed one solar rotation in the future. It was found that a major limitation of this type of technique is that the solar wind periodicity is close to 27 days but can be from about 22 to 32 days. Further, the optimum lag between two solar rotations can change from day to day, making a prediction of the future solar wind speed based solely on the solar wind speed approximately 27 days ago quite difficult. It was found that using a linear combination of the solar wind speed one solar rotation ago and a prediction of the solar wind speed based on the current speed and slope is optimal. The linear weights change as a function of the prediction horizon, with shorter prediction times putting more weight on the prediction based on the current solar wind speed and the longer prediction times based on an even spread between the two. For all prediction horizons from 8 h up to 120 h, the PDF Model is shown to be better than using the current solar wind speed (i.e., persistence), and better than the Wang-Sheeley-Arge Model for prediction horizons of 24 h.
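The blending described in this abstract can be sketched as a horizon-dependent weighted average of the two predictors. The weight function below is a stand-in chosen only to reproduce the stated qualitative behavior (short horizons favor the current-speed-and-slope predictor, long horizons an even spread); it is not the paper's fitted weighting.

```python
# Minimal sketch of the PDF Model's final combination step, with an assumed
# linear weight schedule w(h) decaying from ~1 at h=0 to 0.5 at h=120 hours.
def blended_speed(pred_current_slope, speed_one_rotation_ago, horizon_h):
    """Blend two solar wind speed predictors (km/s) for a horizon in hours."""
    w = max(0.5, 1.0 - 0.5 * horizon_h / 120.0)  # illustrative, not the paper's weights
    return w * pred_current_slope + (1.0 - w) * speed_one_rotation_ago

# Short horizons lean on the current-conditions predictor; long horizons
# move toward an even split with the ~27-day recurrence predictor.
assert blended_speed(400.0, 600.0, 8) < blended_speed(400.0, 600.0, 120)
```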
NASA Astrophysics Data System (ADS)
Peng, Guanghan; Liu, Changqing; Tuo, Manxian
2015-10-01
In this paper, a new lattice model is proposed with the traffic interruption probability term in two-lane traffic system. The linear stability condition and the mKdV equation are derived from linear stability analysis and nonlinear analysis by introducing the traffic interruption probability of optimal current for two-lane traffic freeway, respectively. Numerical simulation shows that the traffic interruption probability corresponding to high reaction coefficient can efficiently improve the stability of two-lane traffic flow as traffic interruption occurs with lane changing.
Gomberg, J.; Felzer, K.
2008-01-01
We have used observations from Felzer and Brodsky (2006) of the variation of linear aftershock densities (i.e., aftershocks per unit length) with the magnitude of, and distance from, the main shock fault to derive constraints on how the probability of a main shock triggering a single aftershock at a point, P(r, D), varies as a function of distance, r, and main shock rupture dimension, D. We find that P(r, D) becomes independent of D as the triggering fault is approached. When r >> D, P(r, D) scales as D^m where m ≈ 2 and decays with distance approximately as r^(-n) with n = 2, with a possible change to r^(-(n-1)) at r > h, where h is the closest distance between the fault and the boundaries of the seismogenic zone. These constraints may be used to test hypotheses about the types of deformations and mechanisms that trigger aftershocks. We illustrate this using dynamic deformations (i.e., radiated seismic waves) and a posited proportionality with P(r, D). Deformation characteristics examined include peak displacements, peak accelerations and velocities (proportional to strain rates and strains, respectively), and two measures that account for cumulative deformations. Our model indicates that either peak strains alone or strain rates averaged over the duration of rupture may be responsible for aftershock triggering.
Volkov, M. V.; Ostrovsky, V. N.
2007-02-15
Multistate generalizations of the Landau-Zener model are studied by summing entire perturbation theory series. A technique for the analysis of these series is developed. Analytical expressions for the probabilities of survival on the diabatic potential curves with extreme slope are proved. Degenerate situations are considered in which there are several potential curves with extreme slope. Expressions for some state-to-state transition probabilities are derived in the degenerate cases.
Terrestrial Food-Chain Model for Normal Operations.
Energy Science and Technology Software Center (ESTSC)
1991-10-01
Version 00 TERFOC-N calculates radiation doses to the public due to atmospheric releases of radionuclides during normal operations of nuclear facilities. The code estimates the highest individual dose and the collective dose from four exposure pathways: internal doses from ingestion and inhalation, and external doses from cloudshine and groundshine.
2015-01-01
The asymptotic behavior of the recovery probability for the dual renewal risk model with constant interest and debit force is studied. Using the idea of the Markov skeleton method, we study the times at which the random premium incomes occur and transform the continuous-time model into a discrete-time model. By investigating the fluctuations of this discrete-time model, we obtain the asymptotic behavior when the random premium income belongs to a class of heavy-tailed distributions.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.
ERIC Educational Resources Information Center
Edwards, William F.; Shiflett, Ray C.; Shultz, Harris
2008-01-01
The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…
A Collision Probability Model of Portal Vein Tumor Thrombus Formation in Hepatocellular Carcinoma
Xiong, Fei
2015-01-01
Hepatocellular carcinoma is one of the most common malignancies worldwide, with a high risk of portal vein tumor thrombus (PVTT). Some promising results have been achieved for venous metastases of hepatocellular carcinoma; however, the etiology of PVTT is largely unknown, and it is unclear why the incidence of PVTT is not proportional to its distance from the carcinoma. We attempted to address this issue using physical concepts and mathematical tools. Finally, we discuss the relationship between the probability of a collision event and the microenvironment of the PVTT. Our formulae suggest that the collision probability can alter the tumor microenvironment by increasing the number of tumor cells. PMID:26131562
Latent Partially Ordered Classification Models and Normal Mixtures
ERIC Educational Resources Information Center
Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith
2013-01-01
Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…
Random forest models for the probable biological condition of streams and rivers in the USA
The National Rivers and Streams Assessment (NRSA) is a probability based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...
Spatial prediction models for the probable biological condition of streams and rivers in the USA
The National Rivers and Streams Assessment (NRSA) is a probability-based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...
ERIC Educational Resources Information Center
Rasanen, Okko
2011-01-01
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this…
A normal tissue dose response model of dynamic repair processes
NASA Astrophysics Data System (ADS)
Alber, Markus; Belka, Claus
2006-01-01
A model is presented for serial, critical element complication mechanisms for irradiated volumes from length scales of a few millimetres up to the entire organ. The central element of the model is the description of radiation complication as the failure of a dynamic repair process. The nature of the repair process is seen as reestablishing the structural organization of the tissue, rather than mere replenishment of lost cells. The interactions between the cells, such as migration, involved in the repair process are assumed to have finite ranges, which limits the repair capacity and is the defining property of a finite-sized reconstruction unit. Since the details of the repair processes are largely unknown, the development aims to make the most general assumptions about them. The model employs analogies and methods from thermodynamics and statistical physics. An explicit analytical form of the dose response of the reconstruction unit for total, partial and inhomogeneous irradiation is derived. The use of the model is demonstrated with data from animal spinal cord experiments and clinical data about heart, lung and rectum. The three-parameter model lends a new perspective to the equivalent uniform dose formalism and the established serial and parallel complication models. Its implications for dose optimization are discussed.
Technology Transfer Automated Retrieval System (TEKTRAN)
Staphylococcus aureus is a foodborne pathogen widespread in the environment and found in various food products. This pathogen can produce enterotoxins that cause illnesses in humans. The objectives of this study were to develop a probability model of S. aureus enterotoxin production as affected by w...
Carney, J.H.; DeAngelis, D.L.; Gardner, R.H.; Mankin, J.B.; Post, W.M.
1981-02-01
Six indices are presented for linear compartment systems that quantify the probable pathways of matter or energy transfer, the likelihood of recurrence if the model contains feedback loops, and the number of steps (transfers) through the system. General examples are used to illustrate how these indices can simplify the comparison of complex systems or organisms in unrelated systems.
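Indices of the kind this abstract describes (probable transfer pathways, recurrence under feedback, number of steps through the system) can be computed from the fundamental matrix of a substochastic transfer matrix. The sketch below uses an illustrative three-compartment matrix, not one from the paper.

```python
import numpy as np

# Sketch: Q[i, j] is the probability per step that a unit in compartment i
# transfers to compartment j (rows sum to < 1; the remainder leaves the system).
# The fundamental matrix N = (I - Q)^-1 gives expected visits to each
# compartment, even with feedback loops; row sums give the expected total
# number of compartment occupancies before the unit exits.
Q = np.array([[0.0, 0.6, 0.2],
              [0.3, 0.0, 0.5],   # feedback loop between compartments 0 and 1
              [0.0, 0.0, 0.0]])  # compartment 2 only exports from the system
N = np.linalg.inv(np.eye(3) - Q)
expected_steps = N.sum(axis=1)   # expected occupancies starting in each compartment
```

Because compartment 2 transfers nothing onward, a unit starting there is counted exactly once before leaving, while units starting in compartments 0 or 1 recur through the feedback loop and accumulate more expected occupancies.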
NASA Astrophysics Data System (ADS)
Baran, Sándor; Lerch, Sebastian
2015-07-01
Ensembles of forecasts are obtained from multiple runs of numerical weather forecasting models with different initial conditions and are typically employed to account for forecast uncertainties. However, forecast ensembles often exhibit biases and dispersion errors; they are usually under-dispersive and uncalibrated and therefore require statistical post-processing. We present an Ensemble Model Output Statistics (EMOS) method for calibration of wind speed forecasts based on the log-normal (LN) distribution, and we also show a regime-switching extension of the model which combines the previously studied truncated normal (TN) distribution with the LN. Both presented models are applied to wind speed forecasts of the eight-member University of Washington mesoscale ensemble, of the fifty-member ECMWF ensemble and of the eleven-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and their predictive performances are compared to those of the TN and generalized extreme value (GEV) distribution based EMOS methods and to the TN-GEV mixture model. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison to the raw ensemble and to climatological forecasts. Further, the TN-LN mixture model outperforms the traditional TN method, and its predictive performance is able to keep up with the models utilizing the GEV distribution without assigning mass to negative values.
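One ingredient of an LN-based EMOS model can be sketched by moment matching: given a predictive mean and variance derived from the ensemble, the log-normal parameters mu and sigma follow in closed form. The affine link from ensemble statistics to (m, v) below is an assumption with placeholder coefficients; in the paper such coefficients are fitted (e.g. by CRPS minimization), not fixed.

```python
import math

# Sketch: map ensemble mean/variance to log-normal parameters by matching the
# first two moments. Coefficients a, b, c, d are illustrative placeholders.
def lognormal_params(ens_mean, ens_var, a=0.2, b=1.0, c=0.1, d=1.0):
    m = a + b * ens_mean          # assumed affine predictive mean
    v = c + d * ens_var           # assumed affine predictive variance
    sigma2 = math.log(1.0 + v / m**2)
    mu = math.log(m) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

mu, sigma = lognormal_params(8.0, 4.0)
# Moment matching: the implied LN mean exp(mu + sigma^2/2) equals m.
assert abs(math.exp(mu + 0.5 * sigma**2) - 8.2) < 1e-9
```

Unlike a truncated normal, the log-normal assigns no mass to negative wind speeds by construction, which is the motivation the abstract gives for the LN component.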
Regional Permafrost Probability Modelling in the northwestern Cordillera, 59°N - 61°N, Canada
NASA Astrophysics Data System (ADS)
Bonnaventure, P. P.; Lewkowicz, A. G.
2010-12-01
High resolution (30 x 30 m) permafrost probability models were created for eight mountainous areas in the Yukon and northernmost British Columbia. Empirical-statistical modelling based on the Basal Temperature of Snow (BTS) method was used to develop spatial relationships. Model inputs include equivalent elevation (a variable that incorporates non-uniform temperature change with elevation), potential incoming solar radiation and slope. Probability relationships between predicted BTS and permafrost presence were developed for each area using late-summer physical observations in pits, or by using year-round ground temperature measurements. A high-resolution spatial model for the region has now been generated based on seven of the area models. Each was applied to the entire region, and their predictions were then blended based on a distance decay function from the model source area. The regional model is challenging to validate independently because there are few boreholes in the region. However, a comparison of results to a recently established inventory of rock glaciers for the Yukon suggests its validity because predicted permafrost probabilities were 0.8 or greater for almost 90% of these landforms. Furthermore, the regional model results have a similar spatial pattern to those modelled independently in the eighth area, although predicted probabilities using the regional model are generally higher. The regional model predicts that permafrost underlies about half of the non-glaciated terrain in the region, with probabilities increasing regionally from south to north and from east to west. Elevation is significant, but not always linked in a straightforward fashion because of weak or inverted trends in permafrost probability below treeline. Above treeline, however, permafrost probabilities increase and approach 1.0 in very high elevation areas throughout the study region. The regional model shows many similarities to previous Canadian permafrost maps (Heginbottom
ERIC Educational Resources Information Center
Weatherly, Myra S.
1984-01-01
Instruction in mathematical probability to enhance higher levels of critical and creative thinking with gifted students is described. Among thinking skills developed by such an approach are analysis, synthesis, evaluation, fluency, and complexity. (CL)
Valve, explosive actuated, normally open, pyronetics model 1399
NASA Technical Reports Server (NTRS)
Avalos, E.
1971-01-01
Results of tests to evaluate the normally open valve, Model 1399, are reported for the following: proof pressure, leakage, actuation, disassembly, and burst pressure. It is concluded that the tests demonstrate the structural integrity of the valve.
NASA Astrophysics Data System (ADS)
Rosa, A. N. F.; Wiatr, P.; Cavdar, C.; Carvalho, S. V.; Costa, J. C. W. A.; Wosinska, L.
2015-11-01
In an Elastic Optical Network (EON), spectrum fragmentation refers to the existence of non-aligned, small-sized blocks of free subcarrier slots in the optical spectrum. Several metrics have been proposed in order to quantify the level of spectrum fragmentation. Approximation methods might be used for estimating average blocking probability and some fragmentation measures, but are so far unable to accurately evaluate the influence of different sizes of connection requests and do not allow in-depth investigation of blocking events and their relation to fragmentation. The analytical study of the effect of fragmentation on request blocking probability is still under-explored. In this work, we introduce new definitions for blocking that differentiate between the reasons for the blocking events. We developed a framework based on Markov modeling to calculate steady-state probabilities for the different blocking events and to analyze fragmentation-related problems in elastic optical links under dynamic traffic conditions. This framework can also be used for evaluation of different definitions of fragmentation in terms of their relation to the blocking probability. We investigate how different allocation request sizes contribute to fragmentation and blocking probability. Moreover, we show to what extent blocking events due to an insufficient amount of available resources become inevitable and, by comparison with the number of blocking events due to a fragmented spectrum, draw conclusions on the possible gains one can achieve by system defragmentation. We also show how effective spectrum allocation policies really are in reducing the portion of fragmentation that actually leads to blocking events. Simulation experiments are carried out showing a good match with our analytical results for blocking probability in a small-scale scenario. Simulated blocking probabilities for the different blocking events are provided for a larger-scale elastic optical link.
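The distinction between the two blocking causes can be made concrete with a small classifier over a slot-occupancy vector: a request for k contiguous slots either finds too little free capacity overall, or finds enough capacity that is too fragmented. This is an illustrative sketch of the definitions, not the paper's Markov framework.

```python
# Classify why a request for k contiguous subcarrier slots fails on a link
# whose spectrum is a 0/1 occupancy list (0 = free slot, 1 = occupied).
def blocking_cause(spectrum, k):
    free = spectrum.count(0)
    if free < k:
        return "insufficient_resources"
    # longest run of contiguous free slots
    longest = run = 0
    for s in spectrum:
        run = run + 1 if s == 0 else 0
        longest = max(longest, run)
    return "admitted" if longest >= k else "fragmentation"

assert blocking_cause([1, 0, 0, 1, 0, 1, 0], 3) == "fragmentation"  # 4 free, max block 2
assert blocking_cause([1, 1, 1, 0, 1, 1, 0], 3) == "insufficient_resources"
assert blocking_cause([0, 0, 0, 1, 1, 0, 1], 3) == "admitted"
```

Only the "fragmentation" outcomes are recoverable by defragmentation, which is the comparison the abstract draws.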
Presenting Thin Media Models Affects Women's Choice of Diet or Normal Snacks
ERIC Educational Resources Information Center
Krahe, Barbara; Krause, Christina
2010-01-01
Our study explored the influence of thin- versus normal-size media models and of self-reported restrained eating behavior on women's observed snacking behavior. Fifty female undergraduates saw a set of advertisements for beauty products showing either thin or computer-altered normal-size female models, allegedly as part of a study on effective…
Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A
2015-05-01
As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs. PMID:25147970
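The test-negative logic can be sketched with a small Monte Carlo simulation: VE is estimated as one minus the vaccination odds ratio comparing influenza-positive to influenza-negative ARI patients. All rates below are illustrative, and true VE is 1 - 0.4 = 0.6 by construction; the estimator approximately recovers it when vaccination leaves non-influenza ARI untouched, as the abstract states.

```python
import random

# Illustrative test-negative design simulation (rates are assumptions).
random.seed(1)
p_vax, p_flu_unvax, p_nonflu = 0.5, 0.02, 0.05
rr = 0.4  # relative risk of influenza ARI in vaccinees -> true VE = 0.6

cells = {("vax", "flu"): 0, ("vax", "neg"): 0,
         ("unvax", "flu"): 0, ("unvax", "neg"): 0}
for _ in range(400_000):
    v = random.random() < p_vax
    if random.random() < p_flu_unvax * (rr if v else 1.0):
        cells[("vax" if v else "unvax", "flu")] += 1
    elif random.random() < p_nonflu:   # non-influenza ARI, independent of vaccination
        cells[("vax" if v else "unvax", "neg")] += 1

odds_ratio = (cells[("vax", "flu")] * cells[("unvax", "neg")]) / (
    cells[("unvax", "flu")] * cells[("vax", "neg")])
ve_estimate = 1.0 - odds_ratio
assert abs(ve_estimate - 0.6) < 0.08   # close to truth (OR approximates RR for rare illness)
```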
NASA Astrophysics Data System (ADS)
Tan, Elcin
A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the
2-D Model for Normal and Sickle Cell Blood Microcirculation
NASA Astrophysics Data System (ADS)
Tekleab, Yonatan; Harris, Wesley
2011-11-01
Sickle cell disease (SCD) is a genetic disorder that alters the red blood cell (RBC) structure and function such that hemoglobin (Hb) cannot effectively bind and release oxygen. Previous computational models have been designed to study the microcirculation for insight into blood disorders such as SCD. Our novel 2-D computational model represents a fast, time efficient method developed to analyze flow dynamics, O2 diffusion, and cell deformation in the microcirculation. The model uses a finite difference, Crank-Nicholson scheme to compute the flow and O2 concentration, and the level set computational method to advect the RBC membrane on a staggered grid. Several sets of initial and boundary conditions were tested. Simulation data indicate a few parameters to be significant in the perturbation of the blood flow and O2 concentration profiles. Specifically, the Hill coefficient, arterial O2 partial pressure, O2 partial pressure at 50% Hb saturation, and cell membrane stiffness are significant factors. Results were found to be consistent with those of Le Floch [2010] and Secomb [2006].
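The sensitivity to the Hill coefficient and the P50 (O2 partial pressure at 50% Hb saturation) noted above rests on the Hill equation for hemoglobin-oxygen binding. The parameter values below are typical textbook numbers for normal adult hemoglobin, not values from this model.

```python
# Hill equation for fractional hemoglobin saturation; pressures in mmHg.
# n (Hill coefficient) and p50 are illustrative defaults, not the paper's inputs.
def hb_saturation(pO2, n=2.7, p50=26.0):
    return pO2**n / (p50**n + pO2**n)

assert abs(hb_saturation(26.0) - 0.5) < 1e-12   # 50% saturated at P50
assert hb_saturation(100.0) > 0.95              # near-full saturation at arterial pO2
```

Raising P50 or lowering n flattens and right-shifts the curve, which is one way parameter perturbations of the kind listed in the abstract change the O2 concentration profile.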
Adamovich, Igor V.
2014-04-15
A three-dimensional, nonperturbative, semiclassical analytic model of vibrational energy transfer in collisions between a rotating diatomic molecule and an atom, and between two rotating diatomic molecules (Forced Harmonic Oscillator-Free Rotation model) has been extended to incorporate rotational relaxation and coupling between vibrational, translational, and rotational energy transfer. The model is based on analysis of semiclassical trajectories of rotating molecules interacting by a repulsive exponential atom-to-atom potential. The model predictions are compared with the results of three-dimensional close-coupled semiclassical trajectory calculations using the same potential energy surface. The comparison demonstrates good agreement between analytic and numerical probabilities of rotational and vibrational energy transfer processes, over a wide range of total collision energies, rotational energies, and impact parameters. The model predicts probabilities of single-quantum and multi-quantum vibrational-rotational transitions and is applicable up to very high collision energies and quantum numbers. Closed-form analytic expressions for these transition probabilities lend themselves to straightforward incorporation into DSMC nonequilibrium flow codes.
NASA Astrophysics Data System (ADS)
Wang, Zhengzi
2015-08-01
Variation in ambient temperature poses a major challenge to robust infrared face recognition. This paper proposes a new ambient temperature normalization algorithm to improve the performance of infrared face recognition under variable ambient temperatures. Based on statistical regression theory, a second-order polynomial model is learned to describe the impact of ambient temperature on infrared face images. Infrared images are then normalized to a reference ambient temperature using the second-order polynomial model. Finally, the normalization method is applied to infrared face recognition to verify its efficiency. The experiments demonstrate that the proposed temperature normalization method is feasible and can significantly improve the robustness of infrared face recognition.
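The normalization step can be sketched as follows: learn a quadratic fit of intensity against ambient temperature, then shift an observation taken at temperature T to the reference temperature. The synthetic coefficients, noise level, and reference temperature below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of second-order polynomial temperature normalization (synthetic data).
rng = np.random.default_rng(0)
T = np.linspace(15.0, 35.0, 50)              # ambient temperature samples (deg C)
intensity = 0.02 * T**2 + 0.5 * T + 100.0 + rng.normal(0, 0.1, T.size)

coeffs = np.polyfit(T, intensity, deg=2)     # learn the second-order model
T_ref = 25.0                                 # assumed reference temperature

def normalize(pixel_value, T_obs):
    """Shift a pixel observed at T_obs to the reference ambient temperature."""
    return pixel_value - np.polyval(coeffs, T_obs) + np.polyval(coeffs, T_ref)

# A pixel following the model at 35 deg C maps onto its 25 deg C reference value.
assert abs(normalize(np.polyval(coeffs, 35.0), 35.0) - np.polyval(coeffs, T_ref)) < 1e-9
```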
NASA Astrophysics Data System (ADS)
Fan, Niannian; Singh, Arvind; Guala, Michele; Foufoula-Georgiou, Efi; Wu, Baosheng
2016-04-01
Bed load transport is a highly stochastic, multiscale process, where particle advection and diffusion regimes are governed by the dynamics of each sediment grain during its motion and resting states. Having a quantitative understanding of the macroscale behavior emerging from the microscale interactions is important for proper model selection in the absence of individual grain-scale observations. Here we develop a semimechanistic sediment transport model based on individual particle dynamics, which incorporates the episodic movement (steps separated by rests) of sediment particles, and study their macroscale behavior. By incorporating different types of probability distribution functions (PDFs) of particle resting times Tr, under the assumption of a thin-tailed PDF of particle velocities, we study the emergent behavior of particle advection and diffusion regimes across a wide range of spatial and temporal scales. For exponential PDFs of resting times Tr, we observe normal advection and diffusion at long time scales. For a power-law PDF of resting times (i.e., f(Tr) ~ Tr^(-ν)), the tail thickness parameter ν is observed to affect both the advection regimes (subadvective and normally advective) and the diffusion regimes (subdiffusive and superdiffusive). By comparing our semimechanistic model with two random walk models in the literature, we further suggest that in order to reproduce accurately the emerging diffusive regimes, the resting time model has to be coupled with a particle motion model able to produce finite particle velocities during steps, as the episodic model discussed here.
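The episodic step-and-rest dynamics can be caricatured in a few lines of simulation; the step lengths, rates, and tail exponents below are arbitrary illustrative choices, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_positions(n_particles, t_total, resting_sampler,
                       step_len=1.0, step_time=1.0):
    """Episodic transport: each particle alternates a moving phase (a step of
    finite duration, hence finite velocity) with a rest drawn from
    `resting_sampler`; returns particle positions at time t_total."""
    pos = np.zeros(n_particles)
    for i in range(n_particles):
        t = x = 0.0
        while t < t_total:
            t += step_time          # moving phase
            x += step_len
            t += resting_sampler()  # resting phase
        pos[i] = x
    return pos

# Thin-tailed (exponential) resting times: normal advection/diffusion.
exp_rest = lambda: rng.exponential(2.0)
# Power-law resting times f(Tr) ~ Tr^(-nu): classical Pareto draw with
# tail exponent nu = a + 1 = 2.5.
a = 1.5
pl_rest = lambda: rng.pareto(a) + 1.0

x_exp = simulate_positions(300, 200.0, exp_rest)
x_pl = simulate_positions(300, 200.0, pl_rest)
```

Comparing the mean and spread of `x_exp` and `x_pl` over increasing `t_total` is the kind of diagnostic used to classify advection and diffusion regimes.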
A two-stage approach in solving the state probabilities of the multi-queue M/G/1 model
NASA Astrophysics Data System (ADS)
Chen, Mu-Song; Yen, Hao-Wei
2016-04-01
The M/G/1 model is the fundamental basis of the queueing system in many network systems. Usually, the study of the M/G/1 model is limited by the assumptions of a single queue and infinite capacity. In practice, however, these postulations may not be valid, particularly when dealing with many real-world problems. In this paper, a two-stage state-space approach is developed to solve the state probabilities for the multi-queue finite-capacity M/G/1 model, i.e. q-M/G/1/Ki with Ki buffers in the ith queue. The state probabilities at departure instants are determined by solving a set of state transition equations. Afterward, an embedded Markov chain analysis is applied to derive the state probabilities at arbitrary time instants from another set of state balance equations. Closed forms of the state probabilities are also presented, with theorems for reference. Applications of Little's theorem further yield the corresponding queue lengths and average waiting times. Simulation experiments demonstrate the correctness of the proposed approaches.
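As a rough numerical cross-check on closed forms of this kind, the time-average state probabilities of a single finite-capacity M/G/1/K queue can be estimated by discrete-event simulation (a sketch with assumed parameters; the helper below is illustrative and is not the paper's two-stage method):

```python
import random

def simulate_mg1k(lam, service_sampler, K, n_arrivals, seed=0):
    """Discrete-event M/G/1/K queue: Poisson arrivals at rate lam, general
    service times from service_sampler, capacity K (queue + in service).
    Returns time-average probabilities of states 0..K."""
    rng = random.Random(seed)
    t = 0.0
    n = 0                               # customers in system
    next_arrival = rng.expovariate(lam)
    next_depart = float("inf")
    time_in_state = [0.0] * (K + 1)
    arrivals = 0
    while arrivals < n_arrivals:
        t_next = min(next_arrival, next_depart)
        time_in_state[n] += t_next - t
        t = t_next
        if next_arrival <= next_depart:     # arrival event
            arrivals += 1
            if n < K:                       # otherwise blocked and lost
                n += 1
                if n == 1:
                    next_depart = t + service_sampler(rng)
            next_arrival = t + rng.expovariate(lam)
        else:                               # departure event
            n -= 1
            next_depart = (t + service_sampler(rng)) if n > 0 else float("inf")
    total = sum(time_in_state)
    return [ti / total for ti in time_in_state]

# Example: deterministic (M/D/1/K) service, utilization 0.8, buffer K = 5.
probs = simulate_mg1k(0.8, lambda rng: 1.0, K=5, n_arrivals=100000)
```

By PASTA, these time-average probabilities also equal the distribution seen by arriving customers, which is what an embedded-chain analysis recovers analytically.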
Echoes from anharmonic normal modes in model glasses.
Burton, Justin C; Nagel, Sidney R
2016-03-01
Glasses display a wide array of nonlinear acoustic phenomena at temperatures T ≲ 1 K. This behavior has traditionally been explained by an ensemble of weakly coupled, two-level tunneling states, a theory that is also used to describe the thermodynamic properties of glasses at low temperatures. One of the most striking acoustic signatures in this regime is the existence of phonon echoes, a feature that has been associated with two-level systems with the same formalism as spin echoes in NMR. Here we report the existence of a distinctly different type of acoustic echo in classical models of glassy materials. Our simulations consist of finite-ranged, repulsive spheres and also particles with attractive forces using Lennard-Jones interactions. We show that these echoes are due to anharmonic, weakly coupled vibrational modes and perhaps provide an alternative explanation for the phonon echoes observed in glasses at low temperatures. PMID:27078434
Equivariant minimax dominators of the MLE in the array normal model
Hoff, Peter
2015-01-01
Inference about dependencies in a multiway data array can be made using the array normal model, which corresponds to the class of multivariate normal distributions with separable covariance matrices. Maximum likelihood and Bayesian methods for inference in the array normal model have appeared in the literature, but there have not been any results concerning the optimality properties of such estimators. In this article, we obtain results for the array normal model that are analogous to some classical results concerning covariance estimation for the multivariate normal model. We show that under a lower triangular product group, a uniformly minimum risk equivariant estimator (UMREE) can be obtained via a generalized Bayes procedure. Although this UMREE is minimax and dominates the MLE, it can be improved upon via an orthogonally equivariant modification. Numerical comparisons of the risks of these estimators show that the equivariant estimators can have substantially lower risks than the MLE. PMID:25745274
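The separable-covariance structure underlying the array normal model can be illustrated numerically for the two-way (matrix) case; the factor matrices below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-way (matrix) case of the array normal model: for a p x q matrix Y with
# separable covariance, vec(Y) has covariance kron(Sigma_row, Sigma_col)
# under row-major vectorization. Draw Y = A @ Z @ B.T with A A' = Sigma_row,
# B B' = Sigma_col, and Z iid standard normal.
p, q = 3, 4
A = np.tril(rng.normal(size=(p, p))) + 2.0 * np.eye(p)
B = np.tril(rng.normal(size=(q, q))) + 2.0 * np.eye(q)
sigma_row, sigma_col = A @ A.T, B @ B.T

n = 20_000
Z = rng.normal(size=(n, p, q))
Y = A @ Z @ B.T                       # batched over the n samples

# Empirical covariance of vec(Y) approaches the Kronecker product.
vecY = Y.reshape(n, p * q)            # row-major vec of each sample
emp = vecY.T @ vecY / n               # mean is zero by construction
target = np.kron(sigma_row, sigma_col)
```

The same construction extends to higher-order arrays with one factor matrix per mode, which is the setting the equivariance results above concern.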
Mesh-Based Entry Vehicle and Explosive Debris Re-Contact Probability Modeling
NASA Technical Reports Server (NTRS)
McPherson, Mark A.; Mendeck, Gavin F.
2011-01-01
Quantifying the risk to a crewed vehicle arising from potential re-contact with fragments from an explosive breakup of any jettisoned spacecraft segments during entry has long been a goal. However, great difficulty lies in efficiently capturing the potential locations of each fragment and their collective threat to the vehicle. The method presented in this paper addresses this problem by using a stochastic approach that discretizes simulated debris pieces into volumetric cells, and then assesses strike probabilities accordingly. Combining spatial debris density and relative velocity between the debris and the entry vehicle, the strike probability can be calculated from the integral of the debris flux inside each cell over time. Using this technique it is possible to assess the risk to an entry vehicle along an entire trajectory as it separates from the jettisoned segment. By decoupling the fragment trajectories from that of the entry vehicle, multiple potential separation maneuvers can then be evaluated rapidly to provide an assessment of the best strategy to mitigate the re-contact risk.
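For a handful of cells and a single time step, the flux integral described above reduces to a Poisson strike estimate; every quantity below is a placeholder for illustration, not a value from the paper:

```python
import numpy as np

# Illustrative cell-based strike tally (all numbers are placeholders).
cell_density = np.array([1e-9, 5e-9, 2e-9])    # fragments per m^3, per cell
rel_velocity = np.array([120.0, 80.0, 200.0])  # m/s, debris relative to vehicle
vehicle_area = 12.0                            # m^2 presented area
dt = 0.5                                       # s spent in each cell

# Expected strikes: sum over cells of density * |v_rel| * area * dt,
# a discretized form of the time integral of debris flux.
expected_strikes = float(np.sum(cell_density * rel_velocity * vehicle_area * dt))
# Treating strikes as Poisson, probability of at least one strike:
p_strike = 1.0 - np.exp(-expected_strikes)
```

Accumulating `expected_strikes` along the full trajectory, cell by cell and step by step, gives the whole-trajectory strike probability used to compare separation maneuvers.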
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
Nixon, Zachary; Michel, Jacqueline
2015-04-01
To better understand the distribution of remaining lingering subsurface oil residues from the Exxon Valdez oil spill (EVOS) along the shorelines of Prince William Sound (PWS), AK, we revised previous modeling efforts to allow spatially explicit predictions of the distribution of subsurface oil. We used a set of pooled field data and predictor variables stored as Geographic Information Systems (GIS) data to generate calibrated boosted tree models predicting the encounter probability of different categories of subsurface oil. The models demonstrated excellent predictive performance as evaluated by cross-validated performance statistics. While the average encounter probabilities at most shoreline locations are low across western PWS, clusters of shoreline locations with elevated encounter probabilities remain in the northern parts of the PWS, as well as more isolated locations. These results can be applied to estimate the location and amount of remaining oil, evaluate potential ongoing impacts, and guide remediation. This is the first application of quantitative machine-learning based modeling techniques in estimating the likelihood of ongoing, long-term shoreline oil persistence after a major oil spill. PMID:25719970
NASA Astrophysics Data System (ADS)
Wenmackers, S.; Vanpoucke, D. E. P.; Douven, I.
2012-01-01
We present a model for studying communities of epistemically interacting agents who update their belief states by averaging (in a specified way) the belief states of other agents in the community. The agents in our model have a rich belief state, involving multiple independent issues which are interrelated in such a way that they form a theory of the world. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating (in the given way). To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. It is shown that, under the assumptions of our model, an agent always has a probability of less than 2% of ending up in an inconsistent belief state. Moreover, this probability can be made arbitrarily small by increasing the number of independent issues the agents have to judge or by increasing the group size. A real-world situation to which this model applies is a group of experts participating in a Delphi-study.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present its application in risk analysis, where we apply the normal mixture distributions model to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
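A sketch of mixture-based VaR and CVaR with assumed component weights, means, and standard deviations (illustrative values, not the FBMKLCI estimates):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Assumed two-component mixture for returns (all parameters illustrative).
w = np.array([0.8, 0.2])
mu = np.array([0.01, -0.02])
sd = np.array([0.03, 0.08])

def mixture_cdf(x):
    return float(np.sum(w * norm.cdf(x, mu, sd)))

def var(alpha):
    """VaR at level alpha: minus the alpha-quantile of the return mixture."""
    return -brentq(lambda x: mixture_cdf(x) - alpha, -1.0, 1.0)

def cvar(alpha, n=100_000, seed=1):
    """CVaR at level alpha: expected loss beyond VaR (Monte Carlo estimate)."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(2, size=n, p=w)
    r = rng.normal(mu[comp], sd[comp])
    v = var(alpha)
    return -float(r[r <= -v].mean())

v95 = var(0.05)     # 95% VaR
c95 = cvar(0.05)    # 95% CVaR; always at least as large as VaR
```

The heavier second component fattens the left tail, which is exactly the leptokurtosis a single normal distribution cannot capture.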
NASA Astrophysics Data System (ADS)
Akbari, Hamed; Fei, Baowei
2012-02-01
Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially when serial MR imaging is performed to evaluate kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistically matching the geometrical shape of the kidney. A set of Wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are trained to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI datasets. The model is initially localized based on the intensity profiles in three directions. Weight functions are defined for each labeled voxel for each Wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the Wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results, and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.
Notes on power of normality tests of error terms in regression models
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. Because normally distributed stochastic errors are necessary for inferences that are not misleading, robust tests of normality are both necessary and important. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. We introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
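The basic (non-robust) residual normality check can be sketched with a simulated regression and the classical Shapiro-Wilk test, which stands in here for the RT class discussed in the contribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 10.0, 200)

def residual_normality_pvalue(noise):
    """Fit y = b0 + b1*x by OLS, then Shapiro-Wilk test on the residuals."""
    y = 2.0 + 3.0 * x + noise
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    return stats.shapiro(resid).pvalue

# Normal errors: the test should (usually) not reject normality.
p_normal = residual_normality_pvalue(rng.normal(0.0, 1.0, 200))
# Skewed (centered exponential) errors: the test should reject.
p_skewed = residual_normality_pvalue(rng.exponential(1.0, 200) - 1.0)
```

Robust variants are preferred when a few outliers, rather than genuine non-normality of the bulk, would otherwise drive the rejection.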
NASA Astrophysics Data System (ADS)
Salis, Michele; Arca, Bachisio; Bacciu, Valentina; Spano, Donatella; Duce, Pierpaolo; Santoni, Paul; Ager, Alan; Finney, Mark
2010-05-01
Characterizing the spatial pattern of large fire occurrence and severity is an important component of fire management planning in the Mediterranean region. The spatial characterization of fire probabilities, fire behavior distributions and value changes is a key component of quantitative risk assessment and of prioritizing fire suppression resources, fuel treatments and law enforcement. Because of the growing wildfire severity and frequency in recent years (e.g.: Portugal, 2003 and 2005; Italy and Greece, 2007 and 2009), there is an increasing demand for models and tools that can aid in wildfire prediction and prevention. Newer wildfire simulation systems offer promise in this regard, and allow for fine-scale modeling of wildfire severity and probability. Several new applications have resulted from the development of a minimum travel time (MTT) fire spread algorithm (Finney, 2002), which models fire growth by searching for the minimum time for fire to travel among nodes in a 2D network. The MTT approach makes it computationally feasible to simulate thousands of fires and generate burn probability and fire severity maps over large areas. The MTT algorithm is embedded in a number of research and fire modeling applications. High performance computers are typically used for MTT simulations, although the algorithm is also implemented in the FlamMap program (www.fire.org). In this work, we describe the application of the MTT algorithm to estimate spatial patterns of burn probability and to analyze wildfire severity in three fire-prone areas of the Mediterranean Basin: the islands of Sardinia (Italy), Sicily (Italy) and Corsica (France). We assembled fuels and topographic data for the simulations in 500 x 500 m grids for the study areas. The simulations were run using 100,000 ignitions under weather conditions that replicated severe and moderate weather conditions (97th and 70th percentile, July and August weather, 1995-2007). We used both random ignition locations
NASA Technical Reports Server (NTRS)
Chase, Thomas D.; Splawn, Keith; Christiansen, Eric L.
2007-01-01
The NASA Extravehicular Mobility Unit (EMU) micrometeoroid and orbital debris protection ability has recently been assessed against an updated, higher threat space environment model. The new environment was analyzed in conjunction with a revised EMU solid model using a NASA computer code. Results showed that the EMU exceeds the required mathematical Probability of having No Penetrations (PNP) of any suit pressure bladder over the remaining life of the program (2,700 projected hours of 2-person spacewalks). The success probability was calculated to be 0.94, versus a requirement of >0.91, for the current spacesuit's outer protective garment. In parallel to the probability assessment, potential improvements to the current spacesuit's outer protective garment were built and impact tested. A NASA light gas gun was used to launch projectiles at test items, at speeds of approximately 7 km per second. Test results showed that substantial garment improvements could be made, with mild material enhancements and moderate assembly development. The spacesuit's PNP would improve marginally with the tested enhancements, if they were available for immediate incorporation. This paper discusses the results of the model assessment process and test program. These findings add confidence to the continued use of the existing NASA EMU during International Space Station (ISS) assembly and Shuttle Operations. They provide a viable avenue for improved hypervelocity impact protection for the EMU, or for future space suits.
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. PMID:26415924
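The gain from combining biomarkers can be illustrated with synthetic bivariate normal scores and an empirical AUC; the simple sum score below stands in for the predictive probability of disease, and a perfect reference standard is assumed (unlike in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def auc(pos, neg):
    """Empirical AUC: P(random diseased score > random healthy score),
    counting ties as one half."""
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Two correlated, hypothetical biomarkers; disease shifts both means.
n = 2000
cov = [[1.0, 0.3], [0.3, 1.0]]
healthy = rng.multivariate_normal([0.0, 0.0], cov, n)
diseased = rng.multivariate_normal([1.0, 1.0], cov, n)

auc_b1 = auc(diseased[:, 0], healthy[:, 0])     # single biomarker
# A simple sum score stands in for the predictive probability of disease
# given both biomarkers (monotone transformations preserve AUC).
auc_comb = auc(diseased.sum(axis=1), healthy.sum(axis=1))
```

The combined score dominates either marker alone, which is the pattern the cAUC metric is designed to quantify, here without the Bayesian machinery needed when the reference standard is imperfect.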
NASA Astrophysics Data System (ADS)
Mahanti, P.; Robinson, M. S.; Boyd, A. K.
2013-12-01
Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominate the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies) where the true nature of slope probability distribution was not discernible at baselines smaller than a kilometer[2,3,5]. Accordingly, historical modeling of lunar surface slopes probability distributions for applications such as in scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution[1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon[7,8], slopes in lunar craters can now be obtained at baselines as low as 6-meters allowing unprecedented multi-scale (multiple baselines) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines[9]. In this work, we extend the analysis from a probability distribution modeling point-of-view with NAC DEMs to characterize the slope statistics for the floors and walls for the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was
Blyton, Michaela D J; Banks, Sam C; Peakall, Rod; Lindenmayer, David B
2012-02-01
The formal testing of mating system theories with empirical data is important for evaluating the relative importance of different processes in shaping mating systems in wild populations. Here, we present a generally applicable probability modelling framework to test the role of local mate availability in determining a population's level of genetic monogamy. We provide a significance test for detecting departures in observed mating patterns from model expectations based on mate availability alone, allowing the presence and direction of behavioural effects to be inferred. The assessment of mate availability can be flexible and in this study it was based on population density, sex ratio and spatial arrangement. This approach provides a useful tool for (1) isolating the effect of mate availability in variable mating systems and (2) in combination with genetic parentage analyses, gaining insights into the nature of mating behaviours in elusive species. To illustrate this modelling approach, we have applied it to investigate the variable mating system of the mountain brushtail possum (Trichosurus cunninghami) and compared the model expectations with the outcomes of genetic parentage analysis over an 18-year study. The observed level of monogamy was higher than predicted under the model. Thus, behavioural traits, such as mate guarding or selective mate choice, may increase the population level of monogamy. We show that combining genetic parentage data with probability modelling can facilitate an improved understanding of the complex interactions between behavioural adaptations and demographic dynamics in driving mating system variation. PMID:21899620
NASA Astrophysics Data System (ADS)
Szczygieł, Bartłomiej; Dudyński, Marek; Kwiatkowski, Kamil; Lewenstein, Maciej; Lapeyre, Gerald John; Wehr, Jan
2016-02-01
We introduce a class of discrete-continuous percolation models and an efficient Monte Carlo algorithm for computing their properties. The class is general enough to include well-known discrete and continuous models as special cases. We focus on a particular example of such a model, a nanotube model of disintegration of activated carbon. We calculate its exact critical threshold in two dimensions and obtain a Monte Carlo estimate in three dimensions. Furthermore, we use this example to analyze and characterize the efficiency of our algorithm, by computing critical exponents and properties, finding that it compares favorably to well-known algorithms for simpler systems.
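A minimal discrete (site) percolation Monte Carlo in the same spirit, using union-find for cluster connectivity; the grid size and occupation probabilities are arbitrary illustrative choices, far simpler than the discrete-continuous models of the paper:

```python
import random

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def percolates(L, p, rng):
    """Site percolation on an L x L grid: is there an open top-bottom path?"""
    grid = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    top, bottom = L * L, L * L + 1     # virtual source and sink nodes
    uf = UnionFind(L * L + 2)
    for r in range(L):
        for c in range(L):
            if not grid[r][c]:
                continue
            idx = r * L + c
            if r == 0:
                uf.union(idx, top)
            if r == L - 1:
                uf.union(idx, bottom)
            if r > 0 and grid[r - 1][c]:
                uf.union(idx, idx - L)
            if c > 0 and grid[r][c - 1]:
                uf.union(idx, idx - 1)
    return uf.find(top) == uf.find(bottom)

rng = random.Random(0)
# Spanning frequency well below / above the 2D site threshold (~0.593).
frac_low = sum(percolates(30, 0.40, rng) for _ in range(50)) / 50
frac_high = sum(percolates(30, 0.75, rng) for _ in range(50)) / 50
```

Sweeping `p` and locating where the spanning frequency jumps is the standard Monte Carlo route to a critical threshold estimate.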
NASA Astrophysics Data System (ADS)
Tomas, A.; Menendez, M.; Mendez, F. J.; Coco, G.; Losada, I. J.
2012-04-01
In the last decades, freak or rogue waves have become an important topic in engineering and science. Forecasting the occurrence probability of freak waves is a challenge for oceanographers, engineers, physicists and statisticians. There are several mechanisms responsible for the formation of freak waves, and different theoretical formulations (primarily based on numerical models with simplifying assumptions) have been proposed to predict the occurrence probability of a freak wave in a sea state as a function of N (number of individual waves) and kurtosis (k). On the other hand, different attempts to parameterize k as a function of spectral parameters such as the Benjamin-Feir Index (BFI) and the directional spreading (Mori et al., 2011) have been proposed. The objective of this work is twofold: (1) develop a statistical model to describe the uncertainty of the maximum individual wave height, Hmax, considering N and k as covariates; (2) obtain a predictive formulation to estimate k as a function of aggregated sea state spectral parameters. For both purposes, we use free surface measurements (more than 300,000 20-minute sea states) from the Spanish deep water buoy network (Puertos del Estado, Spanish Ministry of Public Works). Non-stationary extreme value models are nowadays widely used to analyze the time-dependent or directional-dependent behavior of extreme values of geophysical variables such as significant wave height (Izaguirre et al., 2010). In this work, a Generalized Extreme Value (GEV) statistical model for the dimensionless maximum wave height (x=Hmax/Hs) in every sea state is used to assess the probability of freak waves. We allow the location, scale and shape parameters of the GEV distribution to vary as a function of k and N. The kurtosis-dependency is parameterized using third-order polynomials and the model is fitted using standard log-likelihood theory, obtaining very good performance in predicting the occurrence probability of freak waves (x>2). Regarding the
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2014-05-01
We discuss the exit probability of the one-dimensional q-voter model and present tools to obtain estimates about this probability, both through simulations in large networks (around 10^7 sites) and analytically in the limit where the network is infinitely large. We argue that the result E(ρ) = ρ^q/[ρ^q + (1-ρ)^q], that was found in three previous works [F. Slanina, K. Sznajd-Weron, and P. Przybyła, Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006; R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007, for the case q = 2; and P. Przybyła, K. Sznajd-Weron, and M. Tabiszewski, Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117, for q > 2] using small networks (around 10^3 sites), is a good approximation, but there are noticeable deviations that appear even for small systems and that do not disappear when the system size is increased (with the notable exception of the case q = 2). We also show that, under some simple and intuitive hypotheses, the exit probability must obey the inequality ρ^q/[ρ^q + (1-ρ)] ≤ E(ρ) ≤ ρ/[ρ + (1-ρ)^q] in the infinite size limit. We believe this settles in the negative the suggestion made [S. Galam and A. C. R. Martins, Europhys. Lett. 95, 48005 (2011), 10.1209/0295-5075/95/48005] that this result would be a finite size effect, with the exit probability actually being a step function. We also show how the result that the exit probability cannot be a step function can be reconciled with the Galam unified frame, which was also a source of controversy.
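The closed-form approximation and the bounds quoted above are straightforward to evaluate directly (this restates the abstract's formulas rather than adding new results):

```python
def exit_probability(rho, q):
    """Approximate exit probability E(rho) = rho^q / (rho^q + (1-rho)^q)."""
    return rho**q / (rho**q + (1.0 - rho)**q)

def infinite_size_bounds(rho, q):
    """Bounds rho^q/(rho^q + (1-rho)) <= E(rho) <= rho/(rho + (1-rho)^q)."""
    lower = rho**q / (rho**q + (1.0 - rho))
    upper = rho / (rho + (1.0 - rho)**q)
    return lower, upper

E = exit_probability(0.3, 3)
lo, hi = infinite_size_bounds(0.3, 3)
```

For q = 1 all three expressions collapse to E(ρ) = ρ; for q > 1 the approximation is a smooth sigmoid through E(1/2) = 1/2, not a step function.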
Per capita invasion probabilities: an empirical model to predict rates of invasion via ballast water
Reusser, Deborah A.; Lee, Henry, II; Frazier, Melanie; Ruiz, Gregory M.; Fofonoff, Paul W.; Minton, Mark S.; Miller, A. Whitman
2013-01-01
Ballast water discharges are a major source of species introductions into marine and estuarine ecosystems. To mitigate the introduction of new invaders into these ecosystems, many agencies are proposing standards that establish upper concentration limits for organisms in ballast discharge. Ideally, ballast discharge standards will be biologically defensible and adequately protective of the marine environment. We propose a new technique, the per capita invasion probability (PCIP), for managers to quantitatively evaluate the relative risk of different concentration-based ballast water discharge standards. PCIP represents the likelihood that a single discharged organism will become established as a new nonindigenous species. This value is calculated by dividing the total number of ballast water invaders per year by the total number of organisms discharged from ballast. Analysis was done at the coast-wide scale for the Atlantic, Gulf, and Pacific coasts, as well as the Great Lakes, to reduce uncertainty due to secondary invasions between estuaries on a single coast. The PCIP metric is then used to predict the rate of new ballast-associated invasions given various regulatory scenarios. Depending upon the assumptions used in the risk analysis, this approach predicts that approximately one new species will invade every 10–100 years under the International Maritime Organization (IMO) discharge standard of fewer than 10 organisms (≥50 μm in size) per m3 of ballast. This approach resolves many of the limitations associated with other methods of establishing ecologically sound discharge standards, and it allows policy makers to use risk-based methodologies to establish biologically defensible discharge standards.
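The PCIP arithmetic can be sketched directly; every number below is a hypothetical placeholder, not one of the paper's coast-wide estimates:

```python
# All numbers are hypothetical placeholders, not the paper's estimates.
invaders_per_year = 0.5        # observed ballast-linked invasions per year
organisms_per_year = 1.0e12    # total organisms discharged per year

# Per capita invasion probability: chance that one discharged organism
# establishes as a new nonindigenous species.
pcip = invaders_per_year / organisms_per_year

# Predicted invasion rate under a concentration-based discharge standard.
standard_conc = 10.0           # organisms per m^3 allowed by the standard
discharge_volume = 5.0e7       # m^3 of ballast discharged per year
predicted_rate = pcip * standard_conc * discharge_volume   # invasions / yr
years_per_invasion = 1.0 / predicted_rate
```

Tightening `standard_conc` scales the predicted invasion rate linearly, which is what makes the metric convenient for comparing regulatory scenarios.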
NASA Technical Reports Server (NTRS)
Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.
2009-01-01
To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of such short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article, we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance that leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
Evaluation of Geometrically Nonlinear Reduced Order Models with Nonlinear Normal Modes
Kuether, Robert J.; Deaner, Brandon J.; Hollkamp, Joseph J.; Allen, Matthew S.
2015-09-15
Several reduced-order modeling strategies have been developed to create low-order models of geometrically nonlinear structures from detailed finite element models, allowing one to compute the dynamic response of the structure at a dramatically reduced cost. But, the parameters of these reduced-order models are estimated by applying a series of static loads to the finite element model, and the quality of the reduced-order model can be highly sensitive to the amplitudes of the static load cases used and to the type/number of modes used in the basis. Our paper proposes to combine reduced-order modeling and numerical continuation to estimate the nonlinear normal modes of geometrically nonlinear finite element models. Not only does this make it possible to compute the nonlinear normal modes far more quickly than existing approaches, but the nonlinear normal modes are also shown to be an excellent metric by which the quality of the reduced-order model can be assessed. Hence, the second contribution of this work is to demonstrate how nonlinear normal modes can be used as a metric by which nonlinear reduced-order models can be compared. Moreover, various reduced-order models with hardening nonlinearities are compared for two different structures to demonstrate these concepts: a clamped–clamped beam model, and a more complicated finite element model of an exhaust panel cover.
Scale normalization of histopathological images for batch invariant cancer diagnostic models
Kothari, Sonal; Phan, John H.
2016-01-01
Histopathological images acquired from different experimental set-ups often suffer from batch-effects due to color variations and scale variations. In this paper, we develop a novel scale normalization model for histopathological images based on nuclear area distributions. Results indicate that the normalization model closely fits empirical values for two renal tumor datasets. We study the effect of scale normalization on classification of renal tumor images. Scale normalization improves classification performance in most cases. However, performance decreases in a few cases. In order to understand this, we propose two methods to filter extracted image features that are sensitive to image scaling and features that are uncorrelated with scaling factor. Feature filtering improves the classification performance of cases that were initially negatively affected by scale normalization. PMID:23366904
Greene, Earl A.; LaMotte, Andrew E.; Cullinan, Kerri-Ann
2005-01-01
The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency's Regional Vulnerability Assessment Program, has developed a set of statistical tools to support regional-scale, ground-water quality and vulnerability assessments. The Regional Vulnerability Assessment Program's goals are to develop and demonstrate approaches to comprehensive, regional-scale assessments that effectively inform managers and decision-makers as to the magnitude, extent, distribution, and uncertainty of current and anticipated environmental risks. The U.S. Geological Survey is developing and exploring the use of statistical probability models to characterize the relation between ground-water quality and geographic factors in the Mid-Atlantic Region. Available water-quality data obtained from U.S. Geological Survey National Water-Quality Assessment Program studies conducted in the Mid-Atlantic Region were used in association with geographic data (land cover, geology, soils, and others) to develop logistic-regression equations that use explanatory variables to predict the presence of a selected water-quality parameter exceeding a specified management concentration threshold. The resulting logistic-regression equations were transformed to determine the probability, P(X), of a water-quality parameter exceeding a specified management threshold. Additional statistical procedures modified by the U.S. Geological Survey were used to compare the observed values to model-predicted values at each sample point. In addition, procedures to evaluate the confidence of the model predictions and estimate the uncertainty of the probability value were developed and applied. The resulting logistic-regression models were applied to the Mid-Atlantic Region to predict the spatial probability of nitrate concentrations exceeding specified management thresholds. These thresholds are usually set or established by regulators or managers at National or local levels. At management thresholds of
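The transformation from a fitted logistic-regression equation to an exceedance probability P(X) can be sketched as follows. The coefficients and predictor names are hypothetical placeholders, not the fitted Mid-Atlantic values:

```python
import math

# Hypothetical coefficients for illustration only; the USGS models use
# region-specific values fitted to the Mid-Atlantic water-quality data.
INTERCEPT = -2.0
B_AGRICULTURE = 0.04   # per percent of agricultural land cover (assumed)
B_WELL_DEPTH = -0.01   # per foot of well depth (assumed)

def exceedance_probability(pct_agriculture, well_depth_ft):
    """Transform the logistic-regression equation into P(X), the
    probability that a parameter exceeds a management threshold."""
    logit = (INTERCEPT
             + B_AGRICULTURE * pct_agriculture
             + B_WELL_DEPTH * well_depth_ft)
    return 1.0 / (1.0 + math.exp(-logit))

p = exceedance_probability(pct_agriculture=60, well_depth_ft=100)
```

The inverse-logit transform guarantees the prediction stays between 0 and 1, which is what makes logistic regression suitable for mapping exceedance probabilities.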
Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.
2014-01-01
Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
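The core decomposition of detection probability into availability and perceptibility can be illustrated with a small simulation. The probabilities below are assumed values for illustration, not estimates from the Alaska surveys:

```python
import random

random.seed(1)

# Assumed component probabilities (illustrative values only):
P_AVAILABLE = 0.7   # probability a bird sings during the count
P_PERCEIVED = 0.5   # probability the observer detects a singing bird

def simulate_counts(true_abundance, n_surveys):
    """Each individual is counted only if it is available AND perceived,
    so naive counts underestimate abundance by P_AVAILABLE * P_PERCEIVED."""
    counts = []
    for _ in range(n_surveys):
        detected = sum(
            1 for _ in range(true_abundance)
            if random.random() < P_AVAILABLE and random.random() < P_PERCEIVED
        )
        counts.append(detected)
    return counts

counts = simulate_counts(true_abundance=100, n_surveys=2000)
mean_count = sum(counts) / len(counts)
# Dividing by the overall detection probability recovers abundance.
corrected = mean_count / (P_AVAILABLE * P_PERCEIVED)
```

The hierarchical model in the abstract estimates both components jointly from distance and time-of-detection data; this sketch only shows why ignoring either component biases abundance low.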
de Uña-Álvarez, Jacobo; Meira-Machado, Luís
2015-06-01
Multi-state models are often used for modeling complex event history data. In these models the estimation of the transition probabilities is of particular interest, since they allow for long-term predictions of the process. These quantities have traditionally been estimated by the Aalen-Johansen estimator, which is consistent if the process is Markov. Several non-Markov estimators have been proposed in the recent literature, and their superiority with respect to the Aalen-Johansen estimator has been proved in situations in which the Markov condition is strongly violated. However, the existing estimators have the drawback of requiring that the support of the censoring distribution contain the support of the lifetime distribution, which is often not the case. In this article, we propose two new methods for estimating the transition probabilities in the progressive illness-death model. Some asymptotic results are derived. The proposed estimators are consistent regardless of the Markov condition and of the aforementioned assumption about the censoring support. We explore the finite sample behavior of the estimators through simulations. The main conclusion of this piece of research is that the proposed estimators are much more efficient than the existing non-Markov estimators in most cases. An application to a clinical trial on colon cancer is included. Extensions to progressive processes beyond the three-state illness-death model are discussed. PMID:25735883
Detection of the optic disc in fundus images by combining probability models.
Harangi, Balazs; Hajdu, Andras
2015-10-01
In this paper, we propose a combination method for the automatic detection of the optic disc (OD) in fundus images based on ensembles of individual algorithms. We have studied and adapted some of the state-of-the-art OD detectors and finally organized them into a complex framework in order to maximize the accuracy of the localization of the OD. The detection of the OD can be considered as a single-object detection problem. This object can be localized with high accuracy by several algorithms extracting single candidates for the center of the OD, and the final location can be defined using a single majority voting rule. To include more information to support the final decision, we can use member algorithms providing more candidates, which can be ranked according to the confidence values assigned by the algorithms. In this case, a spatially weighted graph is defined where the candidates are considered as its nodes, and the final OD position is determined in terms of finding a maximum-weighted clique. Here, we examine how to apply in our ensemble-based framework all the accessible information supplied by the member algorithms by making them return confidence values for each image pixel. These confidence values inform us about the probability that a given pixel is the center point of the object. We apply axiomatic and Bayesian approaches, as in the case of aggregation of judgments of experts in decision and risk analysis, to combine these confidence values. According to our experimental study, the accuracy of the localization of the OD increases further. Besides single localization, this approach can be adapted for the precise detection of the boundary of the OD. Comparative experimental results are also given for several publicly available datasets. PMID:26259029
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)
2012-01-01
This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
Fitting the Normal-Ogive Factor Analytic Model to Scores on Tests.
ERIC Educational Resources Information Center
Ferrando, Pere J.; Lorenzo-Seva, Urbano
2001-01-01
Describes how the nonlinear factor analytic approach of R. McDonald to the normal ogive curve can be used to factor analyze test scores. Discusses the conditions in which this model is more appropriate than the linear model and illustrates the applicability of both models using an empirical example based on data from 1,769 adolescents who took the…
NASA Technical Reports Server (NTRS)
Litchford, Ron J.; Jeng, San-Mou
1992-01-01
The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.
ERIC Educational Resources Information Center
Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.
2010-01-01
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
ERIC Educational Resources Information Center
Nussbaum, E. Michael
2011-01-01
Toulmin's model of argumentation, developed in 1958, has guided much argumentation research in education. However, argumentation theory in philosophy and cognitive science has advanced considerably since 1958. There are currently several alternative frameworks of argumentation that can be useful for both research and practice in education. These…
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.
1981-01-01
A model of the largest gust amplitude and gust length is presented which uses the properties of the bivariate gamma distribution. The gust amplitude and length are strongly dependent on the filter function; the amplitude increases with altitude and is larger in winter than in summer.
NASA Astrophysics Data System (ADS)
Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md
2015-12-01
Students' understanding of probability concepts has been investigated from various perspectives. Competency, on the other hand, is often measured separately in the form of a test structure. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW, volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle, where students provided their responses through an online quiz. The study shows that the majority of students perceived little understanding of conditional and independent events prior to learning about them but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is indication of some increase in learning and knowledge about some probability concepts at the end of the two-week lessons on probability concepts.
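The probabilistic model underlying Rasch measurement can be sketched in a few lines. This is a minimal dichotomous Rasch model; the ability and difficulty values are illustrative logit-scale numbers, not calibrations from the STAT131 data:

```python
import math

def rasch_probability(ability, difficulty):
    """Dichotomous Rasch model: the probability of a correct response
    depends only on the person-item difference on the logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals item difficulty the probability is exactly 0.5;
# it rises toward 1 as ability exceeds difficulty.
p_equal = rasch_probability(0.0, 0.0)
p_higher = rasch_probability(1.5, 0.0)
```

Because both persons and items sit on the same logit scale, responses to perceived-understanding items and competency items can be calibrated jointly, which is what allows the two instruments in the study to be compared on a single map.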
NASA Astrophysics Data System (ADS)
Mandal, K. G.; Padhi, J.; Kumar, A.; Ghosh, S.; Panda, D. K.; Mohanty, R. K.; Raychaudhuri, M.
2015-08-01
Rainfed agriculture plays and will continue to play a dominant role in providing food and livelihoods for an increasing world population. Rainfall analyses are helpful for proper crop planning under changing environment in any region. Therefore, in this paper, an attempt has been made to analyse 16 years of rainfall (1995-2010) at the Daspalla region in Odisha, eastern India for prediction using six probability distribution functions, forecasting the probable date of onset and withdrawal of monsoon, occurrence of dry spells by using Markov chain model and finally crop planning for the region. For prediction of monsoon and post-monsoon rainfall, log Pearson type III and Gumbel distribution were the best-fit probability distribution functions. The earliest and most delayed week of the onset of rainy season was the 20th standard meteorological week (SMW) (14th-20th May) and 25th SMW (18th-24th June), respectively. Similarly, the earliest and most delayed week of withdrawal of rainfall was the 39th SMW (24th-30th September) and 47th SMW (19th-25th November), respectively. The longest and shortest length of rainy season was 26 and 17 weeks, respectively. The chances of occurrence of dry spells are high from the 1st-22nd SMW and again the 42nd SMW to the end of the year. The probability of weeks (23rd-40th SMW) remaining wet varies between 62 and 100 % for the region. Results obtained through this analysis would be utilised for agricultural planning and mitigation of dry spells at the Daspalla region in Odisha, India.
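The Markov chain treatment of dry spells can be sketched as a two-state (dry/wet) first-order chain. The transition probabilities below are assumed for illustration, not the fitted Daspalla values:

```python
# Two-state (dry/wet) first-order Markov chain for weekly rainfall.
# Transition probabilities are assumed, purely for illustration:
P_DRY_GIVEN_DRY = 0.8  # P(week t dry | week t-1 dry)
P_DRY_GIVEN_WET = 0.4  # P(week t dry | week t-1 wet)

def prob_dry_spell(length):
    """Probability that a dry spell, once started, persists for at
    least `length` consecutive weeks."""
    return P_DRY_GIVEN_DRY ** (length - 1)

def stationary_prob_dry():
    """Long-run fraction of dry weeks, from the balance equation
    pi = pi * P(d|d) + (1 - pi) * P(d|w)."""
    return P_DRY_GIVEN_WET / (1.0 - P_DRY_GIVEN_DRY + P_DRY_GIVEN_WET)
```

Fitting the two conditional probabilities week by week from the 16-year record is what lets the analysis flag the standard meteorological weeks in which dry-spell risk is high.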
Modeling of Kidney Hemodynamics: Probability-Based Topology of an Arterial Network
Postnov, Dmitry D.; Postnov, Dmitry E.; Braunstein, Thomas H.; Holstein-Rathlou, Niels-Henrik; Sosnovtseva, Olga
2016-01-01
Through regulation of the extracellular fluid volume, the kidneys provide important long-term regulation of blood pressure. At the level of the individual functional unit (the nephron), pressure and flow control involves two different mechanisms that both produce oscillations. The nephrons are arranged in a complex branching structure that delivers blood to each nephron and, at the same time, provides a basis for an interaction between adjacent nephrons. The functional consequences of this interaction are not understood, and at present it is not possible to address this question experimentally. We provide experimental data and a new modeling approach to clarify this problem. To resolve details of microvascular structure, we collected 3D data from more than 150 afferent arterioles in an optically cleared rat kidney. Using these results together with published micro-computed tomography (μCT) data we develop an algorithm for generating the renal arterial network. We then introduce a mathematical model describing blood flow dynamics and nephron to nephron interaction in the network. The model includes an implementation of electrical signal propagation along a vascular wall. Simulation results show that the renal arterial architecture plays an important role in maintaining adequate pressure levels and the self-sustained dynamics of nephrons. PMID:27447287
Kawaguchi, Isao; Ouchi, Noriyuki B.; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki
2013-01-01
Background Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but an abortive colony that fails to continue to grow remains poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. Methodology/Principal Findings We first plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found a linear relationship on log-linear or log-log plots. By applying the simple model of branching processes to the linear relationship, we found persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, the abortive colony size distribution (≤15 cells) and the surviving fraction were simulated by the Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit performed better in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components, in the early (<3 generations) and late phases. Intriguingly, the survival curve was sensitive to the excess probability over 5 generations, whereas the abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. Conclusions/Significance Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation. PMID:23894635
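The branching-process picture of colony growth with a per-generation probability of reproductive cell death can be sketched with a Monte Carlo simulation. The RCD probability, the generation cap, and the sample size below are illustrative choices, not the values fitted in the paper:

```python
import random

random.seed(42)

P_RCD = 0.3  # assumed per-division probability of reproductive cell death

def colony_size(max_generations=16):
    """Grow one colony as a branching process: at each generation every
    live cell either dies (probability P_RCD) or divides into two."""
    cells = 1
    for _ in range(max_generations):
        survivors = sum(1 for _ in range(cells) if random.random() >= P_RCD)
        cells = 2 * survivors
        if cells == 0:
            break  # colony went extinct (abortive)
    return cells

sizes = [colony_size() for _ in range(5000)]
# Abortive colonies are conventionally those with <= 15 cells.
abortive_fraction = sum(1 for s in sizes if s <= 15) / len(sizes)
```

For these offspring probabilities the extinction probability q solves q = P_RCD + (1 - P_RCD) q^2, giving q = P_RCD / (1 - P_RCD), about 0.43 here, so nearly half the simulated colonies abort even before counting the small-but-alive ones.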
NASA Astrophysics Data System (ADS)
Lopes Cardozo, David; Holdsworth, Peter C. W.
2016-04-01
The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume $L_\parallel^{d-1} \times L_\perp$ is computed through Monte Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio $\rho = L_\perp / L_\parallel$ and boundary conditions are discussed. In the limiting case $\rho \to 0$ of a macroscopically large slab ($L_\parallel \gg L_\perp$) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.
Jacobsen, J L; Saleur, H
2008-02-29
We determine exactly the probability distribution of the number $N_c$ of valence bonds connecting a subsystem of length $L > 1$ to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy $S_{VB} = N_c \ln 2 = \frac{4\ln 2}{\pi^2} \ln L$, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to $\frac{1}{3}\ln L$. Our results generalize to the Q-state Potts model. PMID:18352661
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, M.; Hatfield, J.S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
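The residuals-versus-raw-data point can be demonstrated with a small simulation: a two-group model whose pooled response is strongly bimodal (hence clearly non-normal), while the residuals from the group-means model are plain Gaussian noise. Group means and sample sizes are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(0)

# Two groups with very different means: the pooled response is bimodal
# (non-normal), but the model residuals are ordinary Gaussian noise.
group_a = [random.gauss(0.0, 1.0) for _ in range(500)]
group_b = [random.gauss(10.0, 1.0) for _ in range(500)]

mean_a = statistics.mean(group_a)
mean_b = statistics.mean(group_b)

response = group_a + group_b
residuals = [x - mean_a for x in group_a] + [x - mean_b for x in group_b]

# The raw response spread is dominated by the 10-unit group difference ...
response_sd = statistics.stdev(response)
# ... while the residual spread is just the within-group error (about 1).
residual_sd = statistics.stdev(residuals)
```

Testing normality on `response` would reject it, yet the ANOVA assumptions are perfectly satisfied here, which is exactly the confusion the abstract describes.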
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
Huang, Yangxin; Yan, Chunning; Yin, Ping; Lu, Meixia
2016-01-01
Longitudinal data arise frequently in medical studies and it is a common practice to analyze such complex data with nonlinear mixed-effects (NLME) models. However, the following four issues may be critical in longitudinal data analysis. (i) A homogeneous population assumption may unrealistically obscure important features of between-subject and within-subject variations; (ii) a normality assumption for model errors may not always give robust and reliable results, in particular if the data exhibit skewness; (iii) the responses may be missing and the missingness may be nonignorable; and (iv) some covariates of interest may often be measured with substantial errors. When carrying out statistical inference in such settings, it is important to account for the effects of these data features; otherwise, erroneous or even misleading results may be produced. Inferential procedures can be complicated dramatically when these four data features arise. In this article, a Bayesian joint modeling approach based on a finite mixture of NLME joint models with skew distributions is developed to study the simultaneous impact of these four data features, allowing estimates of both model parameters and class membership probabilities at population and individual levels. A real data example is analyzed to demonstrate the proposed methodologies, and to compare various scenario-based candidate models with different distributional specifications. PMID:25629642
Ko, Mi Mi
2016-01-01
Background. Pattern identification (PI) is the basic system for diagnosis of patients in traditional Korean medicine (TKM). The purpose of this study was to identify misclassified objects in the discriminant model of PI in order to improve the classification accuracy of PI for stroke. Methods. The study included 3306 patients with stroke who were admitted to 15 TKM hospitals from June 2006 to December 2012. We derive four measures (the D, R, S, and C scores) based on the patterns of the profile graphs according to classification type. The proposed measures are applied to the data to evaluate how well they detect misclassified objects. Results. In 10–20% of the filtered data, the misclassification rate of the C score was highest among the scores (42.60% and 41.15%, respectively). In 30% of the filtered data, the misclassification rate of the R score was highest (40.32%). And, in 40–90% of the filtered data, the misclassification rate of the D score was highest. Additionally, the same result as the C score can be derived from a multiple regression model with two independent variables. Conclusions. The results of this study should assist the development of diagnostic standards in TKM. PMID:27087819
Funk, Sebastian; Watson, Conall H; Kucharski, Adam J; Edmunds, W John
2015-01-01
Objectives We investigate the chance of demonstrating Ebola vaccine efficacy in an individually randomised controlled trial implemented in the declining epidemic of Forécariah prefecture, Guinea. Methods We extend a previously published dynamic transmission model to include a simulated individually randomised controlled trial of 100 000 participants. Using Bayesian methods, we fit the model to Ebola case incidence before a trial and forecast the expected dynamics until disease elimination. We simulate trials under these forecasts and test potential start dates and rollout schemes to assess power to detect efficacy, and bias in vaccine efficacy estimates that may be introduced. Results Under realistic assumptions, we found that a trial of 100 000 participants starting after 1 August had less than 5% chance of having enough cases to detect vaccine efficacy. In particular, gradual recruitment precludes detection of vaccine efficacy because the epidemic is likely to go extinct before enough participants are recruited. Exclusion of early cases in either arm of the trial creates bias in vaccine efficacy estimates. Conclusions The very low Ebola virus disease incidence in Forécariah prefecture means any individually randomised controlled trial implemented there is unlikely to be successful, unless there is a substantial increase in the number of cases. PMID:26671958
NASA Astrophysics Data System (ADS)
Hufnagel, Heike; Ehrhardt, Jan; Pennec, Xavier; Schmidt-Richberg, Alexander; Handels, Heinz
2010-03-01
In this article, we propose a unified statistical framework for image segmentation with shape prior information. The approach combines an explicitly parameterized point-based probabilistic statistical shape model (SSM) with a segmentation contour which is implicitly represented by the zero level set of a higher dimensional surface. These two aspects are unified in a Maximum a Posteriori (MAP) estimation where the level set is evolved to converge toward the boundary of the organ to be segmented based on the image information, while taking into account the prior given by the SSM information. The optimization of the energy functional obtained by the MAP formulation leads to an alternating update of the level set and of the fitting of the SSM. We then adapt the probabilistic SSM for multi-shape modeling and extend the approach to multiple-structure segmentation by introducing a level set function for each structure. During segmentation, the evolution of the different level set functions is coupled by the multi-shape SSM. First experimental evaluations indicate that our method is well suited for the segmentation of topologically complex, non-spherical and multiple-structure shapes. We demonstrate the effectiveness of the method by experiments on kidney segmentation as well as on hip joint segmentation in CT images.
Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions
Li, Jun; Yim, Man-Sung; McNelis, David N.
2007-07-01
explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. Nuclear proliferation decisions by a country are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important, as nuclear weapons development requires special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important, as the development of these technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by a mastery of technical details or overcoming financial constraints. Technology or finance is a necessary condition but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistical modeling and prediction of a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open source literature. (authors)
Blood Vessel Normalization in the Hamster Oral Cancer Model for Experimental Cancer Therapy Studies
Ana J. Molinari; Romina F. Aromando; Maria E. Itoiz; Marcela A. Garabalino; Andrea Monti Hughes; Elisa M. Heber; Emiliano C. C. Pozzi; David W. Nigg; Veronica A. Trivillin; Amanda E. Schwint
2012-07-01
Normalization of tumor blood vessels improves drug and oxygen delivery to cancer cells. The aim of this study was to develop a technique to normalize blood vessels in the hamster cheek pouch model of oral cancer. Materials and Methods: Tumor-bearing hamsters were treated with thalidomide and were compared with controls. Results: Twenty-eight hours after treatment with thalidomide, the blood vessels of premalignant tissue observable in vivo became narrower and less tortuous than those of controls; Evans Blue dye extravasation in tumor was significantly reduced (indicating a reduction in the aberrant tumor vascular hyperpermeability that compromises blood flow), and tumor blood vessel morphology in histological sections, labeled for Factor VIII, revealed a significant reduction in compressive forces. These findings indicated blood vessel normalization with a window of 48 h. Conclusion: The technique developed herein has rendered the hamster oral cancer model amenable to vascular normalization research, with the potential benefit of improving head and neck cancer therapy.
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Kourgialas, Nektarios; Karatzas, George; Giannakis, Georgios; Lilli, Maria; Nikolaidis, Nikolaos
2014-05-01
Riverbank erosion affects river morphology and the local habitat and results in riparian land loss, damage to property and infrastructure, ultimately weakening flood defences. An important issue concerning riverbank erosion is the identification of the areas vulnerable to erosion, as it allows for predicting changes and assists with stream management and restoration. One way to predict the areas vulnerable to erosion is to determine the erosion probability by identifying the underlying relations between riverbank erosion and the geomorphological and/or hydrological variables that prevent or stimulate erosion. In this work, a statistical model for evaluating the probability of erosion based on a series of independent local variables is developed using logistic regression. The main variables affecting erosion are vegetation index (stability), the presence or absence of meanders, bank material (classification), stream power, bank height, river bank slope, riverbed slope, cross-section width and water velocities (Luppi et al. 2009). In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable, e.g. a binary response, based on one or more predictor variables (continuous or categorical). The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. A logistic regression model is then formed, which predicts success or failure of a given binary variable (e.g. 1 = "presence of erosion" and 0 = "no erosion") for any value of the independent variables. The regression coefficients are estimated by maximum likelihood estimation. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding
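As an illustrative sketch only (the predictors are hypothetical stand-ins for variables like vegetation index, bank slope, and stream power, not the paper's data), a binary logistic erosion-probability model of the kind described can be fit by maximum likelihood with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors standing in for e.g. vegetation index,
# bank slope and stream power (illustrative only).
X = rng.normal(size=(n, 3))
# Synthetic "presence of erosion" labels drawn from a known logistic relation.
logits = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Coefficients are estimated by maximum likelihood, as in the abstract.
model = LogisticRegression().fit(X, y)
p_erosion = model.predict_proba(X)[:, 1]  # erosion probability per site
```

The fitted probabilities can then be thresholded or mapped to flag vulnerable reaches.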
Canonical and quasi-canonical probability models of class A interference
NASA Astrophysics Data System (ADS)
Middleton, D.
1983-05-01
It is pointed out that most electromagnetic interference (EMI) phenomena include highly non-Gaussian random processes, whose effects on telecommunication system performance can be severely degrading. In this connection, attention has been given to the development of methods for operating telecommunication equipment in EMI environments. Middleton (1953, 1979, 1981) has developed various forms of canonical interference models. Berry (1980, 1981) has shown that the earlier so-called 'strictly canonical' forms do not cover all situations of practical importance. It is the principal aim of the present investigation to extend work conducted by Middleton (1977, 1979), to include additional source distributions, such as interfering sources at widely distributed distances. A second aim is to establish more precisely the conditions under which the earlier strictly canonical form can still be employed with ignorable error.
Binary logistic regression modelling: Measuring the probability of relapse cases among drug addict
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Alias, Siti Nor Shadila
2014-07-01
For many years Malaysia has faced drug addiction issues. The most serious case is the relapse phenomenon among treated drug addicts (addicts who have undergone the rehabilitation programme at the Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factors that contribute to relapse. Binary logistic regression analysis was employed to model the relationship between the independent variables (predictors) and the dependent variable. The dependent variable is the status of the drug addict: either relapse (Yes, coded as 1) or not (No, coded as 0). The predictors involved are age, age at first taking drugs, family history, education level, family crisis, community support and self-motivation. The total sample is 200 cases, with data provided by AADK (National Antidrug Agency). The findings of the study revealed that age and self-motivation are statistically significant for relapse cases.
NASA Astrophysics Data System (ADS)
Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo
2016-07-01
Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. However, they overestimated the seasonal mean concentration and did not simulate all of the peak concentrations; this issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
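A minimal sketch of the first step described above: shaping a maximum airborne-pollen potential over the season with a Weibull probability density. The shape/scale parameters and the seasonal maximum are assumed for illustration, not taken from the paper:

```python
import numpy as np
from scipy.stats import weibull_min

# Assumed parameters for illustration: shape k, scale lam (days into season).
k, lam = 2.0, 30.0
days = np.arange(1, 91)                        # day of the pollen season
density = weibull_min.pdf(days, c=k, scale=lam)
# Rescale so the peak equals an assumed seasonal maximum (grains/m^3).
seasonal_max = 400.0
potential = seasonal_max * density / density.max()
```

Daily regression estimates would then be bounded by this seasonal potential curve.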
Bejaei, M; Wiseman, K; Cheng, K M
2015-01-01
Consumers' interest in specialty eggs appears to be growing in Europe and North America. The objective of this research was to develop logistic regression models that utilise purchaser attributes and demographics to predict the probability of a consumer purchasing a specific type of table egg including regular (white and brown), non-caged (free-run, free-range and organic) or nutrient-enhanced eggs. These purchase prediction models, together with the purchasers' attributes, can be used to assess market opportunities of different egg types specifically in British Columbia (BC). An online survey was used to gather data for the models. A total of 702 completed questionnaires were submitted by BC residents. Selected independent variables included in the logistic regression to develop models for different egg types to predict the probability of a consumer purchasing a specific type of table egg. The variables used in the model accounted for 54% and 49% of variances in the purchase of regular and non-caged eggs, respectively. Research results indicate that consumers of different egg types exhibit a set of unique and statistically significant characteristics and/or demographics. For example, consumers of regular eggs were less educated, older, price sensitive, major chain store buyers, and store flyer users, and had lower awareness about different types of eggs and less concern regarding animal welfare issues. However, most of the non-caged egg consumers were less concerned about price, had higher awareness about different types of table eggs, purchased their eggs from local/organic grocery stores, farm gates or farmers markets, and they were more concerned about care and feeding of hens compared to consumers of other eggs types. PMID:26103791
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-01
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ . PMID:23163785
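The binomial idea behind such scoring can be sketched as follows: rate a peptide-spectrum match by how unlikely the observed number of matched peaks would be under purely random matching. This is a generic illustration, not ProVerB's actual scoring function:

```python
import math
from scipy.stats import binom

def match_score(n_theoretical, n_matched, p_random=0.05):
    """-log10 of the chance of matching at least n_matched of n_theoretical
    theoretical peaks purely at random (per-peak probability p_random)."""
    p = binom.sf(n_matched - 1, n_theoretical, p_random)
    return -math.log10(p) if p > 0 else math.inf
```

More matched peaks than expected by chance yields a higher score; intensity weighting (as in ProVerB) would replace the uniform per-peak probability.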
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
Innovative estimation of survival using log-normal survival modelling on ACCENT database
Chapman, J W; O'Callaghan, C J; Hu, N; Ding, K; Yothers, G A; Catalano, P J; Shi, Q; Gray, R G; O'Connell, M J; Sargent, D J
2013-01-01
Background: The ACCENT database, with individual patient data for 20 898 patients from 18 colon cancer clinical trials, was used to support Food and Drug Administration (FDA) approval of 3-year disease-free survival as a surrogate for 5-year overall survival. We hypothesised substantive differences in survival estimation with log-normal modelling rather than standard Kaplan–Meier or Cox approaches. Methods: Time to relapse, disease-free survival, and overall survival were estimated using Kaplan–Meier, Cox, and log-normal approaches for male subjects aged 60–65 years, with stage III colon cancer, treated with 5-fluorouracil-based chemotherapy regimens (with 5FU), or with surgery alone (without 5FU). Results: Absolute differences between Cox and log-normal estimates with (without) 5FU varied by end point. The log-normal model had 5.8 (6.3)% higher estimated 3-year time to relapse than the Cox model; 4.8 (5.1)% higher 3-year disease-free survival; and 3.2 (2.2)% higher 5-year overall survival. Model checking indicated greater data support for the log-normal than the Cox model, with Cox and Kaplan–Meier estimates being more similar. All three model types indicate consistent evidence of treatment benefit on both 3-year disease-free survival and 5-year overall survival; patients allocated to 5FU had 5.0–6.7% higher 3-year disease-free survival and 5.3–6.8% higher 5-year overall survival. Conclusion: Substantive absolute differences between estimates of 3-year disease-free survival and 5-year overall survival with log-normal and Cox models were large enough to be clinically relevant, and warrant further consideration. PMID:23385733
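A toy sketch of log-normal survival estimation (ignoring censoring, which the ACCENT models handle properly; all numbers are synthetic):

```python
import numpy as np
from scipy.stats import lognorm

# Synthetic, fully observed relapse times in years; a real analysis must
# handle censored observations via the likelihood.
times = lognorm.rvs(s=0.9, scale=np.exp(1.2), size=2000, random_state=1)
shape, loc, scale = lognorm.fit(times, floc=0)
dfs_3yr = lognorm.sf(3.0, shape, loc=loc, scale=scale)  # 3-year survival estimate
```

The fitted survival function gives smooth absolute estimates at clinically relevant time points (3-year disease-free survival, 5-year overall survival), which is what the paper compares against Cox and Kaplan-Meier estimates.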
NASA Astrophysics Data System (ADS)
Koch, J.; Nowak, W.
2015-02-01
Improper storage and disposal of nonaqueous-phase liquids (NAPLs) has resulted in widespread contamination of the subsurface, threatening the quality of groundwater as a freshwater resource. The high frequency of contaminated sites and the difficulties of remediation efforts demand rational decisions based on a sound risk assessment. Due to sparse data and natural heterogeneities, this risk assessment needs to be supported by appropriate predictive models with quantified uncertainty. This study proposes a physically and stochastically coherent model concept to simulate and predict crucial impact metrics for DNAPL contaminated sites, such as contaminant mass discharge and DNAPL source longevity. To this end, aquifer parameters and the contaminant source architecture are conceptualized as random space functions. The governing processes are simulated in a three-dimensional, highly resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. While it is not possible to determine whether the presented model framework is sufficiently complex or not, we can investigate whether and to which degree the desired model predictions are sensitive to simplifications often found in the literature. By testing four commonly made simplifications, we identified aquifer heterogeneity, groundwater flow irregularity, uncertain and physically based contaminant source zones, and their mutual interlinkages as indispensable components of a sound model framework.
Tang, An-Min; Tang, Nian-Sheng
2015-02-28
We propose a semiparametric multivariate skew-normal joint model for multivariate longitudinal and multivariate survival data. One main feature of the posited model is that we relax the commonly used normality assumption for random effects and within-subject error by using a centered Dirichlet process prior to specify the random effects distribution and using a multivariate skew-normal distribution to specify the within-subject error distribution and model trajectory functions of longitudinal responses semiparametrically. A Bayesian approach is proposed to simultaneously obtain Bayesian estimates of unknown parameters, random effects and nonparametric functions by combining the Gibbs sampler and the Metropolis-Hastings algorithm. Particularly, a Bayesian local influence approach is developed to assess the effect of minor perturbations to within-subject measurement error and random effects. Several simulation studies and an example are presented to illustrate the proposed methodologies. PMID:25404574
NASA Astrophysics Data System (ADS)
Chanrion, M.-A.; Sauerwein, W.; Jelen, U.; Wittig, A.; Engenhart-Cabillic, R.; Beuve, M.
2014-06-01
In carbon ion beams, biological effects vary along the ion track; hence, to quantify them, specific radiobiological models are needed. One of them, the local effect model (LEM), in particular version I (LEM I), is implemented in treatment planning systems (TPS) clinically used in European particle therapy centers. From the physical properties of the specific ion radiation, the LEM calculates the survival probabilities of the cell or tissue type under study, provided that some determinant input parameters are initially defined. Mathematical models can be used to predict, for instance, the tumor control probability (TCP), and then evaluate treatment outcomes. This work studies the influence of the LEM I input parameters on TCP predictions in the specific case of prostate cancer. Several published input parameters and their combinations were tested. Their influence on the dose distributions calculated for a water phantom and for a patient geometry was evaluated using the TPS TRiP98. Changing input parameters induced clinically significant modifications of the mean dose (up to a factor of 3.5), the spatial dose distribution, and the TCP predictions (up to a factor of 2.6 for D50). TCP predictions were found to be more sensitive to the threshold dose parameter (Dt) than to the biological parameters α and β. Additionally, an analytical expression was derived correlating α, β and Dt, which emphasized the importance of the ratio Dt/(α/β). The improvement of radiobiological models for particle TPS will only be achieved when more patient outcome data with well-defined patient groups, fractionation schemes and well-defined end-points are available.
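For orientation, a common Poisson TCP calculation built on linear-quadratic cell survival can be sketched as below. The α, β, and clonogen-number values are illustrative assumptions, and this is not the LEM itself (which derives effective photon-equivalent survival from the local dose deposition of the ion track):

```python
import math

def lq_survival(dose_gy, alpha=0.15, beta=0.05):
    """Linear-quadratic surviving fraction for one fraction (values assumed)."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

def poisson_tcp(dose_per_fx, n_fx, n_clonogens=1e7, alpha=0.15, beta=0.05):
    """Poisson TCP: probability that no clonogen survives n_fx fractions."""
    surviving = n_clonogens * lq_survival(dose_per_fx, alpha, beta) ** n_fx
    return math.exp(-surviving)
```

Because TCP depends exponentially on the surviving clonogen number, even modest changes in the survival-model inputs can shift TCP predictions substantially, consistent with the sensitivity the paper reports.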
NASA Astrophysics Data System (ADS)
Gill, Wonpyong
2016-01-01
This study calculated the growing probability of additional offspring with the advantageous reversal allele in an asymmetric sharply-peaked landscape using the decoupled continuous-time mutation-selection model. The growing probability was calculated for various population sizes, N, sequence lengths, L, selective advantages, s, fitness parameters, k, and measuring parameters, C. The saturated growing probability in the stochastic region was approximately the effective selective advantage, s*, when C ≫ 1/Ns* and s* ≪ 1. The present study suggests that the growing probability in the stochastic region in the decoupled continuous-time mutation-selection model can be described using the theoretical formula for the growing probability in the Moran two-allele model. The selective advantage ratio, which represents the ratio of the effective selective advantage to the selective advantage, does not depend on the population size, selective advantage, measuring parameter or fitness parameter; instead, it decreases with increasing sequence length.
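The Moran two-allele growing (fixation) probability referred to above has a standard closed form; a minimal sketch for a single mutant of relative fitness r = 1 + s:

```python
def moran_fixation(s, N):
    """Fixation probability of a single mutant with relative fitness
    r = 1 + s in the Moran two-allele model; 1/N in the neutral case."""
    if s == 0:
        return 1.0 / N
    r = 1.0 + s
    return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

# For Ns >> 1 this saturates near 1 - 1/r (about s for small s), matching
# the "effective selective advantage" behaviour described in the abstract.
```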
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
NASA Astrophysics Data System (ADS)
Gomez, Thomas A.; Winget, Donald E.; Montgomery, Michael H.; Kilcrease, Dave; Nagayama, Taisuke
2016-01-01
White dwarfs are interesting for a number of applications, including studying equations of state, stellar pulsations, and determining the age of the universe. These applications require accurate determination of surface conditions: temperature and surface gravity (or mass). The most common technique to estimate the temperature and gravity is to find the model spectrum that best fits the observed spectrum of a star (known as the spectroscopic method); however, this method rests on our ability to accurately model the hydrogen spectrum at high densities. There are currently disagreements between the spectroscopic method and other techniques for determining mass. We seek to resolve this issue by exploring the continuum lowering (or disappearance of states) of the hydrogen atom. The current formalism, called "occupation probability," defines some criteria for the isolated atom's bound state to be ionized, then extrapolates the continuous spectrum to the same energy threshold. The two are then combined to create the final cross-section. I introduce a new way of calculating the atomic spectrum by averaging the plasma interaction potential energy (previously used in the physics community) and directly integrating the Schrodinger equation. This technique is a major improvement over the Taylor expansion used to describe the ion-emitter interaction; it removes the need for the occupation probability and treats continuum states and discrete states on the same footing in the spectrum calculation. The resulting energy spectrum is in fact many discrete states that, when averaged over the electric field distribution in the plasma, appear to be a continuum. In the low-density limit, the two methods are in agreement, but they show some differences at high densities (above 10^17 e/cc), including line shifts near the "continuum" edge.
Ivanek, R.; Gröhn, Y. T.; Wells, M. T.; Lembo, A. J.; Sauders, B. D.; Wiedmann, M.
2009-01-01
Many pathogens have the ability to survive and multiply in abiotic environments, representing a possible reservoir and source of human and animal exposure. Our objective was to develop a methodological framework to study spatially explicit environmental and meteorological factors affecting the probability of pathogen isolation from a location. Isolation of Listeria spp. from the natural environment was used as a model system. Logistic regression and classification tree methods were applied, and their predictive performances were compared. Analyses revealed that precipitation and occurrence of alternating freezing and thawing temperatures prior to sample collection, loam soil, water storage to a soil depth of 50 cm, slope gradient, and cardinal direction to the north are key predictors for isolation of Listeria spp. from a spatial location. Different combinations of factors affected the probability of isolation of Listeria spp. from the soil, vegetation, and water layers of a location, indicating that the three layers represent different ecological niches for Listeria spp. The predictive power of classification trees was comparable to that of logistic regression. However, the former were easier to interpret, making them more appealing for field applications. Our study demonstrates how the analysis of a pathogen's spatial distribution improves understanding of the predictors of the pathogen's presence in a particular location and could be used to propose novel control strategies to reduce human and animal environmental exposure. PMID:19648372
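A hedged sketch of the comparison described: logistic regression versus a classification tree on the same binary isolation outcome, scored by cross-validated AUC. The predictors are synthetic stand-ins for the paper's environmental variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
# Synthetic stand-ins for predictors such as precipitation, freeze-thaw
# occurrence, soil type and slope (names and signal are illustrative).
X = rng.normal(size=(n, 4))
y = ((X[:, 0] + X[:, 1] > 0.5) | (X[:, 2] > 1.0)).astype(int)

logit_auc = cross_val_score(LogisticRegression(), X, y, cv=5,
                            scoring="roc_auc").mean()
tree_auc = cross_val_score(DecisionTreeClassifier(max_depth=3, random_state=0),
                           X, y, cv=5, scoring="roc_auc").mean()
```

A shallow tree, as the paper notes, yields human-readable rules (e.g. "precipitation high AND freeze-thaw present"), which is why it can be preferable in the field even when its AUC only matches logistic regression.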
Minaiyan, Mohsen; Hajhashemi, Valiollah; Rabbani, Mohammad; Fattahian, Ehsan; Mahzouni, Parvin
2014-01-01
Background. Anti-inflammatory and immunomodulatory activities have been reported for maprotiline, a strong norepinephrine reuptake inhibitor. In addition, some other antidepressant drugs have shown beneficial effects in experimental colitis. Methods. The animals were divided into normal and depressed groups. In normal rats, colitis was induced by instillation of 2 mL of 4% acetic acid, and after 2 hours maprotiline (10, 20, and 40 mg/kg, i.p.) was administered. In reserpinised depressed rats, depression was induced by injection of reserpine (6 mg/kg, i.p.) 1 h prior to colitis induction, followed by treatment with maprotiline (10, 20, and 40 mg/kg). Treatment continued daily for four days. Dexamethasone (1 mg/kg, i.p.) was given as a reference drug. On day five following colitis induction, animals were euthanized and distal colons were assessed macroscopically, histologically, and biochemically (assessment of myeloperoxidase activity). Results. Maprotiline significantly improved macroscopic and histologic scores and diminished myeloperoxidase activity in both normal and depressed rats, while reserpine exacerbated the colonic damage. Conclusion. Our data suggest that the salutary effects of maprotiline on acetic acid colitis are probably mediated first through reversal of depressive behavioral changes, possibly via the brain-gut axis, and second through the anti-inflammatory effect of the drug. PMID:27355055
Ganjali, Mojtaba; Baghfalaki, Taban; Berridge, Damon
2015-01-01
In this paper, the problem of identifying differentially expressed genes under different conditions using gene expression microarray data, in the presence of outliers, is discussed. For this purpose, the robust modeling of gene expression data using some powerful distributions known as normal/independent distributions is considered. These distributions include the Student’s t and normal distributions which have been used previously, but also include extensions such as the slash, the contaminated normal and the Laplace distributions. The purpose of this paper is to identify differentially expressed genes by considering these distributional assumptions instead of the normal distribution. A Bayesian approach using the Markov Chain Monte Carlo method is adopted for parameter estimation. Two publicly available gene expression data sets are analyzed using the proposed approach. The use of the robust models for detecting differentially expressed genes is investigated. This investigation shows that the choice of model for differentiating gene expression data is very important. This is due to the small number of replicates for each gene and the existence of outlying data. Comparison of the performance of these models is made using different statistical criteria and the ROC curve. The method is illustrated using some simulation studies. We demonstrate the flexibility of these robust models in identifying differentially expressed genes. PMID:25910040
Bandyopadhyay, Dipankar; Lachos, Victor H.; Castro, Luis M.; Dey, Dipak K.
2012-01-01
Often in biomedical studies, the routine use of linear mixed-effects models (based on Gaussian assumptions) can be questionable when the longitudinal responses are skewed in nature. Skew-normal/elliptical models are widely used in those situations. Often, those skewed responses might also be subjected to some upper and lower quantification limits (viz. longitudinal viral load measures in HIV studies), beyond which they are not measurable. In this paper, we develop a Bayesian analysis of censored linear mixed models replacing the Gaussian assumptions with skew-normal/independent (SNI) distributions. The SNI is an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, the skew-t, skew-slash and the skew-contaminated normal distributions as special cases. The proposed model provides flexibility in capturing the effects of skewness and heavy tail for responses which are either left- or right-censored. For our analysis, we adopt a Bayesian framework and develop a MCMC algorithm to carry out the posterior analyses. The marginal likelihood is tractable, and utilized to compute not only some Bayesian model selection measures but also case-deletion influence diagnostics based on the Kullback-Leibler divergence. The newly developed procedures are illustrated with a simulation study as well as a HIV case study involving analysis of longitudinal viral loads. PMID:22685005
Modeling absolute differences in life expectancy with a censored skew-normal regression approach
Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
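The skew-normal fitting step can be sketched with scipy's `skewnorm` (synthetic, fully observed data only; the paper's contribution is handling censoring and left truncation, which this toy omits):

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(2)
# Synthetic ages at death with a negative (left) skew, as described above.
ages = skewnorm.rvs(a=-5, loc=85, scale=10, size=5000, random_state=2)
a_hat, loc_hat, scale_hat = skewnorm.fit(ages)
# Model-based life expectancy (mean of the fitted skew-normal).
mean_age = skewnorm.mean(a_hat, loc=loc_hat, scale=scale_hat)
```

Differences in the fitted means across covariate groups would then read directly as differences in years of life expectancy, which is the interpretability advantage over hazard-ratio-based models.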
Ponomarev, Artem L; George, Kerry; Cucinotta, Francis A
2014-03-01
We have developed a model that can simulate the yield of radiation-induced chromosomal aberrations (CAs) and unrejoined chromosome breaks in normal and repair-deficient cells. The model predicts the kinetics of chromosomal aberration formation after exposure in the G₀/G₁ phase of the cell cycle to either low- or high-LET radiation. A previously formulated model based on a stochastic Monte Carlo approach was updated to consider the time dependence of DNA double-strand break (DSB) repair (proper or improper), and different cell types were assigned different kinetics of DSB repair. The distribution of the DSB free ends was derived from a mechanistic model that takes into account the structure of chromatin and DSB clustering from high-LET radiation. The kinetics of chromosomal aberration formation were derived from experimental data on DSB repair kinetics in normal and repair-deficient cell lines. We assessed different types of chromosomal aberrations with the focus on simple and complex exchanges, and predicted the DSB rejoining kinetics and misrepair probabilities for different cell types. The results identify major cell-dependent factors, such as a greater yield of chromosome misrepair in ataxia telangiectasia (AT) cells and slower rejoining in Nijmegen (NBS) cells relative to the wild-type. The model's predictions suggest that two mechanisms could exist for the inefficiency of DSB repair in AT and NBS cells, one that depends on the overall speed of joining (either proper or improper) of DNA broken ends, and another that depends on geometric factors, such as the Euclidian distance between DNA broken ends, which influences the relative frequency of misrepair. PMID:24611656
NASA Astrophysics Data System (ADS)
Liu, Zhihua; Magal, Pierre; Ruan, Shigui
2014-08-01
Normal form theory is very important and useful in simplifying the forms of equations restricted on the center manifolds in studying nonlinear dynamical problems. In this paper, using the center manifold theorem associated with the integrated semigroup theory, we develop a normal form theory for semilinear Cauchy problems in which the linear operator is not densely defined and is not a Hille-Yosida operator and present procedures to compute the Taylor expansion and normal form of the reduced system restricted on the center manifold. We then apply the main results and computation procedures to determine the direction of the Hopf bifurcation and stability of the bifurcating periodic solutions in a structured evolutionary epidemiological model of influenza A drift and an age structured population model.
Lee, Tsair-Fwu; Lin, Wei-Chun; Wang, Hung-Yu; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Ting, Hui-Min; Chao, Pei-Ju
2015-01-01
The aim was to develop logistic and probit models to analyse the electromyographic (EMG) equivalent uniform voltage (EUV) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, surface EMG (sEMG) signals were obtained by an innovative device with electrodes over the forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and the EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow, as reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ50 = 0.84 (CI: 0.78–0.90) and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for the logistic and probit models, respectively. When the EUV ≥ 153 mV, the DP of the patient is greater than 50%, and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281
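The abstract quotes the fitted parameters but not the functional forms; assuming the standard logistic and probit (Lyman-type) dose-response curves commonly used in this kind of modeling, a minimal sketch with the values above is:

```python
from math import erf, exp, sqrt

def dp_logistic(v_mv, tv50=153.0, gamma50=0.84):
    """Logistic diseased-probability curve; gamma50 is the normalized slope
    at TV50 (assumed standard form, not restated in the abstract)."""
    return 1.0 / (1.0 + exp(4.0 * gamma50 * (1.0 - v_mv / tv50)))

def dp_probit(v_mv, tv50=155.6, m=0.54):
    """Probit (Lyman-type) curve; m sets the slope via the normal CDF."""
    t = (v_mv - tv50) / (m * tv50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))
```

By construction both curves cross DP = 0.5 at their respective TV50, matching the abstract's reading that an EUV ≥ 153 mV implies a DP above 50%.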
NASA Astrophysics Data System (ADS)
Lolli, Barbara; Gasperini, Paolo
We analyzed the available instrumental data on Italian earthquakes from 1960 to 1996 to compute the parameters of the time-magnitude distribution model proposed by Reasenberg and Jones (1989) and currently used to make aftershock forecasts in California. From 1981 to 1996 we used the recently released Catalogo Strumentale dei Terremoti Italiani (CSTI) (Instrumental Catalog Working Group, 2001), joining the data of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and of the major Italian local seismic networks, with magnitudes revalued according to Gasperini (2001). From 1960 to 1980 we used instead the Progetto Finalizzato Geodinamica (PFG) catalog (Postpischl, 1985), with magnitudes corrected to be homogeneous with the following period. About 40 sequences were detected using two different algorithms, and the results of the modeling for the corresponding sequences were compared. The average values of the distribution parameters (p = 0.93±0.21, Log10(c) = -1.53±0.54, b = 0.96±0.18 and a = -1.66±0.72) are in fair agreement with similar computations performed in other regions of the world. We also analyzed the spatial variation of the model parameters, which can be used to predict sequence behavior in the first days of a future Italian seismic crisis, before a reliable model of the ongoing sequence is available. Moreover, some nomograms to expeditiously estimate the probabilities and rates of aftershocks in Italy are also computed.
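The Reasenberg-Jones model combines the modified Omori decay with a Gutenberg-Richter productivity term; its functional form is from Reasenberg and Jones (1989), not restated in the abstract. A sketch using the average Italian parameters quoted above:

```python
from math import exp

# Average Italian parameters quoted in the abstract
A, B, P, LOG10_C = -1.66, 0.96, 0.93, -1.53

def aftershock_rate(t_days, mag_main, mag_min, a=A, b=B, p=P, log10_c=LOG10_C):
    """Rate (events/day) of aftershocks with magnitude >= mag_min,
    t_days after a mainshock of magnitude mag_main."""
    c = 10.0 ** log10_c
    return 10.0 ** (a + b * (mag_main - mag_min)) / (t_days + c) ** p

def prob_one_or_more(t1, t2, mag_main, mag_min, a=A, b=B, p=P, log10_c=LOG10_C):
    """P(at least one aftershock with M >= mag_min in [t1, t2] days),
    assuming a Poisson process with the rate above (valid for p != 1)."""
    c = 10.0 ** log10_c
    k = 10.0 ** (a + b * (mag_main - mag_min))
    # Closed-form integral of the rate over [t1, t2]
    n_expected = k * ((t1 + c) ** (1.0 - p) - (t2 + c) ** (1.0 - p)) / (p - 1.0)
    return 1.0 - exp(-n_expected)
```

This is the kind of calculation the nomograms mentioned above would let one read off graphically.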
NASA Astrophysics Data System (ADS)
Ng, W.; Rasmussen, P. F.; Panu, U. S.
2009-12-01
Stochastic weather modeling is subject to a number of challenges, including varied spatial dependency and the existence of missing observations. Daily precipitation possesses unique statistical characteristics, such as a high frequency of zero records and the high skewness of the distribution of precipitation amounts. To address these difficulties, a methodology based on the multivariate truncated normal distribution model is proposed. The methodology transforms the skewed distribution of precipitation amounts at multiple sites into a multivariate normal distribution model. The missing observations are then estimated through the conditional mean and variance obtained from the multivariate normal distribution model. The adequacy of the proposed model structure was first verified using a synthetic data set. Subsequently, 30 years of historical daily precipitation records from 10 Canadian meteorological stations were used to evaluate the performance of the model. The results of the evaluation show that the proposed model can reasonably preserve the statistical characteristics of the historical records when estimating the missing records at multiple sites.
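The conditional mean and variance used for imputation follow from standard multivariate normal theory (the truncation and back-transformation steps of the paper are omitted here); a sketch using partitioned covariance algebra:

```python
import numpy as np

def conditional_normal(mu, sigma, obs_idx, obs_val):
    """Conditional mean and covariance of the unobserved components of a
    multivariate normal, given the observed components.
    mu: mean vector; sigma: covariance matrix;
    obs_idx: indices of observed components; obs_val: their values."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    n = len(mu)
    miss_idx = [i for i in range(n) if i not in obs_idx]
    s11 = sigma[np.ix_(miss_idx, miss_idx)]
    s12 = sigma[np.ix_(miss_idx, obs_idx)]
    s22 = sigma[np.ix_(obs_idx, obs_idx)]
    dev = np.asarray(obs_val, float) - mu[obs_idx]
    cond_mean = mu[miss_idx] + s12 @ np.linalg.solve(s22, dev)
    cond_cov = s11 - s12 @ np.linalg.solve(s22, s12.T)
    return cond_mean, cond_cov
```

For a bivariate normal with unit variances and correlation 0.8, observing the second variable at 1.0 gives a conditional mean of 0.8 and conditional variance of 0.36 for the first.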
López, E; Ibarz, E; Herrera, A; Puértolas, S; Gabarre, S; Más, Y; Mateo, J; Gil-Albarova, J; Gracia, L
2016-07-01
Osteoporotic vertebral fractures represent a major cause of disability, loss of quality of life and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture from bone mineral density (BMD) measurements. A previously developed model, based on damage and fracture mechanics, was applied for the evaluation of the mechanical magnitudes involved in the fracture process from clinical BMD measurements. BMD evolution in untreated patients and in patients with seven different treatments was analyzed from clinical studies in order to compare the variation in the risk of fracture. The predictive model was applied in a finite element simulation of the whole lumbar spine, obtaining detailed maps of damage and fracture probability and identifying high-risk local zones at the vertebral body. For every vertebra, strontium ranelate exhibits the highest decrease, whereas the minimum decrease is achieved with oral ibandronate. All the treatments manifest similar trends for every vertebra. Conversely, for the natural BMD evolution, as bone stiffness decreases, the mechanical damage and fracture probability show a significant increase (as occurs in the natural history of BMD). Vertebral walls and external areas of vertebral end plates are the zones at greatest risk, in coincidence with the typical locations of osteoporotic fractures, characterized by vertebral crushing due to the collapse of vertebral walls. This methodology could be applied to an individual patient to obtain the trends corresponding to different treatments, helping to identify at-risk individuals in the early stages of osteoporosis, and might be helpful for treatment decisions. PMID:27265047
ERIC Educational Resources Information Center
MacIntosh, Randall
1997-01-01
Presents KANT, a FORTRAN 77 software program that tests assumptions of multivariate normality in a data set. Based on the test developed by M. V. Mardia (1985), the KANT program is useful for those engaged in structural equation modeling with latent variables. (SLD)
A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Cheevatanarak, Suchittra
Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…
Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS
Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2016-01-01
We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust variabilities in ion intensities that are caused not by biological differences but by experimental bias. There are different sources of bias, including variabilities during sample collection and sample storage, poor experimental design, noise, etc. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EIC), with parameters considered as a linear mixed effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332
How To Generate Non-normal Data for Simulation of Structural Equation Models.
ERIC Educational Resources Information Center
Mattson, Stefan
1997-01-01
A procedure is proposed to generate non-normal data for simulation of structural equation models. The procedure uses a simple transformation of univariate random variables for the generation of data on latent and error variables under some restrictions for the elements of the covariance matrices for these variables. (SLD)
Brown, Patrick O.
2013-01-01
Background High throughput molecular-interaction studies using immunoprecipitations (IP) or affinity purifications are powerful and widely used in biology research. One of many important applications of this method is to identify the set of RNAs that interact with a particular RNA-binding protein (RBP). Here, the unique statistical challenge presented is to delineate a specific set of RNAs that are enriched in one sample relative to another, typically a specific IP compared to a non-specific control to model background. The choice of normalization procedure critically impacts the number of RNAs that will be identified as interacting with an RBP at a given significance threshold – yet existing normalization methods make assumptions that are often fundamentally inaccurate when applied to IP enrichment data. Methods In this paper, we present a new normalization methodology that is specifically designed for identifying enriched RNA or DNA sequences in an IP. The normalization (called adaptive or AD normalization) uses a basic model of the IP experiment and is not a variant of mean, quantile, or other methodology previously proposed. The approach is evaluated statistically and tested with simulated and empirical data. Results and Conclusions The adaptive (AD) normalization method results in a greatly increased range in the number of enriched RNAs identified, fewer false positives, and overall better concordance with independent biological evidence, for the RBPs we analyzed, compared to median normalization. The approach is also applicable to the study of pairwise RNA, DNA and protein interactions such as the analysis of transcription factors via chromatin immunoprecipitation (ChIP) or any other experiments where samples from two conditions, one of which contains an enriched subset of the other, are studied. PMID:23349766
Ding, Tian; Yu, Yan-Yan; Hwang, Cheng-An; Dong, Qing-Li; Chen, Shi-Guo; Ye, Xing-Qian; Liu, Dong-Hong
2016-01-01
The objectives of this study were to develop a probability model of Staphylococcus aureus enterotoxin A (SEA) production as affected by water activity (a(w)), pH, and temperature in broth and assess its applicability for milk. The probability of SEA production was assessed in tryptic soy broth using 24 combinations of a(w) (0.86 to 0.99), pH (5.0 to 7.0), and storage temperature (10 to 30°C). The observed probabilities were fitted with a logistic regression to develop a probability model. The model had a concordant value of 97.5% and concordant index of 0.98, indicating that the model satisfactorily describes the probability of SEA production. The model showed that a(w), pH, and temperature were significant factors affecting the probability of toxin production. The model predictions were in good agreement with the observed values obtained from milk. The model may help manufacturers in selecting product pH and a(w) and storage temperatures to prevent SEA production. PMID:26735042
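The abstract reports a logistic regression on a(w), pH and temperature but not the fitted coefficients; a sketch of the model form, with placeholder coefficients that are purely illustrative (not the study's fitted values), is:

```python
from math import exp

def p_sea_production(aw, ph, temp_c, coeffs=(-120.0, 90.0, 2.0, 0.5)):
    """Logistic probability of SEA production as a function of water
    activity, pH and storage temperature. The coefficients (intercept,
    b_aw, b_pH, b_temp) are hypothetical placeholders chosen only to
    illustrate the functional form."""
    b0, b_aw, b_ph, b_t = coeffs
    z = b0 + b_aw * aw + b_ph * ph + b_t * temp_c
    return 1.0 / (1.0 + exp(-z))
```

With positive slope coefficients, the predicted probability rises with each of a(w), pH and temperature, consistent with all three being significant factors in the study.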
NASA Astrophysics Data System (ADS)
Jian, Y.; Yao, R.; Mulnix, T.; Jin, X.; Carson, R. E.
2015-01-01
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than in the other methods.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
Muscle function may depend on model selection in forward simulation of normal walking.
Xiao, Ming; Higginson, Jill S
2008-11-14
The purpose of this study was to quantify how the predicted muscle function would change in a muscle-driven forward simulation of normal walking when changing the number of degrees of freedom in the model. Muscle function was described by individual muscle contributions to the vertical acceleration of the center of mass (COM). We built a two-dimensional (2D) sagittal plane model and a three-dimensional (3D) model in OpenSim and used both models to reproduce the same normal walking data. Perturbation analysis was applied to deduce muscle function in each model. Muscle excitations and contributions to COM support were compared between the 2D and 3D models. We found that the 2D model was able to reproduce similar joint kinematics and kinetics patterns as the 3D model. Individual muscle excitations were different for most of the hip muscles but ankle and knee muscles were able to attain similar excitations. Total induced vertical COM acceleration by muscles and gravity was the same for both models. However, individual muscle contributions to COM support varied, especially for hip muscles. Although there is currently no standard way to validate muscle function predictions, a 3D model seems to be more appropriate for estimating individual hip muscle function. PMID:18804767
Normal fault growth above pre-existing structures: insights from discrete element modelling
NASA Astrophysics Data System (ADS)
Wrona, Thilo; Finch, Emma; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Phillips, Thomas
2016-04-01
In extensional systems, pre-existing structures such as shear zones may affect the growth, geometry and location of normal faults. Recent seismic reflection-based observations from the North Sea suggest that shear zones not only localise deformation in the host rock, but also in the overlying sedimentary succession. While pre-existing weaknesses are known to localise deformation in the host rock, their effect on deformation in the overlying succession is less well understood. Here, we use 3-D discrete element modelling to determine if and how kilometre-scale shear zones affect normal fault growth in the overlying succession. Discrete element models use a large number of interacting particles to describe the dynamic evolution of complex systems. The technique has therefore been applied to describe fault and fracture growth in a variety of geological settings. We model normal faulting by extending a 60×60×30 km crustal rift-basin model including brittle and ductile interactions and gravitation and isostatic forces by 30%. An inclined plane of weakness which represents a pre-existing shear zone is introduced in the lower section of the upper brittle layer at the start of the experiment. The length, width, orientation and dip of the weak zone are systematically varied between experiments to test how these parameters control the geometric and kinematic development of overlying normal fault systems. Consistent with our seismic reflection-based observations, our results show that strain is indeed localised in and above these weak zones. In the lower brittle layer, normal faults nucleate, as expected, within the zone of weakness and control the initiation and propagation of neighbouring faults. Above this, normal faults nucleate throughout the overlying strata where their orientations are strongly influenced by the underlying zone of weakness. These results challenge the notion that overburden normal faults simply form due to reactivation and upwards propagation of pre
NASA Astrophysics Data System (ADS)
Tai, An; Liu, Feng; Gore, Elizabeth; Li, X. Allen
2016-05-01
We report a modeling study of tumor response after stereotactic body radiation therapy (SBRT) for early-stage non-small-cell lung carcinoma using published clinical data with a regrowth model. A linear-quadratic inspired regrowth model was proposed to analyze the tumor control probability (TCP) based on a series of published data on SBRT, in which a tumor is controlled for an individual patient if the number of tumor cells is smaller than a critical value Kcr. The regrowth model contains radiobiological parameters such as α, α/β, and the potential doubling time Tp. This model also takes into account the heterogeneity of tumors and tumor regrowth after radiation treatment. The model was first used to fit TCP data from a single institution. The extracted fitting parameters were then used to predict the TCP data from another institution with a similar dose fractionation scheme. Finally, the model was used to fit the pooled TCP data selected from 48 publications available in the literature at the time when this manuscript was written. Excellent agreement between model predictions and single-institution data was found, and the extracted radiobiological parameters were α = 0.010 ± 0.001 Gy‑1, α/β = 21.5 ± 1.0 Gy and Tp = 133.4 ± 7.6 d. These parameters were α = 0.072 ± 0.006 Gy‑1, α/β = 15.9 ± 1.0 Gy and Tp = 85.6 ± 24.7 d when extracted from multi-institution data. This study shows that TCP saturates at a BED of around 120 Gy. A few new dose-fractionation schemes were proposed based on the extracted model parameters from multi-institution data. It is found that the regrowth model with an α/β around 16 Gy can be used to predict the dose response of lung tumors treated with SBRT. The extracted radiobiological parameters may be useful for comparing clinical outcome data of various SBRT trials and for designing new treatment regimens.
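The ingredients of such a model can be sketched with the linear-quadratic BED and a Poisson TCP with exponential regrowth. The α, α/β and Tp values are the multi-institution fits quoted above; the initial clonogen number n0, the one-fraction-per-day default, and the plain Poisson form are illustrative assumptions, not the paper's exact Kcr criterion or heterogeneity treatment:

```python
from math import exp, log

def tcp_regrowth(n_frac, dose_per_frac, n0=1e3, alpha=0.072,
                 alpha_beta=15.9, t_p=85.6, t_treat_days=None):
    """Poisson TCP under the LQ model with exponential tumor regrowth.
    n0 and the Poisson/regrowth form are illustrative assumptions."""
    if t_treat_days is None:
        t_treat_days = n_frac  # crude assumption: one fraction per day
    # Biologically effective dose for n_frac fractions of dose_per_frac
    bed = n_frac * dose_per_frac * (1.0 + dose_per_frac / alpha_beta)
    # Expected surviving clonogens, with regrowth at doubling time t_p
    survivors = n0 * exp(-alpha * bed) * exp(log(2.0) * t_treat_days / t_p)
    return exp(-survivors)
```

Under these assumptions TCP rises steeply with fraction dose, illustrating why BED is the natural axis along which the pooled data saturate.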
Stein, Ross S.
2008-01-01
The Working Group on California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M=6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km2) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
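The two magnitude-area relations and their equal weighting can be sketched as follows; the constants are the commonly quoted forms of Ellsworth-B and the Hanks-Bakun bilinear relation (readers should confirm against the cited papers before relying on them):

```python
from math import log10

def mw_ellsworth_b(area_km2):
    """Ellsworth-B relation (WGCEP 2003): Mw = 4.2 + log10(A)."""
    return 4.2 + log10(area_km2)

def mw_hanks_bakun(area_km2):
    """Hanks-Bakun bilinear relation with a slope change near A = 537 km^2,
    reflecting the transition from area- to length-scaling of slip."""
    if area_km2 <= 537.0:
        return 3.98 + log10(area_km2)
    return 3.07 + (4.0 / 3.0) * log10(area_km2)

def mw_weighted(area_km2):
    """Equal weighting of the two relations, as the Working Group adopted."""
    return 0.5 * mw_ellsworth_b(area_km2) + 0.5 * mw_hanks_bakun(area_km2)
```

Note that the two Hanks-Bakun branches meet continuously at A = 537 km², which is the slope change described above.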
Glenn E McCreery; Keith G Condie
2006-09-01
The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Plant (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. The present document addresses experimental modeling of flow and thermal mixing phenomena of importance during normal or reduced-power operation and during a loss of forced reactor cooling (pressurized conduction cooldown) scenario. The objectives of the experiments are (1) to provide benchmark data for assessment and improvement of codes proposed for NGNP designs and safety studies, and (2) to obtain a better understanding of related phenomena, behavior and needs. Physical models of the VHTR vessel upper and lower plenums, which use various working fluids to scale the phenomena of interest, are described. The models may be used both to simulate natural convection conditions during pressurized conduction cooldown and to study turbulent lower plenum flow during normal or reduced-power operation.
The zoom lens of attention: Simulating shuffled versus normal text reading using the SWIFT model.
Schad, Daniel J; Engbert, Ralf
2012-04-01
Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading. PMID:22754295
NASA Astrophysics Data System (ADS)
Hong, Ban Zhen; Keong, Lau Kok; Shariff, Azmi Mohd
2016-05-01
The employment of different mathematical models to address specifically the bubble nucleation rates of water vapour and dissolved air molecules is essential, as the physics by which they form bubble nuclei differs. The available methods to calculate the bubble nucleation rate in a binary mixture, such as density functional theory, are complicated to couple with a computational fluid dynamics (CFD) approach. In addition, the effect of dissolved gas concentration was neglected in most studies on the prediction of bubble nucleation rates. The most probable bubble nucleation rate for the water vapour and dissolved air mixture in a 2D quasi-stable flow across a cavitating nozzle in the current work was estimated via the statistical mean of all possible bubble nucleation rates of the mixture (different mole fractions of water vapour and dissolved air) and the corresponding number of molecules in the critical cluster. Theoretically, the bubble nucleation rate is greatly dependent on the components' mole fractions in a critical cluster. Hence, the dissolved gas concentration effect was included in the current work. Besides, the possible bubble nucleation rates were predicted based on the calculated number of molecules required to form a critical cluster. The estimation of the components' mole fractions in the critical cluster for the water vapour and dissolved air mixture was obtained by coupling the enhanced classical nucleation theory and the CFD approach. In addition, the distribution of bubble nuclei of the water vapour and dissolved air mixture could be predicted via the utilisation of a population balance model.
Boitard, Simon; Loisel, Patrice
2007-05-01
The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutation. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time-consuming than other methods such as Monte Carlo simulations. PMID:17316725
NASA Astrophysics Data System (ADS)
Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru
2015-10-01
An integrated method of a proper orthogonal decomposition based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation, and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at a Reynolds number of 1000. The PF and EnKF are employed to estimate the temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of PF in representing a PDF compared to EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders of magnitude faster than the reference numerical simulation based on the Navier-Stokes equations.
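The bootstrap particle filter cycle (predict, weight by likelihood, resample) that underlies the ROM-PF scheme can be illustrated on a toy one-dimensional random-walk state; this is a generic sketch, not the authors' cylinder-wake implementation, and the noise levels are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0):
    """Bootstrap particle filter for a 1-D random-walk state, a toy stand-in
    for the temporal ROM coefficients estimated from wake velocities.
    Returns the posterior mean estimate at each observation time."""
    x = rng.normal(0.0, 1.0, n_particles)  # initial particle ensemble
    means = []
    for y in observations:
        x = x + rng.normal(0.0, proc_std, n_particles)    # predict step
        w = np.exp(-0.5 * ((y - x) / obs_std) ** 2)       # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)   # resample step
        x = x[idx]
        means.append(x.mean())
    return np.array(means)
```

Because the particles are a weighted sample rather than a Gaussian summary, the PF can represent non-Gaussian PDFs, which is the flexibility credited above for its advantage over the EnKF.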
Widesott, Lamberto; Pierelli, Alessio; Fiorino, Claudio; Lomax, Antony J.; Amichetti, Maurizio; Cozzarini, Cesare; Soukup, Martin; Schneider, Ralf; Hug, Eugen; Di Muzio, Nadia; Calandrino, Riccardo; Schwarz, Marco
2011-08-01
Purpose: To compare intensity-modulated proton therapy (IMPT) and helical tomotherapy (HT) treatment plans for high-risk prostate cancer (HRPCa) patients. Methods and Materials: The plans of 8 patients with HRPCa treated with HT were compared with IMPT plans using a two quasi-lateral field setup (-100°; 100°) optimized with the Hyperion treatment planning system. Both techniques were optimized to simultaneously deliver 74.2 Gy/Gy relative biologic effectiveness (RBE) in 28 fractions to planning target volumes (PTVs) 3-4 (P + proximal seminal vesicles), 65.5 Gy/Gy(RBE) to PTV2 (distal seminal vesicles and rectum/prostate overlap), and 51.8 Gy/Gy(RBE) to PTV1 (pelvic lymph nodes). Normal tissue complication probability (NTCP) calculations were performed for the rectum, and the generalized equivalent uniform dose (gEUD) was estimated for the bowel cavity, penile bulb and bladder. Results: Slightly better PTV coverage and homogeneity of target dose distribution with IMPT were found: the percentage of PTV volume receiving ≥95% of the prescribed dose (V95%) was on average >97% in HT and >99% in IMPT. The conformity indexes were significantly lower for protons than for photons, and there was a statistically significant reduction of the IMPT dosimetric parameters, up to 50 Gy/Gy(RBE) for the rectum and bowel and 60 Gy/Gy(RBE) for the bladder. The NTCP values for the rectum were higher in HT for all the sets of parameters, but the gain was small and in only a few cases statistically significant. Conclusions: Comparable PTV coverage was observed. Based on NTCP calculations, IMPT is expected to allow a small reduction in rectal toxicity, and a significant dosimetric gain with IMPT, in both the medium-dose and the low-dose range in all OARs, was observed.
NASA Technical Reports Server (NTRS)
Butler, Doug; Bauman, David; Johnson-Throop, Kathy
2011-01-01
The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and the associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of the IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of the medical risk data used to update the ISS PRA Model. The IMM has had limited validation, and an initial characterization of maturity has been completed using NASA STD 7009, Standard for Models and Simulations. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation and that the IMM output format and content meet the needs of the ISS Risk Management Office and the ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by the IMM.
Modeling of dislocations in a CDW junction: Interference of the CDW and normal carriers
NASA Astrophysics Data System (ADS)
Rojo Bravo, Alvaro; Yi, Tianyou; Kirova, Natasha; Brazovskii, Serguei
2015-03-01
We derive and study equations for dissipative transient processes in a constrained incommensurate charge density wave (CDW) with remnant pockets or a thermal population of normal carriers. Attention was paid to the correct conservation of condensed and normal electrons, which is problematic in the presence of moving dislocation cores when working within an intuitive Ginzburg-Landau-like model. We performed numerical modeling of stationary and transient states in a rectangular geometry where the voltage V or the normal current is applied across the conducting chains. We observe the creation of an array of electronic vortices, the dislocations, at or close to the junction surface; their number increases stepwise with increasing V. The dislocation core strongly concentrates the normal carriers, but the CDW phase distortions almost neutralize the total charge. In other regimes, lines of zero CDW amplitude flash across the sample, acting as phase slips. The studies were inspired by, and can be applied to, experiments on mesa junctions in NbSe3 and TaS3.
The July 17, 2006 Java Tsunami: Tsunami Modeling and the Probable Causes of the Extreme Run-up
NASA Astrophysics Data System (ADS)
Kongko, W.; Schlurmann, T.
2009-04-01
On 17 July 2006, an earthquake of magnitude Mw 7.8 off the south coast of west Java, Indonesia, generated a tsunami that affected over 300 km of the south Java coastline and killed more than 600 people. Observed tsunami heights and field measurements of run-up distributions were fairly uniform, approximately 5 to 7 m along a 200 km coastal stretch; remarkably, a locally focused tsunami run-up height exceeding 20 m was observed at Nusakambangan Island. Within the framework of the German Indonesia Tsunami Early Warning System (GITEWS) project, a high-resolution near-shore bathymetric survey using a multi-beam echo-sounder was recently conducted. Additional geodata have been collected using the Intermap Technologies STAR-4 airborne interferometric SAR data acquisition system on a 5 m ground sample distance basis in order to establish a highly detailed Digital Terrain Model (DTM). This paper describes the outcome of tsunami modelling approaches using high-resolution bathymetry and topography data as part of a general case study in Cilacap, Indonesia, and medium-resolution data for other areas along the coastline of south Java Island. By means of two different seismic deformation models to mimic the tsunami source generation, a numerical code based on the 2D nonlinear shallow water equations is used to simulate probable tsunami run-up scenarios. Several model tests are done, and virtual points offshore, near-shore, at the coastline, as well as tsunami run-up on the coast are collected. For the purpose of validation, the model results are compared with field observations and sea level data observed at several tide gauge stations. The performance of the numerical simulations and correlations with observed field data are highlighted, and probable causes for the extreme wave heights and run-ups are outlined.
NASA Astrophysics Data System (ADS)
Mandache, C.; Khan, M.; Fahr, A.; Yanishevsky, M.
2011-03-01
Probability of detection (PoD) studies are broadly used to determine the reliability of specific nondestructive inspection procedures, as well as to provide data for damage tolerance life estimations and the calculation of inspection intervals for critical components. They require inspections on a large set of samples, which makes these statistical assessments time-consuming and costly. Physics-based numerical simulations of nondestructive testing inspections could be used as a cost-effective alternative to empirical investigations. They realistically predict the inspection outputs as functions of the input characteristics related to the test piece, transducer, and instrument settings, which are subsequently used to partially substitute and/or complement inspection data in PoD analysis. This work focuses on the numerical modelling of eddy current testing for the bolt hole inspections of wing box structures typical of the Lockheed Martin C-130 Hercules and P-3 Orion aircraft, found in the air force inventories of many countries. Boundary element-based numerical modelling software was employed to predict the eddy current signal responses when varying inspection parameters related to probe characteristics, crack geometry, and test piece properties. Two demonstrator exercises were used for eddy current signal prediction when lowering the driver probe frequency and changing the material's electrical conductivity, followed by discussion and examination of the implications of using simulated data in PoD analysis. Despite some simplifying assumptions, the modelled eddy current signals were found to provide results similar to the actual inspections. It is concluded that physics-based numerical simulations have the potential to partially substitute for or complement the inspection data required for PoD studies, reducing the cost, time, effort, and resources necessary for a full empirical PoD assessment.
NASA Technical Reports Server (NTRS)
Demoulin, P.; Forbes, T. G.
1992-01-01
A technique which incorporates both photospheric and prominence magnetic field observations is used to analyze the magnetic support of solar prominences in two dimensions. The prominence is modeled by a mass-loaded current sheet which is supported against gravity by magnetic fields from a bipolar source in the photosphere and a massless line current in the corona. It is found that prominence support can be achieved in three different kinds of configurations: an arcade topology with a normal polarity; a helical topology with a normal polarity; and a helical topology with an inverse polarity. In all cases the important parameter is the variation of the horizontal component of the prominence field with height. Adding a line current external to the prominence eliminates the nonsupport problem which plagues virtually all previous prominence models with inverse polarity.
A model for studying the initiation of normal calcification in vivo
Alcock, Nancy W.; Reid, Judith A.
1969-01-01
1. Rat costal cartilage was found to begin to calcify normally when the rats weigh 35–45 g. 2. The cartilage is suggested as a model for the study in vivo of mechanisms concerned with normal calcification. 3. The model was tested by studying the incorporation of fluoride into the mineral deposited in the tissue. 4. The percentage of inorganic material in cartilage rose from approx. 3% of the dry weight in the uncalcified tissue to 62% in the tissue from rats weighing 300 g. 5. The mineral deposited had a calcium/phosphorus molar ratio of 1.65. 6. After the oral administration of sodium fluoride to rats, fluoride was incorporated into cartilage mineral. 7. The concentration of fluoride in cartilage ash increased rapidly with calcification, and the mineral became more highly fluoridated than the corresponding rib bone. 8. Fluoridated mineral showed a marked decrease in citrate concentration. PMID:5801672
NASA Technical Reports Server (NTRS)
1995-01-01
The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model the various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices, such as extinction, blowoff limits, and emissions predictions, because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy is determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation, with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF
Recognition of sine wave modeled consonants by normal hearing and hearing-impaired individuals
NASA Astrophysics Data System (ADS)
Balachandran, Rupa
Sine wave modeling is a parametric tool for representing the speech signal with a limited number of sine waves. It involves replacing the peaks of the speech spectrum with sine waves and discarding the rest of the lower-amplitude components during synthesis. It has the potential to be used as a speech enhancement technique for hearing-impaired adults. The present study answers the following basic questions: (1) Are sine wave synthesized speech tokens more intelligible than natural speech tokens? (2) What is the effect of varying the number of sine waves on consonant recognition in quiet? (3) What is the effect of varying the number of sine waves on consonant recognition in noise? (4) How does sine wave modeling affect the transmission of speech features in quiet and in noise? (5) Are there differences in recognition performance between normal hearing and hearing-impaired listeners? VCV syllables representing 20 consonants (/p/, /t/, /k/, /b/, /d/, /g/, /f/, /theta/, /s/, /∫/, /v/, /z/, /t∫/, /dy/, /j/, /w/, /r/, /l/, /m/, /n/) in three vowel contexts (/a/, /i/, /u/) were modeled with 4, 8, 12, and 16 sine waves. A consonant recognition task was performed in quiet and in background noise (+10 dB and 0 dB SNR). Twenty hearing-impaired listeners and six normal hearing listeners were tested under headphones at their most comfortable listening level. The main findings were: (1) Recognition of unprocessed speech was better than that of sine wave modeled speech. (2) Asymptotic performance was reached with 8 sine waves in quiet for both normal hearing and hearing-impaired listeners. (3) Consonant recognition performance in noise improved with an increasing number of sine waves. (4) As the number of sine waves was decreased, place information was lost first, followed by manner, and finally voicing. (5) Hearing-impaired listeners made more errors than normal hearing listeners, but there were no differences in the error patterns between the two groups.
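The synthesis idea can be caricatured for a single frame: keep only the largest spectral peaks and resynthesize from them alone. This is a hedged sketch (real sine wave modeling tracks peak frequencies, amplitudes, and phases across frames; `sinewave_model` and its parameters are illustrative, not the study's procedure):

```python
import numpy as np

def sinewave_model(signal, n_sines=8):
    """Single-frame caricature of sine wave modeling: keep only the
    n_sines largest-magnitude FFT bins (excluding DC) and resynthesize."""
    spec = np.fft.rfft(signal)
    mags = np.abs(spec)
    keep = np.argsort(mags[1:])[-n_sines:] + 1  # bins of the largest peaks
    pruned = np.zeros_like(spec)
    pruned[keep] = spec[keep]
    return np.fft.irfft(pruned, n=len(signal))
```

With fewer retained sines, more of the low-amplitude spectral structure is discarded, mirroring the degradation in place and manner cues the listeners experienced at 4 sine waves.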
NASA Astrophysics Data System (ADS)
Friedson, Andrew James; Ding, Leon
2015-11-01
We have developed a numerical model to calculate the frequencies and eigenfunctions of adiabatic, non-radial normal-mode oscillations in the gas giants and Titan. The model solves the linearized momentum, energy, and continuity equations for the perturbation displacement, pressure, and density fields and solves Poisson’s equation for the perturbation gravitational potential. The response to effects associated with planetary rotation, including the Coriolis force, centrifugal force, and deformation of the equilibrium structure, is calculated numerically. This provides the capability to accurately compute the influence of rotation on the modes, even in the limit where mode frequency approaches the rotation rate, when analytical estimates based on functional perturbation analysis become inaccurate. This aspect of the model makes it ideal for studying the potential role of low-frequency modes for driving spiral density waves in the C ring that possess relatively low pattern speeds (Hedman, M.M and P.D. Nicholson, MNRAS 444, 1369-1388). In addition, the model can be used to explore the effect of internal differential rotation on the eigenfrequencies. We will (1) present examples of applying the model to calculate the properties of normal modes in Saturn and their relationship to observed spiral density waves in the C ring, and (2) discuss how the model is used to examine the response of the superrotating atmosphere of Titan to the gravitational tide exerted by Saturn. This research was supported by a grant from the NASA Planetary Atmosphere Program.
NASA Astrophysics Data System (ADS)
Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.
2006-12-01
Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes the USC High Performance Computing Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probabilistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA) and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
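For a log-normal number distribution the mapping from measured averages to the reported means and standard deviations is direct: Mn = exp(μ + σ²/2) and Mw = exp(μ + 3σ²/2), so σ² = ln(Mw/Mn) and μ = ln Mn + σ²/2, converted to base 10. A minimal sketch (the Mn = 600, Mw = 900 inputs in the usage line are illustrative round numbers for a fulvic acid, not data from this study):

```python
import math

def lognormal_params(mn, mw):
    """Base-10 mean and standard deviation of a log-normal MW distribution,
    recovered from the number- and weight-average molecular weights."""
    s2 = math.log(mw / mn)        # sigma^2 = ln(polydispersity Mw/Mn)
    mu = math.log(mn) + s2 / 2.0  # natural-log mean
    ln10 = math.log(10.0)
    return mu / ln10, math.sqrt(s2) / ln10

mean10, sd10 = lognormal_params(600.0, 900.0)
```

With these illustrative inputs the recovered base-10 mean and standard deviation fall inside the 2.7–3 and 0.28–0.37 ranges quoted above.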
Human Normal Bronchial Epithelial Cells: A Novel In Vitro Cell Model for Toxicity Evaluation
Huang, Haiyan; Xia, Bo; Liu, Hongya; Li, Jie; Lin, Shaolin; Li, Tiyuan; Liu, Jianjun; Li, Hui
2015-01-01
Human normal cell-based systems are needed for drug discovery and toxicity evaluation. hTERT- or viral gene-transduced human cells are currently widely used for these studies, although such cells exhibit abnormal differentiation potential or abnormal responses to biological and chemical signals. In this study, we established human normal bronchial epithelial cells (HNBEC) using a defined primary epithelial cell culture medium without transduction of exogenous genes. This system may involve decreased IL-1 signaling and enhanced Wnt signaling in cells. Our data demonstrated that HNBEC exhibited a normal diploid karyotype. They formed well-defined spheres in matrigel 3D culture, while cancer cells (HeLa) formed disorganized aggregates. HNBEC cells possessed a normal cellular response to DNA damage and did not induce tumor formation in vivo in xenograft assays. Importantly, we assessed the potential of these cells in toxicity evaluation of common occupational toxicants that may affect the human respiratory system. Our results demonstrated that HNBEC cells are more sensitive than 16HBE cells (an SV40-immortalized human bronchial epithelial cell line) to exposure to 10~20 nm-sized SiO2, Cr(VI), and B(a)P. This study provides a novel in vitro human cell-based model for toxicity evaluation and may also facilitate studies in basic cell biology, cancer biology, and drug discovery. PMID:25861018
Human normal bronchial epithelial cells: a novel in vitro cell model for toxicity evaluation.
Feng, Wenqiang; Guo, Juanjuan; Huang, Haiyan; Xia, Bo; Liu, Hongya; Li, Jie; Lin, Shaolin; Li, Tiyuan; Liu, Jianjun; Li, Hui
2015-01-01
Human normal cell-based systems are needed for drug discovery and toxicity evaluation. hTERT- or viral gene-transduced human cells are currently widely used for these studies, although such cells exhibit abnormal differentiation potential or abnormal responses to biological and chemical signals. In this study, we established human normal bronchial epithelial cells (HNBEC) using a defined primary epithelial cell culture medium without transduction of exogenous genes. This system may involve decreased IL-1 signaling and enhanced Wnt signaling in cells. Our data demonstrated that HNBEC exhibited a normal diploid karyotype. They formed well-defined spheres in matrigel 3D culture, while cancer cells (HeLa) formed disorganized aggregates. HNBEC cells possessed a normal cellular response to DNA damage and did not induce tumor formation in vivo in xenograft assays. Importantly, we assessed the potential of these cells in toxicity evaluation of common occupational toxicants that may affect the human respiratory system. Our results demonstrated that HNBEC cells are more sensitive than 16HBE cells (an SV40-immortalized human bronchial epithelial cell line) to exposure to 10~20 nm-sized SiO2, Cr(VI), and B(a)P. This study provides a novel in vitro human cell-based model for toxicity evaluation and may also facilitate studies in basic cell biology, cancer biology, and drug discovery. PMID:25861018
NASA Technical Reports Server (NTRS)
Ruggier, C. J.
1992-01-01
The probability of exceeding interference power levels and the duration of interference at the Deep Space Network (DSN) antenna is calculated parametrically when the state vector of an Earth-orbiting satellite over the DSN station view area is not known. A conditional probability distribution function is derived, transformed, and then convolved with the interference signal uncertainties to yield the probability distribution of interference at any given instant during the orbiter's mission period. The analysis is applicable to orbiting satellites having circular orbits with known altitude and inclination angle.
Fluid-Structure Interaction Models of the Mitral Valve: Function in Normal and Pathologic States
Kunzelman, K. S.; Einstein, Daniel R.; Cochran, R. P.
2007-08-29
Successful mitral valve repair is dependent upon a full understanding of normal and abnormal mitral valve anatomy and function. Computational analysis is one such method that can be applied to simulate mitral valve function in order to analyze the roles of individual components and to evaluate proposed surgical repairs. We developed the first three-dimensional, finite element (FE) computer model of the mitral valve including leaflets and chordae tendineae; however, one critical aspect that had been missing until the last few years was the evaluation of fluid flow as coupled to the function of the mitral valve structure. We present here our latest results for normal function and specific pathologic changes using a fluid-structure interaction (FSI) model. Normal valve function was first assessed, followed by pathologic material changes in collagen fiber volume fraction, fiber stiffness, fiber splay, and isotropic stiffness. Leaflet and chordal stress and strain, and papillary muscle forces were determined. In addition, transmitral flow, time to leaflet closure, and heart valve sound were assessed. Model predictions in the normal state agreed well with a wide range of available in-vivo and in-vitro data. Further, pathologic material changes that preserved the anisotropy of the valve leaflets were found to preserve valve function. By contrast, material changes that altered the anisotropy of the valve were found to profoundly alter valve function. The addition of blood flow and an experimentally driven microstructural description of mitral tissue represent significant advances in computational studies of the mitral valve, which allow further insight to be gained. This work is another building block in the foundation of a computational framework to aid in the refinement and development of a truly noninvasive diagnostic evaluation of the mitral valve. Ultimately, it represents the basis for simulation of surgical repair of pathologic valves in a clinical and educational
Dubois, Rémi; Maison-Blanche, Pierre; Quenet, Brigitte; Dreyfus, Gérard
2007-12-01
This paper describes the automatic extraction of the P, Q, R, S and T waves of electrocardiographic recordings (ECGs), through the combined use of a new machine-learning algorithm termed generalized orthogonal forward regression (GOFR) and of a specific parameterized function termed the Gaussian mesa function (GMF). GOFR breaks up the heartbeat signal into Gaussian mesa functions, in such a way that each wave is modeled by a single GMF; the model thus generated is easily interpretable by the physician. GOFR is an essential ingredient in a global procedure that locates the R wave after some simple pre-processing, extracts the characteristic shape of each heartbeat, assigns P, Q, R, S and T labels through automatic classification, discriminates normal beats (NB) from abnormal beats (AB), and extracts features for diagnosis. The efficiency of the detection of the QRS complex, and of the discrimination of NB from AB, is assessed on the MIT and AHA databases; the labeling of the P and T waves is validated on the QTDB database. PMID:17997186
Thorwart, Anna; Livesey, Evan J; Harris, Justin A
2012-09-01
Harris and Livesey (Learning & Behavior, 38, 1-26, 2010) described an elemental model of associative learning that implements a simple learning rule that produces results equivalent to those proposed by Rescorla and Wagner (1972), and additionally modifies in "real time" the strength of the associative connections between elements. The novel feature of this model is that stimulus elements interact by suppressively normalizing one another's activation. Because of the normalization process, element activity is a nonlinear function of sensory input strength, and the shape of the function changes depending on the number and saliences of all stimuli that are present. The model can solve a range of complex discriminations and account for related empirical findings that have been taken as evidence for configural learning processes. Here we evaluate the model's performance against the host of conditioning phenomena that are outlined in the companion article, and we present a freely available computer program for use by other researchers to simulate the model's behavior in a variety of conditioning paradigms. PMID:22927005
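The suppressive-normalization idea can be shown in miniature: each element's activity is its input divided by a constant plus the pooled input of everything present, so adding stimuli reduces every element's activity nonlinearly. A simplified sketch (the divisive form and the constant k are illustrative assumptions, not the published update equations):

```python
def normalized_activation(inputs, k=0.5):
    """Divisive normalization: activity_i = input_i / (k + sum of all inputs)."""
    total = sum(inputs)
    return [x / (k + total) for x in inputs]

# An element presented alone is more active than the same element in a compound.
alone = normalized_activation([1.0])[0]             # 1 / 1.5
in_compound = normalized_activation([1.0, 1.0])[0]  # 1 / 2.5
```

This context dependence is what lets an elemental model mimic configural effects: the effective representation of a stimulus changes with the number and saliences of its companions.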
Acceptance, values, and probability.
Steel, Daniel
2015-10-01
This essay makes a case for regarding personal probabilities used in Bayesian analyses of confirmation as objects of acceptance and rejection. That in turn entails that personal probabilities are subject to the argument from inductive risk, which aims to show non-epistemic values can legitimately influence scientific decisions about which hypotheses to accept. In a Bayesian context, the argument from inductive risk suggests that value judgments can influence decisions about which probability models to accept for likelihoods and priors. As a consequence, if the argument from inductive risk is sound, then non-epistemic values can affect not only the level of evidence deemed necessary to accept a hypothesis but also degrees of confirmation themselves. PMID:26386533
NASA Astrophysics Data System (ADS)
Mahdyiar, M.; Galgana, G.; Shen-Tu, B.; Klein, E.; Pontbriand, C. W.
2014-12-01
Most time-dependent rupture probability (TDRP) models are basically designed for a single-mode rupture, i.e., a single characteristic earthquake on a fault. However, most subduction zones rupture in complex patterns that create overlapping earthquakes of different magnitudes. Additionally, the limited historical earthquake data do not provide sufficient information to estimate reliable mean recurrence intervals for earthquakes. This makes it difficult to identify a single characteristic earthquake for TDRP analysis. Physical models based on geodetic data have been successfully used to obtain information on the state of coupling and slip deficit rates for subduction zones. Coupling information provides valuable insight into the complexity of subduction zone rupture processes. In this study we present a TDRP model that is formulated based on the subduction zone slip deficit rate distribution. A subduction zone is represented by an integrated network of cells. Each cell ruptures multiple times from numerous earthquakes that have overlapping rupture areas. The rate of rupture for each cell is calculated using a moment balance concept that is calibrated against historical earthquake data. This information, in conjunction with estimates of coseismic slip from past earthquakes, is used to formulate time-dependent rupture probability models for the cells. Earthquakes on the subduction zone and their rupture probabilities are calculated by integrating different combinations of cells. The resulting rupture probability estimates are fully consistent with the state of coupling of the subduction zone and with the regional and local earthquake history, as the model takes into account the impact of all large (M>7.5) earthquakes on the subduction zone. The granular rupture model developed in this study allows estimating rupture probabilities for large earthquakes other than a single characteristic magnitude earthquake. This provides a general framework for formulating physically
Zupančič, Daša; Kreft, Mateja Erdani; Romih, Rok
2014-01-01
Bladder cancer adjuvant intravesical therapy could be optimized by more selective targeting of neoplastic tissue via specific binding of lectins to plasma membrane carbohydrates. Our aim was to establish rat and mouse models of bladder carcinogenesis to investigate in vivo and ex vivo binding of selected lectins to the luminal surface of normal and neoplastic urothelium. Male rats and mice were treated with 0.05% N-butyl-N-(4-hydroxybutyl)nitrosamine (BBN) in drinking water and used for ex vivo and in vivo lectin binding experiments. Urinary bladder samples were also used for paraffin embedding, scanning electron microscopy, and immunofluorescence labelling of uroplakins. During carcinogenesis, the structure of the urinary bladder luminal surface changed from microridges to microvilli and ropy ridges, and the expression of the urothelial-specific glycoproteins, the uroplakins, decreased. Ex vivo and in vivo lectin binding experiments gave comparable results. Jacalin (lectin from Artocarpus integrifolia) exhibited the highest selectivity for neoplastic compared to normal urothelium of rats and mice. The binding of lectin from Amaranthus caudatus decreased in the rat model and increased in the mouse model of carcinogenesis, indicating interspecies variations in plasma membrane glycosylation. Lectin from Datura stramonium showed higher affinity for neoplastic urothelium than for normal urothelium in both the rat and mouse models. The BBN-induced animal models of bladder carcinogenesis offer a promising approach for lectin binding experiments and further lectin-mediated targeted drug delivery research. Moreover, in vivo lectin binding experiments are comparable to ex vivo experiments, which should be considered when planning and optimizing future research. PMID:23828036
A neural model of basal ganglia-thalamocortical relations in normal and parkinsonian movement.
Contreras-Vidal, J L; Stelmach, G E
1995-10-01
Anatomical, neurophysiological, and neurochemical evidence supports the notion of parallel basal ganglia-thalamocortical motor systems. We developed a neural network model for the functioning of these systems during normal and parkinsonian movement. Parkinson's disease (PD), which results predominantly from nigrostriatal pathway damage, is used as a window to examine basal ganglia function. Simulations of dopamine depletion produce motor impairments consistent with motor deficits observed in PD that suggest the basal ganglia play a role in motor initiation and execution, and sequencing of motor programs. Stereotaxic lesions in the model's globus pallidus and subthalamic nucleus suggest that these lesions, although reducing some PD symptoms, may constrain the repertoire of available movements. It is proposed that paradoxical observations of basal ganglia responses reported in the literature may result from regional functional neuronal specialization, and the non-uniform distributions of neurochemicals in the basal ganglia. It is hypothesized that dopamine depletion produces smaller-than-normal pallidothalamic gating signals that prevent rescalability of these signals to control variable movement speed, and that in PD can produce smaller-than-normal movement amplitudes. PMID:7578481
Schirrmann, Kerstin; Mertens, Michael; Kertzscher, Ulrich; Kuebler, Wolfgang M; Affeld, Klaus
2010-04-19
Alveolar recruitment is a central strategy in the ventilation of patients with acute lung injury and other lung diseases associated with alveolar collapse and atelectasis. However, biomechanical insights into the opening and collapse of individual alveoli are still limited. A better understanding of alveolar recruitment and the interaction between alveoli in intact and injured lungs is of crucial relevance for the evaluation of the potential efficacy of ventilation strategies. We simulated human alveolar biomechanics in normal and injured lungs. We used a basic simulation model for the biomechanical behavior of virtual single alveoli to compute parameterized pressure-volume curves. Based on these curves, we analyzed the interaction and stability in a system composed of two alveoli. We introduced different values for surface tension and tissue properties to simulate different forms of lung injury. The data obtained predict that alveoli with identical properties can coexist with both different volumes and with equal volumes depending on the pressure. Alveoli in injured lungs with increased surface tension will collapse at normal breathing pressures. However, recruitment maneuvers and positive endexpiratory pressure can stabilize those alveoli, but coexisting unaffected alveoli might be overdistended. In injured alveoli with reduced compliance collapse is less likely, alveoli are expected to remain open, but with a smaller volume. Expanding them to normal size would overdistend coexisting unaffected alveoli. The present simulation model yields novel insights into the interaction between alveoli and may thus increase our understanding of the prospects of recruitment maneuvers in different forms of lung injury. PMID:20031137
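The coexistence and collapse behavior described above follows from the pressure each alveolus needs to stay open. A minimal sketch using the law of Laplace (the radii and surface-tension values are assumed order-of-magnitude figures; the paper's model additionally includes tissue elasticity, which can stabilize the configuration):

```python
def laplace_pressure(radius, surface_tension):
    # Law of Laplace for a spherical air-liquid interface: P = 2*gamma/r
    return 2.0 * surface_tension / radius

R_SMALL, R_LARGE = 50e-6, 100e-6      # alveolar radii, m (illustrative)
GAMMA_NORMAL = 0.025                  # N/m, assumed with functional surfactant
GAMMA_INJURED = 0.070                 # N/m, assumed surfactant-deficient value

for label, gamma in (("normal", GAMMA_NORMAL), ("injured", GAMMA_INJURED)):
    p_small = laplace_pressure(R_SMALL, gamma)
    p_large = laplace_pressure(R_LARGE, gamma)
    # The smaller alveolus has the higher recoil pressure, so without
    # stabilizing tissue forces it tends to empty into the larger one.
    print(f"{label}: P_small = {p_small:.0f} Pa, P_large = {p_large:.0f} Pa")
```

With increased surface tension (injury), the pressure needed to keep a small alveolus open rises sharply, which is why higher ventilation pressures can recruit injured alveoli while overdistending healthy neighbors.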
Implementation of Combined Feather and Surface-Normal Ice Growth Models in LEWICE/X
NASA Technical Reports Server (NTRS)
Velazquez, M. T.; Hansman, R. J., Jr.
1995-01-01
Experimental observations have shown that discrete rime ice growths called feathers, which grow in approximately the direction of water droplet impingement, play an important role in the growth of ice on accreting surfaces for some thermodynamic conditions. An improved physical model of ice accretion has been implemented in the LEWICE 2D panel-based ice accretion code maintained by the NASA Lewis Research Center. The LEWICE/X model of ice accretion explicitly simulates regions of feather growth within the framework of the LEWICE model. Water droplets impinging on an accreting surface are withheld from the normal LEWICE mass/energy balance and handled in a separate routine; ice growth resulting from these droplets is performed with enhanced convective heat transfer approximately along droplet impingement directions. An independent underlying ice shape is grown along surface normals using the unmodified LEWICE method. The resulting dual-surface ice shape models roughness-induced feather growth observed in icing wind tunnel tests. Experiments indicate that the exact direction of feather growth is dependent on external conditions. Data is presented to support a linear variation of growth direction with temperature and cloud water content. Test runs of LEWICE/X indicate that the sizes of surface regions containing feathers are influenced by initial roughness element height. This suggests that a previous argument that feather region size is determined by boundary layer transition may be incorrect. Simulation results for two typical test cases give improved shape agreement over unmodified LEWICE.
NASA Astrophysics Data System (ADS)
Li, Zhenglin; Zhang, Renhe; Li, Fenghua
2010-09-01
Ocean reverberation in shallow water is often the predominant background interference in active sonar applications. It is still an open problem in underwater acoustics. In recent years, an oscillation phenomenon of the reverberation intensity, due to the interference of the normal modes, has been observed in many experiments. A coherent reverberation theory has been developed and used to explain this oscillation phenomenon [F. Li et al., Journal of Sound and Vibration, 252(3), 457-468, 2002]. However, the published coherent reverberation theory is for the range independent environment. Following the derivations by F. Li and Ellis [D. D. Ellis, J. Acoust. Soc. Am., 97(5), 2804-2814, 1995], a general reverberation model based on the adiabatic normal mode theory in a range dependent shallow water environment is presented. From this theory the coherent or incoherent reverberation field caused by sediment inhomogeneity and surface roughness can be predicted. Observations of reverberation from the 2001 Asian Sea International Acoustic Experiment (ASIAEX) in the East China Sea are used to test the model. Model/data comparison shows that the coherent reverberation model can predict the experimental oscillation phenomenon of reverberation intensity and the vertical correlation of reverberation very well.
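The oscillation of reverberation intensity attributed above to normal-mode interference can be seen in a toy mode sum: a coherent sum keeps the cross-mode phase terms and oscillates with range, while an incoherent sum discards them and is flat. The modal wavenumbers and amplitudes below are illustrative assumptions, not values from the experiment:

```python
import numpy as np

ranges = np.linspace(1000.0, 5000.0, 400)   # receiver range, m
kn = np.array([0.8380, 0.8365, 0.8340])     # horizontal wavenumbers, rad/m (assumed)
an = np.array([1.0, 0.7, 0.4])              # modal amplitudes (assumed)

# Coherent field: modes interfere, producing range-dependent beats
field = np.sum(an[None, :] * np.exp(1j * kn[None, :] * ranges[:, None]), axis=1)
coherent = np.abs(field) ** 2
# Incoherent level: phase information discarded, no oscillation
incoherent = np.full_like(ranges, np.sum(an ** 2))

print(f"incoherent level: {incoherent[0]:.2f}")
print(f"coherent min/max: {coherent.min():.2f} / {coherent.max():.2f}")
```

The beat period is set by the wavenumber differences between modes, which is the interference structure a coherent reverberation model can predict and an incoherent one cannot.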
Kinetic modeling of hyperpolarized 13C 1-pyruvate metabolism in normal rats and TRAMP mice
NASA Astrophysics Data System (ADS)
Zierhut, Matthew L.; Yen, Yi-Fen; Chen, Albert P.; Bok, Robert; Albers, Mark J.; Zhang, Vickie; Tropp, Jim; Park, Ilwoo; Vigneron, Daniel B.; Kurhanewicz, John; Hurd, Ralph E.; Nelson, Sarah J.
2010-01-01
Purpose: To investigate metabolic exchange between 13C 1-pyruvate, 13C 1-lactate, and 13C 1-alanine in pre-clinical model systems using kinetic modeling of dynamic hyperpolarized 13C spectroscopic data and to examine the relationship between fitted parameters and dose-response. Materials and methods: Dynamic 13C spectroscopy data were acquired in normal rats, wild type mice, and mice with transgenic prostate tumors (TRAMP) either within a single slice or using a one-dimensional echo-planar spectroscopic imaging (1D-EPSI) encoding technique. Rate constants were estimated by fitting a set of exponential equations to the dynamic data. Variations in fitted parameters were used to determine model robustness in 15 mm slices centered on normal rat kidneys. Parameter values were used to investigate differences in metabolism between and within TRAMP and wild type mice. Results: The kinetic model was shown here to be robust when fitting data from a rat given similar doses. In normal rats, Michaelis-Menten kinetics were able to describe the dose-response of the fitted exchange rate constants with a 13.65% and 16.75% scaled fitting error (SFE) for kpyr→lac and kpyr→ala, respectively. In TRAMP mice, kpyr→lac increased an average of 94% after up to 23 days of disease progression, whether the mice were untreated or treated with casodex. Parameters estimated from dynamic 13C 1D-EPSI data were able to differentiate anatomical structures within both wild type and TRAMP mice. Conclusions: The metabolic parameters estimated using this approach may be useful for in vivo monitoring of tumor progression and treatment efficacy, as well as to distinguish between various tissues based on metabolic activity.
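Fitting exponential exchange equations of the kind described above can be sketched for a single pyruvate-to-lactate channel. All rates, amplitudes, and the noise level below are illustrative assumptions, not the study's fitted values; the pyruvate amplitude and its effective decay rate are treated as known, as if measured from the pyruvate curve:

```python
import numpy as np
from scipy.optimize import curve_fit

def lactate_signal(t, k_pl, r_lac, pyr0=100.0, r_pyr=0.08):
    # Solution of dL/dt = k_pl * P(t) - r_lac * L with P(t) = pyr0 * exp(-a*t),
    # where a = k_pl + r_pyr lumps exchange and pyruvate signal decay.
    a = k_pl + r_pyr
    return k_pl * pyr0 * (np.exp(-r_lac * t) - np.exp(-a * t)) / (a - r_lac)

# Simulate dynamic data with assumed ground-truth rates, then refit them
t = np.linspace(0.0, 60.0, 61)   # s
data = lactate_signal(t, 0.05, 0.03)
data += np.random.default_rng(1).normal(0.0, 0.1, t.size)
popt, _ = curve_fit(lactate_signal, t, data, p0=[0.02, 0.02])
print(f"fitted k_pl = {popt[0]:.4f} /s, r_lac = {popt[1]:.4f} /s")
```

Only k_pl and r_lac are fitted here; fixing the pyruvate parameters avoids the identifiability problem of estimating everything from the lactate curve alone.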
Non-Gaussian Photon Probability Distribution
NASA Astrophysics Data System (ADS)
Solomon, Benjamin T.
2010-01-01
This paper investigates the axiom that the photon's probability distribution is a Gaussian distribution. The Airy disc empirical evidence shows that the best fit, if not exact, distribution is a modified Gamma mΓ distribution (whose parameters are α = r, β = r/√u) in the plane orthogonal to the motion of the photon. This modified Gamma distribution is then used to reconstruct the probability distributions along the hypotenuse from the pinhole, arc from the pinhole, and a line parallel to photon motion. This reconstruction shows that the photon's probability distribution is not a Gaussian function. However, under certain conditions, the distribution can appear to be Normal, thereby accounting for the success of quantum mechanics. This modified Gamma distribution changes with the shape of objects around it and thus explains how the observer alters the observation. This property therefore places additional constraints on quantum entanglement experiments. This paper shows that photon interaction is a multi-phenomena effect consisting of the probability to interact Pi, the probabilistic function, and the ability to interact Ai, the electromagnetic function. Splitting the probability function Pi from the electromagnetic function Ai enables the investigation of the photon behavior from a purely probabilistic Pi perspective. The Probabilistic Interaction Hypothesis is proposed as a consistent method for handling the two different phenomena, the probability function Pi and the ability to interact Ai, thus redefining radiation shielding, stealth or cloaking, and invisibility as different effects of a single phenomenon Pi of the photon probability distribution. Sub wavelength photon behavior is successfully modeled as a multi-phenomena behavior. The Probabilistic Interaction Hypothesis provides a good fit to Otoshi's (1972) microwave shielding, Schurig et al. (2006) microwave cloaking, and Oulton et al. (2008) sub wavelength confinement; thereby providing a strong case that
Martens-Kuin models of normal and inverse polarity filament eruptions and coronal mass ejections
NASA Technical Reports Server (NTRS)
Smith, D. F.; Hildner, E.; Kuin, N. P. M.
1992-01-01
An analysis is made of the Martens-Kuin filament eruption model in relation to observations of coronal mass ejections (CMEs). The field lines of this model are plotted in the vacuum or infinite resistivity approximation with two background fields. The first is the dipole background field of the model and the second is the potential streamer model of Low. The Martens-Kuin model predicts that, as the filament erupts, the overlying coronal magnetic field lines rise in a manner inconsistent with observations of CMEs associated with eruptive filaments. This model and, by generalization, the whole class of so-called Kuperus-Raadu configurations in which a neutral point occurs below the filament, are of questionable utility for CME modeling. An alternate case is considered in which the directions of currents in the Martens-Kuin model are reversed, resulting in a so-called normal polarity configuration of the filament magnetic field. The background field lines now distort to support the filament and help eject it. While the vacuum field results make this configuration appear very promising, a full two- or more-dimensional MHD simulation is required to properly analyze the dynamics resulting from this configuration.
Titus, J.G.; Narayanan, V.K.
1995-10-01
The report develops probability-based projections that can be added to local tide-gage trends to estimate future sea level at particular locations. It uses the same models employed by previous assessments of sea level rise. The key coefficients in those models are based on subjective probability distributions supplied by a cross-section of climatologists, oceanographers, and glaciologists.
Modeling the Redshift Evolution of the Normal Galaxy X-Ray Luminosity Function
NASA Technical Reports Server (NTRS)
Tremmel, M.; Fragos, T.; Lehmer, B. D.; Tzanavaris, P.; Belczynski, K.; Kalogera, V.; Basu-Zych, A. R.; Farr, W. M.; Hornschemeier, A.; Jenkins, L.; Ptak, A.; Zezas, A.
2013-01-01
Emission from X-ray binaries (XRBs) is a major component of the total X-ray luminosity of normal galaxies, so X-ray studies of high-redshift galaxies allow us to probe the formation and evolution of XRBs on very long timescales (approximately 10 Gyr). In this paper, we present results from large-scale population synthesis models of binary populations in galaxies from z = 0 to approximately 20. We use as input into our modeling the Millennium II Cosmological Simulation and the updated semi-analytic galaxy catalog by Guo et al. to self-consistently account for the star formation history (SFH) and metallicity evolution of each galaxy. We run a grid of 192 models, varying all the parameters known from previous studies to affect the evolution of XRBs. We use our models and observationally derived prescriptions for hot gas emission to create theoretical galaxy X-ray luminosity functions (XLFs) for several redshift bins. Models with low common envelope efficiencies, a 50% twins mass ratio distribution, a steeper initial mass function exponent, and high stellar wind mass-loss rates best match observational results from Tzanavaris & Georgantopoulos, though they significantly underproduce bright early-type and very bright (L_x > 10^41 erg s^-1) late-type galaxies. These discrepancies are likely caused by uncertainties in hot gas emission and SFHs, active galactic nucleus contamination, and a lack of dynamically formed low-mass XRBs. In our highest likelihood models, we find that hot gas emission dominates the emission for most bright galaxies. We also find that the evolution of the normal galaxy X-ray luminosity density out to z = 4 is driven largely by XRBs in galaxies with X-ray luminosities between 10^40 and 10^41 erg s^-1.
Safaeian, Navid; David, Tim
2013-01-01
The oxygen exchange and correlation between the cerebral blood flow (CBF) and cerebral metabolic rate of oxygen consumption (CMRO2) in the cortical capillary levels for normal and pathologic brain functions remain the subject of debate. A 3D realistic mesoscale model of the cortical capillary network (non-tree like) is constructed using a random Voronoi tessellation in which each edge represents a capillary segment. The hemodynamics and oxygen transport are numerically simulated in the model, which involves rheological laws in the capillaries, oxygen diffusion, and non-linear binding of oxygen to hemoglobin, respectively. The findings show that the cerebral hypoxia due to a significant decreased perfusion (as can occur in stroke) can be avoided by a moderate reduction in oxygen demand. Oxygen extraction fraction (OEF) can be an important indicator for the brain oxygen metabolism under normal perfusion and misery-perfusion syndrome (leading to ischemia). The results demonstrated that a disproportionately large increase in blood supply is required for a small increase in the oxygen demand, which, in turn, is strongly dependent on the resting OEF. The predicted flow-metabolism coupling in the model supports the experimental studies of spatiotemporal stimulations in humans by positron emission tomography and functional magnetic resonance imaging. PMID:23921901
NASA Astrophysics Data System (ADS)
Qin, Dong-Mei; Guo, Ping; Hu, Zhan-Yi; Zhao, Yong-Heng
2003-06-01
For LAMOST, the largest sky survey program in China, the solution of the problem of automatic discrimination of stars from galaxies by spectra has shown that the results of the PSF test can be significantly refined. However, the problem is made worse when the redshifts of galaxies are not available. We present a new automatic method of star/(normal) galaxy separation, which is based on Statistical Mixture Modeling with Radial Basis Function Neural Networks (SMM-RBFNN). This work is a continuation of our previous one, where active and non-active celestial objects were successfully segregated. By combining the method in this paper and the previous one, stars can now be effectively separated from galaxies and AGNs by their spectra---a major goal of LAMOST, and an indispensable step in any automatic spectrum classification system. In our work, the training set includes standard stellar spectra from Jacoby's spectrum library and simulated galaxy spectra of E0, S0, Sa, Sb types with redshift ranging from 0 to 1.2, and the test set of stellar spectra from Pickles' atlas and SDSS spectra of normal galaxies with SNR of 13. Experiments show that our SMM-RBFNN is more efficient in both the training and testing stages than the BPNN (back propagation neural networks), and more importantly, it can achieve a good classification accuracy of 99.22% and 96.52%, respectively for stars and normal galaxies.
NASA Astrophysics Data System (ADS)
Verleysdonk, Sarah; Flores-Orozco, Adrian; Krautblatter, Michael; Kemna, Andreas
2010-05-01
Electrical resistivity tomography (ERT) has been used for the monitoring of permafrost-affected rock walls for some years now. To further enhance the interpretation of ERT measurements a deeper insight into error sources and the influence of error model parameters on the imaging results is necessary. Here, we present the effect of different statistical schemes for the determination of error parameters from the discrepancies between normal and reciprocal measurements - bin analysis and histogram analysis - using a smoothness-constrained inversion code (CRTomo) with an incorporated appropriate error model. The study site is located in galleries adjacent to the Zugspitze North Face (2800 m a.s.l.) at the border between Austria and Germany. A 20 m * 40 m rock permafrost body and its surroundings have been monitored along permanently installed transects - with electrode spacings of 1.5 m and 4.6 m - from 2007 to 2009. For data acquisition, a conventional Wenner survey was conducted as this array has proven to be the most robust array in frozen rock walls. Normal and reciprocal data were collected directly one after another to ensure identical conditions. The ERT inversion results depend strongly on the chosen parameters of the employed error model, i.e., the absolute resistance error and the relative resistance error. These parameters were derived (1) for large normal/reciprocal data sets by means of bin analyses and (2) for small normal/reciprocal data sets by means of histogram analyses. Error parameters were calculated independently for each data set of a monthly monitoring sequence to avoid the creation of artefacts (over-fitting of the data) or unnecessary loss of contrast (under-fitting of the data) in the images. The inversion results are assessed with respect to (1) raw data quality as described by the error model parameters, (2) validation via available (rock) temperature data and (3) the interpretation of the images from a geophysical as well as a
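The bin-analysis step described above, deriving error-model parameters from normal/reciprocal discrepancies, can be sketched on synthetic data. The linear error model s(R) = a + b·|R| and all numeric values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
A_TRUE, B_TRUE = 0.02, 0.03   # assumed absolute (ohm) and relative error parameters

# Synthetic transfer resistances and normal-minus-reciprocal discrepancies
# drawn from the linear error model s(R) = a + b*|R|
R = 10.0 ** rng.uniform(0.0, 3.0, 2000)
dR = rng.normal(0.0, A_TRUE + B_TRUE * R)

# Bin analysis: std of the discrepancy per resistance-magnitude bin,
# then a linear fit of std vs. mean |R| recovers the error parameters.
edges = np.logspace(0.0, 3.0, 11)
centers, stds = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (R >= lo) & (R < hi)
    if m.sum() > 20:
        centers.append(R[m].mean())
        stds.append(dR[m].std())
b_est, a_est = np.polyfit(centers, stds, 1)
print(f"estimated a = {a_est:.3f} ohm, b = {b_est:.4f} (true b = {B_TRUE})")
```

Estimating these parameters per monitoring data set, as the authors do, keeps the inversion from over-fitting (artefacts) or under-fitting (lost contrast) as data quality varies month to month.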
Nabawy, Mostafa R A; Crowther, William J
2014-05-01
This paper introduces a generic, transparent and compact model for the evaluation of the aerodynamic performance of insect-like flapping wings in hovering flight. The model is generic in that it can be applied to wings of arbitrary morphology and kinematics without the use of experimental data, is transparent in that the aerodynamic components of the model are linked directly to morphology and kinematics via physical relationships and is compact in the sense that it can be efficiently evaluated for use within a design optimization environment. An important aspect of the model is the method by which translational force coefficients for the aerodynamic model are obtained from first principles; however important insights are also provided for the morphological and kinematic treatments that improve the clarity and efficiency of the overall model. A thorough analysis of the leading-edge suction analogy model is provided and comparison of the aerodynamic model with results from application of the leading-edge suction analogy shows good agreement. The full model is evaluated against experimental data for revolving wings and good agreement is obtained for lift and drag up to 90° incidence. Comparison of the model output with data from computational fluid dynamics studies on a range of different insect species also shows good agreement with predicted weight support ratio and specific power. The validated model is used to evaluate the relative impact of different contributors to the induced power factor for the hoverfly and fruitfly. It is shown that the assumption of an ideal induced power factor (k = 1) for a normal hovering hoverfly leads to a 23% overestimation of the generated force owing to flapping. PMID:24554578
Comparing tests appear in model-check for normal regression with spatially correlated observations
NASA Astrophysics Data System (ADS)
Somayasa, Wayan; Wibawa, Gusti A.
2016-06-01
The problem of investigating the appropriateness of an assumed model in regression analysis was traditionally handled by means of an F test under independent observations. In this work we propose a more modern method based on the so-called set-indexed partial sums processes of the least squares residuals of the observations. We consider throughout this work univariate and multivariate regression models with spatially correlated observations, which are frequently encountered in statistical modelling in geosciences as well as in mining. The decision is drawn by performing an asymptotic test of statistical hypothesis based on the Kolmogorov-Smirnov and Cramér-von Mises functionals of the processes. We compare the two tests by investigating their power functions. The finite-sample behavior of the tests is studied by simulating the empirical probability of rejection of H0. It is shown that for the univariate model the KS test seems to be more powerful. Conversely, the Cramér-von Mises test tends to be more powerful than the KS test in the multivariate case.
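A minimal sketch of the two functionals named above, computed from the CUSUM (partial-sum) process of least-squares residuals under a correct linear model. Ordinary partial sums over covariate-ordered observations stand in for the paper's set-indexed version, and the data-generating model is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, n)   # data generated under H0

# Least-squares fit of the assumed model, then the partial-sum (CUSUM)
# process of its residuals, ordered by the covariate.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma = resid.std(ddof=2)
cusum = np.cumsum(resid) / (sigma * np.sqrt(n))

ks = np.max(np.abs(cusum))    # Kolmogorov-Smirnov functional (sup norm)
cvm = np.mean(cusum ** 2)     # Cramér-von Mises functional (integrated square)
print(f"KS = {ks:.3f}, CvM = {cvm:.3f}")
```

Under a misspecified model the residual partial sums drift systematically, inflating both statistics; the asymptotic test compares them against the distribution of the limiting process.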
NASA Astrophysics Data System (ADS)
Mannucci, F.; Basile, F.; Poggianti, B. M.; Cimatti, A.; Daddi, E.; Pozzetti, L.; Vanzi, L.
2001-09-01
We have observed 28 local galaxies in the wavelength range between 1 and 2.4μm in order to define template spectra of the normal galaxies along the Hubble sequence. Five galaxies per morphological type were observed in most cases, and the resulting rms spread of the normalized spectra of each class, including both intrinsic differences and observational uncertainties, is about 1 per cent in K, 2 per cent in H and 3 per cent in J. Many absorption features can be accurately measured. The target galaxies and the spectroscopic aperture (7×53arcsec2) were chosen to be similar to those used by Kinney et al. to define template UV and optical spectra. The two data sets are matched in order to build representative spectra between 0.1 and 2.4μm. The continuum shape of the optical spectra and the relative normalization of the near-IR ones were set to fit the average effective colours of the galaxies of the various Hubble classes. The resulting spectra are used to compute the k-corrections of the normal galaxies in the near-IR bands, and to check the predictions of various spectral synthesis models: while the shape of the continuum is generally well predicted, large discrepancies are found in the absorption lines. Among the other possible applications, here we also show how these spectra can be used to place constraints on the dominant stellar population in local galaxies. Spectra and k-corrections are publicly available and can be downloaded from the web site
Comparison of model estimated and measured direct-normal solar irradiance
Halthore, R.N.; Schwartz, S.E.; Michalsky, J.J.; Anderson, G.P.; Holben, B.N.; Ten Brink, H.M.
1997-12-01
Direct-normal solar irradiance (DNSI), the energy in the solar spectrum incident in unit time at the Earth's surface on a unit area perpendicular to the direction to the Sun, depends only on atmospheric extinction of solar energy without regard to the details of the extinction, whether absorption or scattering. Here we report a set of closure experiments performed in north central Oklahoma in April 1996 under cloud-free conditions, wherein measured atmospheric composition and aerosol optical thickness are input to a radiative transfer model, MODTRAN 3, to estimate DNSI, which is then compared with measured values obtained with normal incidence pyrheliometers and absolute cavity radiometers. Uncertainty in aerosol optical thickness (AOT) dominates the uncertainty in DNSI calculation. AOT measured by an independently calibrated Sun photometer and a rotating shadow-band radiometer agree to within the uncertainties of each measurement. For 36 independent comparisons the agreement between measured and model-estimated values of DNSI falls within the combined uncertainties in the measurement (0.3–0.7%) and model calculation (1.8%), albeit with a slight average model underestimate of (-0.18 ± 0.94)%; for a DNSI of 839 W m^-2 this corresponds to -1.5 ± 7.9 W m^-2. The agreement is nearly independent of air mass and water-vapor path abundance. These results thus establish the accuracy of the current knowledge of the solar spectrum, its integrated power, and the atmospheric extinction as a function of wavelength as represented in MODTRAN 3. An important consequence is that atmospheric absorption of short-wave energy is accurately parametrized in the model to within the above uncertainties. © 1997 American Geophysical Union
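Because DNSI depends only on total extinction along the slant path, a first-order estimate follows from Beer-Lambert attenuation of the solar constant. The broadband-equivalent optical depths below are assumed illustrative values, and a single broadband exponential is a simplification of the spectral integration a model like MODTRAN performs:

```python
import math

def direct_normal_irradiance(s0, optical_depths, airmass):
    # Beer-Lambert attenuation of the direct solar beam: component optical
    # depths (Rayleigh, aerosol, ozone, water vapor, ...) add in the exponent.
    tau_total = sum(optical_depths.values())
    return s0 * math.exp(-tau_total * airmass)

# Assumed broadband-equivalent optical depths for a clear midlatitude sky
taus = {"rayleigh": 0.09, "aerosol": 0.12, "ozone": 0.03, "water_vapor": 0.10}
dnsi = direct_normal_irradiance(s0=1361.0, optical_depths=taus, airmass=1.5)
print(f"estimated DNSI: {dnsi:.0f} W/m^2")
```

The sketch also shows why AOT uncertainty dominates: the aerosol term is both the largest adjustable optical depth and the hardest to measure.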
Normal D-region models for weapon-effects code. Technical report, 1 January-24 August 1985
Gambill
1985-09-18
This report examines several normal D-region models and their application to VLF/LF propagation predictions. Special emphasis is placed on defining models that reproduce measured normal propagation data and also provide reasonable departure/recovery conditions after an ionospheric disturbance. An interim numerical model is described that provides for selection of a range of normal D-region electron profiles and also provides for a smooth transition to disturbed profiles. Requirements are also examined for defining prescribed D-region profiles using complex aero-chemistry models.
Brake, M. R. W.
2015-02-17
Impact between metallic surfaces is a phenomenon that is ubiquitous in the design and analysis of mechanical systems. To model this phenomenon, a new formulation for frictional elastic–plastic contact between two surfaces is developed. The formulation is developed to consider both frictional, oblique contact (of which normal, frictionless contact is a limiting case) and strain hardening effects. The constitutive model for normal contact is developed as two contiguous loading domains: the elastic regime and a transitionary region in which the plastic response of the materials develops and the elastic response abates. For unloading, the constitutive model is based on an elastic process. Moreover, the normal contact model is assumed to couple only one-way with the frictional/tangential contact model, which results in the normal contact model being independent of the frictional effects. Frictional, tangential contact is modeled using a microslip model that is developed to consider the pressure distribution that develops from the elastic–plastic normal contact. This model is validated through comparisons with experimental results reported in the literature, and is demonstrated to be significantly more accurate than 10 other normal contact models and three other tangential contact models found in the literature.
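The elastic regime of such a normal-contact law is classically the Hertz solution. The sketch below implements only that limiting case (sphere on flat, assumed steel properties), not the paper's elastic-plastic transition or tangential microslip model:

```python
import math

def hertz_normal_force(delta, radius, e1, nu1, e2, nu2):
    # Hertzian elastic contact of a sphere on a flat:
    # F = (4/3) * E_star * sqrt(R) * delta^(3/2)
    e_star = 1.0 / ((1.0 - nu1 ** 2) / e1 + (1.0 - nu2 ** 2) / e2)
    return (4.0 / 3.0) * e_star * math.sqrt(radius) * delta ** 1.5

# Steel-on-steel example with assumed properties (E = 200 GPa, nu = 0.3)
F = hertz_normal_force(delta=1e-6, radius=5e-3,
                       e1=200e9, nu1=0.3, e2=200e9, nu2=0.3)
print(f"elastic normal force at 1 um penetration: {F:.1f} N")
```

The 3/2-power stiffening of force with penetration is the signature of the elastic domain; elastic-plastic models like the one above depart from it once the transitionary region begins.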
Cooper, Emily A
2016-04-01
Biological sensory systems share a number of organizing principles. One such principle is the formation of parallel streams. In the visual system, information about bright and dark features is largely conveyed via two separate streams: the ON and OFF pathways. While brightness and darkness can be considered symmetric and opposite forms of visual contrast, the response properties of cells in the ON and OFF pathways are decidedly asymmetric. Here, we ask whether a simple contrast-encoding model predicts asymmetries for brights and darks that are similar to the asymmetries found in the ON and OFF pathways. Importantly, this model does not include any explicit differences in how the visual system represents brights and darks, but it does include a common normalization mechanism. The phenomena captured by the model include (1) nonlinear contrast response functions, (2) greater nonlinearities in the responses to darks, and (3) larger responses to dark contrasts. We report a direct, quantitative comparison between these model predictions and previously published electrophysiological measurements from the retina and thalamus (guinea pig and cat, respectively). This work suggests that the simple computation of visual contrast may account for a range of early visual processing nonlinearities. Assessing explicit models of sensory representations is essential for understanding which features of neuronal activity these models can and cannot predict, and for investigating how early computations may reverberate through the sensory pathways. PMID:27044852
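Contrast-response nonlinearities of the kind the model captures are commonly written in Naka-Rushton (normalization) form. In the sketch below the two semi-saturation constants are purely illustrative assumptions used to mimic an ON/OFF asymmetry; the paper derives the asymmetry from a single shared normalization computation, not from separately tuned parameters:

```python
import numpy as np

def contrast_response(c, r_max=1.0, c50=0.15, n=2.0):
    # Naka-Rushton / normalization equation: the response to contrast c
    # saturates as the normalization pool grows with contrast.
    c = np.abs(c)
    return r_max * c ** n / (c ** n + c50 ** n)

contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
on_resp = contrast_response(contrasts, c50=0.25)    # assumed: shallower ON response
off_resp = contrast_response(contrasts, c50=0.10)   # assumed: more saturating OFF response
print(np.round(off_resp - on_resp, 3))
```

A lower semi-saturation constant yields both larger responses and stronger saturation at matched contrasts, the two dark-pathway asymmetries listed in points (2) and (3) above.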
A Spherical Chandrasekhar-Mass Delayed-Detonation Model for a Normal Type Ia Supernova
NASA Astrophysics Data System (ADS)
Blondin, Stéphane; Dessart, Luc; Hillier, D. John
2015-06-01
The most widely-accepted model for Type Ia supernovae (SNe Ia) is the thermonuclear disruption of a White Dwarf (WD) star in a binary system, although there is ongoing discussion about the combustion mode, the progenitor mass, and the nature of the binary companion. Observational evidence for diversity in the SN Ia population seems to require multiple progenitor channels or explosion mechanisms. In the standard single-degenerate (SD) scenario, the WD grows in mass through accretion of H-rich or He-rich material from a non-degenerate donor (e.g., a main-sequence star, a subgiant, a He star, or a red giant). When the WD is sufficiently close to the Chandrasekhar limit (˜1.4 M⊙), a subsonic deflagration front forms near the WD center which eventually transitions to a supersonic detonation (the so-called “delayed-detonation” model) and unbinds the star. The efficiency of the WD growth in mass remains uncertain, as repeated nova outbursts during the accretion process result in mass ejection from the WD surface. Moreover, the lack of observational signatures of the binary companion has cast some doubts on the SD scenario, and recent hydrodynamical simulations have put forward WD-WD mergers and collisions as viable alternatives. However, as shown here, the standard Chandrasekhar-mass delayed-detonation model remains adequate to explain many normal SNe Ia, in particular those displaying broad Si II 6355 Å lines. We present non-local-thermodynamic-equilibrium time-dependent radiative transfer simulations performed with CMFGEN of a spherically-symmetric delayed-detonation model from a Chandrasekhar-mass WD progenitor with 0.51 M⊙ of 56Ni (Fig. 1 and Table 1), and confront our results to the observed light curves and spectra of the normal Type Ia SN 2002bo over the first 100 days of its evolution. With no fine tuning, the model reproduces well the bolometric (Fig. 2) and multi-band light curves, the secondary near-infrared maxima (Fig. 3), and the spectroscopic
Sustained normalization of neurological disease after intracranial gene therapy in a feline model
McCurdy, Victoria J.; Johnson, Aime K.; Gray-Edwards, Heather; Randle, Ashley N.; Brunson, Brandon L.; Morrison, Nancy E.; Salibi, Nouha; Johnson, Jacob A.; Hwang, Misako; Beyers, Ronald J.; Leroy, Stanley G.; Maitland, Stacy; Denney, Thomas S.; Cox, Nancy R.; Baker, Henry J.; Sena-Esteves, Miguel; Martin, Douglas R.
2015-01-01
Progressive debilitating neurological defects characterize feline GM1 gangliosidosis, a lysosomal storage disease caused by deficiency of lysosomal β-galactosidase. No effective therapy exists for affected children, who often die before age 5. In the current study, an adeno-associated viral vector carrying the therapeutic gene was injected bilaterally into two brain targets (thalamus and deep cerebellar nuclei) of a feline model of GM1 gangliosidosis. Gene therapy normalized β-galactosidase activity and storage throughout the brain and spinal cord. The mean survival of 12 treated GM1 animals was >38 months compared to 8 months for untreated animals. Seven of the 8 treated animals remaining alive demonstrated normalization of disease, with abrogation of many symptoms including gait deficits and postural imbalance. Sustained correction of the GM1 gangliosidosis disease phenotype after limited intracranial targeting by gene therapy in a large animal model suggests that this approach may be useful for treating the human version of this lysosomal storage disorder. PMID:24718858
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.
1995-01-01
A propagation model method for extracting the normal incidence impedance of an acoustic material installed as a finite length segment in a wall of a duct carrying a nonprogressive wave field is presented. The method recasts the determination of the unknown impedance as the minimization of the normalized wall pressure error function. A finite element propagation model is combined with a coarse/fine grid impedance plane search technique to extract the impedance of the material. Results are presented for three different materials for which the impedance is known. For each material, the input data required for the prediction scheme were computed from modal theory and then contaminated by random error. The finite element method reproduces the known impedance of each material almost exactly for random errors typical of those found in many measurement environments. Thus, the method developed here provides a means for determining the impedance of materials in a nonprogressive wave environment such as that usually encountered in a commercial aircraft engine and most laboratory settings.
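The coarse/fine impedance-plane search can be sketched generically. The quadratic toy error function below stands in for the finite-element wall-pressure mismatch, which is an assumption for illustration only:

```python
import numpy as np

def coarse_fine_search(error_fn, re_range=(0.0, 4.0), im_range=(-4.0, 4.0),
                       n=21, refine=3):
    """Two-stage grid search over the complex impedance plane: a coarse pass,
    then progressively finer windows centered on the running minimum."""
    lo_r, hi_r = re_range
    lo_i, hi_i = im_range
    best = None
    for _ in range(refine):
        rs = np.linspace(lo_r, hi_r, n)
        xs = np.linspace(lo_i, hi_i, n)
        grid = rs[:, None] + 1j * xs[None, :]
        err = np.vectorize(error_fn)(grid)
        i, j = np.unravel_index(np.argmin(err), err.shape)
        best = grid[i, j]
        dr, di = (hi_r - lo_r) / n, (hi_i - lo_i) / n
        lo_r, hi_r = best.real - dr, best.real + dr   # shrink window (fine pass)
        lo_i, hi_i = best.imag - di, best.imag + di
    return best

# toy stand-in for the normalized wall-pressure error function
zeta_true = 1.5 - 0.7j
zeta_hat = coarse_fine_search(lambda z: abs(z - zeta_true) ** 2)
```

Each refinement pass narrows the search window around the current minimum, so the resolution improves geometrically with little extra cost.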
One Dimension Analytical Model of Normal Ballistic Impact on Ceramic/Metal Gradient Armor
Liu Lisheng; Zhang Qingjie; Zhai Pengcheng; Cao Dongfeng
2008-02-15
An analytical model of normal ballistic impact on ceramic/metal gradient armor, based on modified Alekseevskii-Tate equations, has been developed. In this model, the penetration of the gradient armor by a long rod is divided into four stages: the first is the projectile's mass erosion (its flowing, mushrooming, and rigid phases); the second is the formation of the comminuted ceramic conoid; the third is the penetration of the gradient layer; and the last is the penetration of the metal back-up plate. The equations of the third stage are derived by assuming rigid-plastic behavior of the gradient layer and accounting for the effect of strain rate on the dynamic yield strength.
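A minimal sketch of the underlying (unmodified) Alekseevskii-Tate stage, integrated with forward Euler, may be helpful. The tungsten/steel-like parameter values used below are illustrative, not the paper's gradient-layer equations:

```python
import math

def tate_penetration(v0, L0, rho_p, rho_t, Yp, Rt, dt=1e-8):
    """Forward-Euler integration of the classical Alekseevskii-Tate equations
    for an eroding long rod; returns the final penetration depth [m].
    v0: impact velocity, L0: rod length, Yp: rod strength, Rt: target resistance."""
    v, L, P = v0, L0, 0.0
    while v > 0.0 and L > 0.0:
        # solve 0.5*rho_p*(v-u)^2 + Yp = 0.5*rho_t*u^2 + Rt for interface speed u
        a = rho_t - rho_p
        b = 2.0 * rho_p * v
        c = -(rho_p * v * v + 2.0 * (Yp - Rt))
        if abs(a) < 1e-12:
            u = -c / b
        else:
            disc = b * b - 4.0 * a * c
            if disc < 0.0:          # rod can no longer penetrate
                break
            u = (-b + math.sqrt(disc)) / (2.0 * a)
        u = max(0.0, min(u, v))
        v += -Yp / (rho_p * L) * dt  # rod deceleration
        L += -(v - u) * dt           # rod erosion
        P += u * dt                  # penetration advance
    return P
```

The paper's four-stage model adds stage-dependent strength terms on top of this backbone; the sketch only shows the eroding-rod mechanics.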
Rotational normal modes of triaxial two-layered anelastic Earth model
NASA Astrophysics Data System (ADS)
Yang, Zhuo; Shen, WenBin
2016-04-01
This study provides the rotational normal modes of a triaxial two-layered anelastic Earth model, taking electromagnetic coupling into account. We formulate the rotation equation of the triaxial two-layered anelastic Earth model and then provide its solution. We obtain four mathematically possible solutions; for the present choice of conventional reference systems, only two of them correspond to the real, existing modes: the prograde Chandler wobble (CW) and the retrograde free core nutation (FCN). We provide the periods of the CW and FCN, as well as their quality factors, based on various experiments and observations. This study is supported by National 973 Project China (grant No. 2013CB733305) and NSFC (grant Nos. 41174011, 41210006, 41429401).
Numerical modeling of normal turbulent plane jet impingement on solid wall
NASA Astrophysics Data System (ADS)
Guo, C.-Y.; Maxwell, W. H. C.
1984-10-01
Attention is given to a numerical turbulence model for the impingement of a well developed normal plane jet on a solid wall, by means of which it is possible to express different jet impingement geometries in terms of different boundary conditions. Examples of these jets include those issuing from VTOL aircraft, chemical combustors, etc. The two-equation, turbulent kinetic energy-turbulent dissipation rate model is combined with the continuity equation and the transport equation of vorticity, using an iterative finite difference technique in the computations. Peak levels of turbulent kinetic energy occur not only in the impingement zone, but also in the intermingling zone between the edges of the free jet and the wall jet.
Assessment of IEC 61400-1 Normal Turbulence Model for Wind Conditions in Taiwan West Coast Areas
NASA Astrophysics Data System (ADS)
Leu, Tzong-Shyng; Yo, Jui-Ming; Tsai, Yi-Ting; Miau, Jiu-Jih; Wang, Ta-Chung; Tseng, Chien-Chou
2014-11-01
This paper studies the applicability of the Normal Turbulence Model (NTM) in IEC 61400-1 for wind conditions in the Taiwan west coast area, where future offshore wind farms are planned. The parameters for the standard deviation of the wind-speed fluctuations (σ̄/I_ref) are presented and compared with the IEC Normal Turbulence Model. The trend of the turbulence standard deviation (σ̄/I_ref) based on the observation data agrees qualitatively well with the IEC model; however, the IEC model yields a rather small (σ/I_ref) compared with the surveillance wind data in Taiwan. Model parameters for (σ̄/I_ref) and (σ/I_ref) based on the two-year observational wind data are therefore proposed: a, b, α, and β are 0.9125, 2.4345, 0.097, and 2.1875, respectively.
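For reference, the IEC 61400-1 (edition 3) NTM gives the representative standard deviation of wind speed as σ₁ = I_ref(0.75 V_hub + 5.6 m/s); the site-specific parameters the paper proposes would replace these fixed constants. A minimal sketch (turbulence class and wind speeds chosen for illustration):

```python
def ntm_sigma(v_hub, i_ref):
    """IEC 61400-1 (ed. 3) Normal Turbulence Model: representative (90th
    percentile) standard deviation of hub-height wind speed, sigma_1 [m/s]."""
    return i_ref * (0.75 * v_hub + 5.6)

# turbulence intensity sigma_1 / V_hub for a class A site (I_ref = 0.16)
ti_15 = ntm_sigma(15.0, 0.16) / 15.0
```

Because of the constant 5.6 m/s offset, the implied turbulence intensity decreases with hub-height wind speed, which is the behavior being compared against the Taiwanese data.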
Normal contact and friction of rubber with model randomly rough surfaces.
Yashima, S; Romero, V; Wandersman, E; Frétigny, C; Chaudhury, M K; Chateauminois, A; Prevost, A M
2015-02-01
We report on normal contact and friction measurements of model multicontact interfaces formed between smooth surfaces and substrates textured with a statistical distribution of spherical micro-asperities. Contacts are either formed between a rigid textured lens and a smooth rubber, or a flat textured rubber and a smooth rigid lens. Measurements of the real area of contact A versus normal load P are performed by imaging the light transmitted at the microcontacts. For both interfaces, A(P) is found to be sub-linear with a power law behavior. Comparison with two multi-asperity contact models, which extend the Greenwood-Williamson (J. Greenwood and J. Williamson, Proc. Royal Soc. London Ser. A, 295, 300 (1966)) model by taking into account the elastic interaction between asperities at different length scales, is performed, and allows their validation for the first time. We find that long range elastic interactions arising from the curvature of the nominal surfaces are the main source of the non-linearity of A(P). At a shorter range, and except for very low pressures, the pressure dependence of both density and area of microcontacts remains well described by Greenwood-Williamson's model, which neglects any interaction between asperities. In addition, in steady sliding, friction measurements reveal that the mean shear stress at the scale of the asperities is systematically larger than that found for a macroscopic contact between a smooth lens and a rubber. This suggests that frictional stresses measured at macroscopic length scales may not be simply transposed to microscopic multicontact interfaces. PMID:25514137
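The non-interacting Greenwood-Williamson baseline against which the measurements are compared can be sketched directly: Gaussian summit heights, Hertzian microcontacts, and no elastic coupling between asperities. All parameter values below are illustrative, not the paper's:

```python
import numpy as np

def gw_contact(d, R=10e-6, sigma=1.0e-6, eta=1.0e10, A0=1.0e-4, E_star=2.0e6):
    """Greenwood-Williamson contact of a nominally flat rough surface with a
    Gaussian summit-height distribution and NO asperity interaction.
    Returns (real contact area [m^2], normal load [N]) at separation d [m].
    R: summit radius, sigma: height std, eta: summit density, E_star: modulus."""
    z = np.linspace(d, d + 8.0 * sigma, 4001)   # summit heights in contact
    dz = z[1] - z[0]
    phi = np.exp(-z**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    pen = z - d                                  # Hertzian penetration depth
    trap = lambda f: float(np.sum(0.5 * (f[:-1] + f[1:])) * dz)
    A = eta * A0 * trap(np.pi * R * pen * phi)
    P = eta * A0 * trap((4.0 / 3.0) * E_star * np.sqrt(R) * pen**1.5 * phi)
    return A, P
```

In this baseline A(P) is nearly linear; the sub-linear power law reported in the abstract is attributed to the long-range elastic interactions the extended models add.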
Hallquist, Michael N.; Wright, Aidan G. C.
2015-01-01
Over the past 75 years, the study of personality and personality disorders has been informed considerably by an impressive array of psychometric instruments. Many of these tests draw on the perspective that personality features can be conceptualized in terms of latent traits that vary dimensionally across the population. A purely trait-oriented approach to personality, however, may overlook heterogeneity that is related to similarities among subgroups of people. This paper describes how factor mixture modeling (FMM), which incorporates both categories and dimensions, can be used to represent person-oriented and trait-oriented variability in the latent structure of personality. We provide an overview of different forms of FMM that vary in the degree to which they emphasize trait- versus person-oriented variability. We also provide practical guidelines for applying FMMs to personality data, and we illustrate model fitting and interpretation using an empirical analysis of general personality dysfunction. PMID:24134433
NASA Astrophysics Data System (ADS)
Naliboff, J. B.; Billen, M. I.
2010-12-01
A characteristic feature of global subduction zones is normal faulting in the outer rise region, which reflects flexure of the downgoing plate in response to the slab pull force. Variations in the patterns of outer rise normal faulting between different subduction zones likely reflect both the magnitude of flexural induced topography and the strength of the downgoing plate. In particular, the rheology of the uppermost oceanic lithosphere is likely to strongly control the faulting patterns, which have been well documented recently in both the Middle and South American trenches. These recent observations of outer rise faulting provide a unique opportunity to test different rheological models of the oceanic lithosphere using geodynamic numerical experiments. Here, we develop a new approach for modeling deformation in the outer rise and trench regions of downgoing slabs, and discuss preliminary 2-D numerical models examining the relationship between faulting patterns and the rheology of the oceanic lithosphere. To model viscous and brittle deformation within the oceanic lithosphere we use the CIG (Computational Infrastructure for Geodynamics) finite element code Gale, which is designed to solve long-term tectonic problems. In order to resolve deformation features on geologically realistic scales (< 1 km), we model only the portion of the subduction system seaward of the trench. Horizontal and vertical stress boundary conditions on the side walls drive subduction and reflect, respectively, the ridge-push and slab-pull plate-driving forces. The initial viscosity structure of the oceanic lithosphere and underlying asthenosphere follows a composite viscosity law that takes into account both Newtonian and non-Newtonian deformation. The viscosity structure is consequently governed primarily by the strain rate and thermal structure, which follows a half-space cooling model. Modification of the viscosity structure and development of discrete shear zones occur during yielding
CATALAN - LASHERAS,N.; COUSINEAU,S.; GALAMBOS,J.; HOLTKAMP,N.; RAPARIA,D.; SHAFER,R.; STAPLES,J.; STOVALL,J.; TANKE,E.; WANGLER,T.; WEI,J.
2002-06-03
The most demanding requirement in the design of the SNS accelerator chain is to keep the accelerator complex under hands-on maintenance. This requirement implies a hard limit for residual radiation below 100 mrem/hr at one foot from the vacuum pipe, four hours after shutdown, for one hundred days of normal operation. It has been shown by measurements as well as simulation [1] that this limit corresponds to 1-2 Watts/meter average beam losses. This loss level is achievable all around the machine except in specific areas where remote handling will be necessary. These areas have been identified and correspond to collimation sections and dumps, where a larger amount of controlled beam loss is foreseen. Even if the average level of loss is kept under 1 W/m, there are circumstances under which transient losses occur in the machine. The prompt radiation or potential damage in the accelerator components cannot be deduced from an average beam loss of 1 W/m. At the same time, controlled loss areas require a dedicated study to clarify the magnitude and distribution of the beam loss. From the front end to the target, we have estimated the most probable locations for transient losses and given an estimate of their magnitude and frequency. This information is essential to calculate the necessary shielding or determine the safety procedures during machine operation. Losses in controlled areas and the cleaning systems are the subject of Section 2. The inefficiency of each system will be taken into account in the discussion in Section 3, where the uncontrolled loss is estimated. Section 4 summarizes our findings and presents a global view of the losses along the accelerator chain.
NASA Astrophysics Data System (ADS)
Warrell, K. F.; Withjack, M. O.; Schlische, R. W.
2014-12-01
Field- and seismic-reflection-based studies have documented the influence of pre-existing thrust faults on normal-fault development during subsequent extension. Published experimental (analog) models of shortening followed by extension with dry sand as the modeling medium show limited extensional reactivation of moderate-angle thrust faults (dipping > 40°). These dry sand models provide insight into the influence of pre-existing thrusts on normal-fault development, but these models have not reactivated low-angle (< 35°) thrust faults as seen in nature. New experimental (analog) models, using wet clay over silicone polymer to simulate brittle upper crust over ductile lower crust, suggest that low-angle thrust faults from an older shortening phase can reactivate as normal faults. In two-phase models of shortening followed by extension, normal faults nucleate above pre-existing thrust faults and likely link with thrusts at depth to create listric faults, movement on which produces rollover folds. Faults grow and link more rapidly in two-phase than in single-phase (extension-only) models. Fewer faults with higher displacements form in two-phase models, likely because, for a given displacement magnitude, a low-angle normal fault accommodates more horizontal extension than a high-angle normal fault. The resulting rift basins are wider and shallower than those forming along high-angle normal faults. Features in these models are similar to natural examples. Seismic-reflection profiles from the outer Hebrides, offshore Scotland, show listric faults partially reactivating pre-existing thrust faults with a rollover fold in the hanging wall; in crystalline basement, the thrust is reactivated, and in overlying sedimentary strata, a new, high-angle normal fault forms. Profiles from the Chignecto subbasin of the Fundy basin, offshore Canada, show full reactivation of thrust faults as low-angle normal faults where crystalline basement rocks make up the footwall.
ERIC Educational Resources Information Center
Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire
2013-01-01
This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and evaluation analysis, specifically. Unlike the normal model, the GB model allows us to capture some real characteristics of the data, and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…
NASA Technical Reports Server (NTRS)
Courey, Karim; Wright, Clara; Asfour, Shihab; Bayliss, Jon; Ludwig, Larry
2008-01-01
Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has a currently unknown probability associated with it. Due to contact resistance, electrical shorts may not occur at lower voltage levels. In this experiment, we study the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From this data, we can estimate the probability of an electrical short, as a function of voltage, given that a free tin whisker has bridged two adjacent exposed electrical conductors. In addition, three tin whiskers grown from the same Space Shuttle Orbiter card guide used in the aforementioned experiment were cross sectioned and studied using a focused ion beam (FIB).
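Estimating a short-circuit probability as a function of voltage, as described above, is naturally cast as logistic regression. The bridging outcomes below are hypothetical stand-ins, not the experiment's measurements:

```python
import math

def fit_logistic(volts, shorted, lr=0.05, epochs=20000):
    """Fit P(short | V) = 1/(1 + exp(-(a + b*V))) by gradient ascent
    on the Bernoulli log-likelihood."""
    a, b = 0.0, 0.0
    n = float(len(volts))
    for _ in range(epochs):
        ga = gb = 0.0
        for v, y in zip(volts, shorted):
            p = 1.0 / (1.0 + math.exp(-(a + b * v)))
            ga += y - p
            gb += (y - p) * v
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# hypothetical bridging outcomes (1 = contact resistance broke down -> short)
volts   = [0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 12.0, 15.0]
shorted = [0,   0,   0,   1,   1,   1,   1,    1]
a, b = fit_logistic(volts, shorted)
p_short = lambda v: 1.0 / (1.0 + math.exp(-(a + b * v)))
```

A fitted curve of this kind lets a risk simulation replace the conservative "bridge implies short" assumption with a voltage-dependent probability.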
A normal stress subgrid-scale eddy viscosity model in large eddy simulation
NASA Technical Reports Server (NTRS)
Horiuti, K.; Mansour, N. N.; Kim, John J.
1993-01-01
The Smagorinsky subgrid-scale eddy viscosity model (SGS-EVM) is commonly used in large eddy simulations (LES) to represent the effects of the unresolved scales on the resolved scales. This model is known to be limited because its constant must be optimized in different flows, and it must be modified with a damping function to account for near-wall effects. The recent dynamic model is designed to overcome these limitations but is computationally intensive compared to the traditional SGS-EVM. In a recent study using direct numerical simulation data, Horiuti has shown that these drawbacks are due mainly to the use of an improper velocity scale in the SGS-EVM. He also proposed the use of the subgrid-scale normal stress as a new velocity scale, inspired by a high-order anisotropic representation model. Horiuti's testing, however, was conducted using DNS data from a low Reynolds number channel flow simulation. Further testing at higher Reynolds numbers, and with flows other than wall-bounded shear flows, was needed to establish the validity of the new model; this is the primary motivation of the present study. The objective is to test the new model using DNS databases of high Reynolds number channel and fully developed turbulent mixing layer flows. The use of both channel (wall-bounded) and mixing layer flows is important for the development of accurate LES models because these two flows encompass many characteristic features of complex turbulent flows.
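For context, the baseline Smagorinsky SGS-EVM that the abstract contrasts with Horiuti's normal-stress velocity scale computes ν_t = (C_s Δ)²|S̄| with |S̄| = √(2 S̄_ij S̄_ij). A 2-D finite-difference sketch (C_s and the test field are illustrative):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S| on a uniform
    2-D grid, with |S| = sqrt(2 * Sij * Sij) and filter width Delta = dx."""
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dx, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dx, axis=0)
    s12 = 0.5 * (dudy + dvdx)                       # off-diagonal strain rate
    s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag
```

For a pure shear u = γy this returns (C_s dx)²γ everywhere; in a wall-bounded flow a damping function would be applied on top, which is exactly the limitation the abstract mentions.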
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
NASA Astrophysics Data System (ADS)
Stein, David W. J.
2003-09-01
The normal compositional model (NCM) simultaneously models subpixel mixing and intra-class variation in multidimensional imagery. It may be used as the foundation for the derivation of supervised and unsupervised classification and detection algorithms. Results from applying the algorithm to AVIRIS SWIR data collected over Cuprite, Nevada are described. The NCM class means are compared with library spectra using the Tetracorder algorithm. Of the eighteen classes used to model the data, eleven are associated with minerals that are known to be in the scene and are distinguishable in the SWIR, five are identified with Fe-bearing minerals that are not further classifiable using SWIR data, and the remaining two are spatially diffuse mixtures. The NCM classes distinguish (1) high and moderate temperature alunites, (2) dickite and kaolinite, and (3) high and moderate aluminum concentration muscovite. Estimated abundance maps locate many of the known mineral features. Furthermore, the NCM class means are compared with corresponding endmembers estimated using a linear mixture model (LMM). Of the eleven identifiable (NCM class mean, LMM endmember) pairs, ten are consistently identified, while the NCM estimation procedure reveals a diagnostic feature of the eleventh that is more obscure in the corresponding endmember and results in conflicting identifications.
Menon, Shakti N; Flegg, Jennifer A; McCue, Scott W; Schugart, Richard C; Dawson, Rebecca A; McElwain, D L Sean
2012-08-22
The crosstalk between fibroblasts and keratinocytes is a vital component of the wound healing process, and involves the activity of a number of growth factors and cytokines. In this work, we develop a mathematical model of this crosstalk in order to elucidate the effects of these interactions on the regeneration of collagen in a wound that heals by second intention. We consider the role of four components that strongly affect this process: transforming growth factor-β, platelet-derived growth factor, interleukin-1 and keratinocyte growth factor. The impact of this network of interactions on the degradation of an initial fibrin clot, as well as its subsequent replacement by a matrix that is mainly composed of collagen, is described through an eight-component system of nonlinear partial differential equations. Numerical results, obtained in a two-dimensional domain, highlight key aspects of this multifarious process, such as re-epithelialization. The model is shown to reproduce many of the important features of normal wound healing. In addition, we use the model to simulate the treatment of two pathological cases: chronic hypoxia, which can lead to chronic wounds; and prolonged inflammation, which has been shown to lead to hypertrophic scarring. We find that our model predictions are qualitatively in agreement with previously reported observations and provide an alternative pathway for gaining insight into this complex biological process. PMID:22628464
Properties of the normal state of the two-dimensional Hubbard model
NASA Astrophysics Data System (ADS)
Lemay, Francois
Since their discovery, experimental studies have shown that high-temperature superconductors have a very strange normal phase. The properties of these materials are not well described by Fermi-liquid theory. The two-dimensional Hubbard model, although not yet solved, is still considered a candidate for explaining the physics of these compounds. In this work, we highlight several electronic properties of the model that are incompatible with the existence of quasiparticles. In particular, we show that the susceptibility of free electrons on a lattice contains logarithmic singularities that decisively influence the low-frequency properties of the self-energy. These singularities are responsible for the destruction of the quasiparticles. In the absence of antiferromagnetic fluctuations, they are also responsible for the existence of a small pseudogap in the spectral weight at the Fermi level. The properties of the model are also studied for a Fermi surface similar to that of the high-temperature superconductors. A parallel is drawn between certain characteristics of the model and those of these materials.
NASA Astrophysics Data System (ADS)
Long, Yongjun; Wei, Xiaohui; Wang, Chunlei; Dai, Xin; Wang, Shigang
2014-05-01
A new rotary normal stress electromagnetic actuator for fast steering mirror (FSM) is presented. The study includes concept design, actuating torque modeling, actuator design, and validation with numerical simulation. To achieve an FSM with compact structure and high bandwidth, the actuator is designed with a cross armature magnetic topology. By introducing bias flux generated by four permanent magnets (PMs), the actuator has a high force density similar to a solenoid but also essentially linear characteristics similar to a voice coil actuator, leading to a simple control algorithm. The actuating torque output is a linear function of both driving current and rotation angle and is formulated with the equivalent magnetic circuit method. To improve modeling accuracy, both the PM flux and coil flux leakages are taken into consideration through finite element simulation. Based on the established actuator model, an optimal design of the actuator is presented to meet the requirements of our FSM. Numerical simulation is then presented to validate the concept design, the established actuator model, and the designed actuator. The calculated results are shown to be in good agreement with the simulation results.
Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.
2012-01-01
Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
NASA Astrophysics Data System (ADS)
Urrutia, Jackie D.; Tampis, Razzcelle L.; Mercado, Joseph; Baygan, Aaron Vito M.; Baccay, Edcon B.
2016-02-01
The objective of this research is to formulate a mathematical model for the Philippines' Real Gross Domestic Product (Real GDP). The following factors are considered as the independent variables that can influence the Real GDP in the Philippines (y): Consumers' Spending (x1), Government's Spending (x2), Capital Formation (x3) and Imports (x4). The researchers used a Normal Estimation Equation using matrices to create the model for Real GDP, with α = 0.01. The researchers analyzed quarterly data from 1990 to 2013, acquired from the National Statistical Coordination Board (NSCB), resulting in a total of 96 observations for each variable. The data, particularly the dependent variable (y), underwent a logarithmic transformation to satisfy all the assumptions of multiple linear regression analysis. The mathematical model for Real GDP was formulated using matrices through MATLAB. Based on the results, only three of the independent variables are significant to the dependent variable, namely Consumers' Spending (x1), Capital Formation (x3) and Imports (x4), and hence can predict Real GDP (y). The regression analysis shows that the model explains 98.7% of the variation (coefficient of determination). With a 97.6% result in the paired t-test, the predicted values obtained from the model showed no significant difference from the actual values of Real GDP. This research will be essential in appraising forthcoming changes, to aid the Government in implementing policies for the development of the economy.
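The "Normal Estimation Equation using Matrices" is the familiar least-squares solution b = (XᵀX)⁻¹Xᵀy. A sketch with synthetic data (the NSCB series are not reproduced here, and the variable names are stand-ins):

```python
import numpy as np

def normal_equation(X, y):
    """Least-squares coefficients via the normal equations b = (X'X)^-1 X'y,
    solved with a linear solver rather than an explicit inverse."""
    Xd = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    return np.linalg.solve(Xd.T @ Xd, Xd.T @ y)

# synthetic stand-ins for three significant regressors, 96 quarterly observations
rng = np.random.default_rng(0)
X = rng.normal(size=(96, 3))
beta_true = np.array([2.0, 0.5, 0.3, 0.2])       # intercept + three slopes
y = beta_true[0] + X @ beta_true[1:] + rng.normal(scale=0.01, size=96)
beta_hat = normal_equation(X, y)
```

Using `np.linalg.solve` on XᵀX avoids forming the inverse explicitly, which is both faster and numerically safer for well-conditioned design matrices.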
Modeling the response of normal and ischemic cardiac tissue to electrical stimulation
NASA Astrophysics Data System (ADS)
Kandel, Sunil Mani
Heart disease, the leading cause of death worldwide, is often caused by ventricular fibrillation. A common treatment for this lethal arrhythmia is defibrillation: a strong electrical shock that resets the heart to its normal rhythm. To design better defibrillators, we need a better understanding of both fibrillation and defibrillation. Fundamental mysteries remain regarding the mechanism of how the heart responds to a shock, particularly anodal shocks and the resultant hyperpolarization. Virtual anodes play critical roles in defibrillation, and one cannot build better defibrillators until these mechanisms are understood. We use mathematical modeling to numerically simulate observed phenomena and to explore the fundamental mechanisms responsible for the heart's electrical behavior. Such simulations clarify mechanisms and identify key parameters. We investigate how systolic tissue responds to an anodal shock and how refractory tissue reacts to hyperpolarization by studying the dip in the anodal strength-interval curve. This dip is due to electrotonic interaction between regions of depolarization and hyperpolarization following a shock. The dominance of the electrotonic mechanism over calcium interactions implies the importance of the spatial distribution of virtual electrodes. We also investigate the response of localized ischemic tissue to an anodal shock by modeling a regional elevation of extracellular potassium concentration. This heterogeneity leads to action potential instability, 2:1 conduction block (alternans), and reflection-like reentry at the border of the normal and ischemic regions. This kind of reflection (reentry) occurs when the delay between the proximal and distal segments allows the distal segment to re-excite the proximal one. Our numerical simulations are based on the bidomain model, the state-of-the-art mathematical description of how cardiac tissue responds to shocks. The dynamic Luo-Rudy model describes the active properties of the membrane. To model ischemia
Are current models for normal fault array evolution applicable to natural rifts?
NASA Astrophysics Data System (ADS)
Bell, R. E.; Jackson, C. A. L.
2015-12-01
Conceptual models predicting the geometry and evolution of normal fault arrays are vital to assess rift physiography, syn-rift sediment dispersal and seismic hazard. Observations from data-rich rifts and numerical and physical models underpin widely used fault array models predicting: i) during rift initiation, arrays are defined by multiple, small, isolated faults; ii) as rifting progresses, strain localises onto fewer larger structures; and iii) with continued strain, faulting migrates toward the rift axis, resulting in rift narrowing. Some rifts display these characteristics whereas others do not. Here we present several case studies documenting fault migration patterns that do not fit this ideal. In this presentation we will begin by reviewing existing fault array models before presenting a series of case studies (including from the northern North Sea and the Gulf of Corinth), which document fault migration patterns that are not predicted by current fault evolution models. We show that strain migration onto a few, large faults is common in many rifts but that, rather than localising onto these structures until the cessation of rifting, strain may 'sweep' across the basin. Furthermore, crustal weaknesses developed in early tectonic events can cause faults during subsequent phases of extension to grow relatively quickly and accommodate the majority, if not all, of the rift-related strain; in these cases, strain migration does not and need not occur. Finally, in salt-influenced rifts, strain localisation may not occur at all; rather, strain may become progressively more diffuse due to tilting of the basement and intrastratal salt décollements, thus leading to superimposition of thin-skinned, gravity-driven and thick-skinned, plate-driven, basement-involved extension. We call for the community to unite to develop the next generation of normal fault array models that include complexities such as the thermal and rheological properties of the lithosphere, specific
Tierney, M.S.
1990-12-01
A five-step procedure was used in the 1990 performance simulations to construct probability distributions of the uncertain variables appearing in the mathematical models used to simulate the Waste Isolation Pilot Plant's (WIPP's) performance. This procedure provides a consistent approach to the construction of probability distributions in cases where empirical data concerning a variable are sparse or absent, and it minimizes the amount of spurious information that is often introduced into a distribution by the assumptions of nonspecialists. The procedure gives first priority to the professional judgment of subject-matter experts and emphasizes the use of site-specific empirical data for the construction of the probability distributions when such data are available. In the absence of sufficient empirical data, the procedure employs the Maximum Entropy Formalism and the subject-matter experts' subjective estimates of the parameters of the distribution to construct a distribution that can be used in a performance simulation. 23 refs., 4 figs., 1 tab.
Linear quadratic modeling of increased late normal-tissue effects in special clinical situations
Jones, Bleddyn E-mail: b.jones.1@bham.ac.uk; Dale, Roger G.; Gaya, Andrew M.
2006-03-01
Purpose: To extend linear quadratic theory to allow for changes in normal-tissue radiation tolerance after exposure to cytotoxic chemotherapy, after surgery, and in elderly patients. Methods: Examples of these situations are analyzed by use of the biologic effective dose (BED) concept. Changes in tolerance can be allowed for either by estimating the contribution of the additional factor as an equivalent BED (or as an equivalent dose in 2-Gy fractions) or by expressing the degree of radiosensitization as a mean dose-modifying factor (x). Results: The estimated x value is 1.063 (95% confidence limits for the mean, 1.056 to 1.070) for subcutaneous fibrosis after cyclophosphamide, methotrexate, and fluorouracil (CMF) chemotherapy and radiotherapy in breast cancer. The point estimate of x is 1.18 for the additional risk of gastrointestinal late-radiation effects after abdominal surgery in lymphoma patients (or 10.62 Gy at 2 Gy per fraction). For shoulder fibrosis in patients older than 60 years after breast and nodal irradiation, x is estimated to be 1.033 (95% confidence limits for the mean, 1.028 to 1.0385). The equivalent BED values were 6.48 Gy{sub 3} for CMF chemotherapy, 17.73 Gy{sub 3} for surgery, and 3.61 Gy{sub 3} for age. Conclusions: The LQ model can, in principle, be extended to quantify reduced normal-tissue tolerance in special clinical situations.
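The BED arithmetic behind estimates like these can be sketched in a few lines. This is an illustrative reading of the standard linear-quadratic formulas, not the authors' code; the 25 x 2 Gy schedule and the alpha/beta value of 3 Gy are assumptions, and only the dose-modifying factor x = 1.063 comes from the abstract.

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose (Gy) for n fractions of d Gy each:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n * d * (1 + d / alpha_beta)

def eqd2(bed_value, alpha_beta):
    """Equivalent total dose delivered in 2-Gy fractions."""
    return bed_value / (1 + 2 / alpha_beta)

# Hypothetical schedule: 25 fractions of 2 Gy to late-responding tissue
# (alpha/beta = 3 Gy).
base = bed(25, 2.0, 3.0)              # 83.33 Gy_3

# A dose-modifying factor x (e.g. x = 1.063 for CMF chemotherapy in the
# abstract) multiplies the effective dose per fraction; the difference from
# the unmodified schedule is the equivalent BED of the additional factor.
modified = bed(25, 2.0 * 1.063, 3.0)
extra_bed = modified - base
```

Subtracting the two BEDs yields an equivalent BED of a few Gy{sub 3}, the same kind of quantity the abstract reports for chemotherapy, surgery, and age.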
NASA Astrophysics Data System (ADS)
Lyu, Jhen-Yi; Chang, Yu-Yi; Lee, Chung-Jung; Lin, Ming-Lang
2015-04-01
The depth and character of the overlying earth deposit contribute to the fault rupture path. For cohesive soil such as clay, tension cracks open at the ground surface during faulting, limiting the propagation of the fracture in the soil mass. The cracks propagate downwards while the fracture induced by the initial displacement of faulting propagates upwards. The connection of cracks and fracture forms a plane related to the tri-shear zone; however, the mechanism of this connection has not been discussed thoroughly. By obtaining the evolution of the ground deformation zone we can understand the mechanism of fault propagation and crack-fracture connection. In this study, a series of centrifuge tests and numerical models are conducted at acceleration conditions of 40g, 50g and 80g, with a dip angle of 60° on normal faulting. The model has a total overburden thickness H of 0.2 m and a vertical displacement of the moving wall, ∆H. At the beginning, the hanging wall and the left-boundary wall move along the fault plane; when ∆H/H reaches 25%, both walls stop moving. We can then calculate the width of ground deformation at different depths of each model by a logical method. The models consist of two different overburden materials to simulate sand and clay in situ. Unlike the finite element method, the distinct element method allows the mechanism of fault propagation in the soil mass and the development of the ground deformation zone to be observed directly in the numerical analysis of faulting. Information on force and deformation in the numerical model is also easier to obtain than in centrifuge modelling. Therefore, we take the results of centrifuge modelling as the field outcrop and modify the micro-parameters of the numerical analysis to ensure both have the same attitude. The results show that in centrifuge modelling a narrower ground deformation zone appears in the clayey overburden model, whereas that of the sandy overburden model is wider on the footwall. Increasing the strength
Minimizing the probable maximum flood
Woodbury, M.S.; Pansic, N.; Eberlein, D.T.
1994-06-01
This article examines Wisconsin Electric Power Company's efforts to determine an economical way to comply with Federal Energy Regulatory Commission requirements at two hydroelectric developments on the Michigamme River. Their efforts included refinement of the area's probable maximum flood model based, in part, on a newly developed probable maximum precipitation estimate.
ERIC Educational Resources Information Center
Koo, Reginald; Jones, Martin L.
2011-01-01
Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
Source models of great earthquakes from ultra low-frequency normal mode data
NASA Astrophysics Data System (ADS)
Lentas, Konstantinos; Ferreira, Ana; Clévédé, Eric
2014-05-01
We present a new earthquake source inversion technique based on normal mode data for the simultaneous determination of the rupture duration, length and moment tensor of large earthquakes with unilateral rupture. We use ultra low-frequency (f < 1 mHz) normal mode spheroidal multiplets and the phases of split free oscillations, which are modelled using Higher Order Perturbation Theory (HOPT), taking into account the Earth's rotation, ellipticity and lateral heterogeneities. A Monte Carlo exploration of the model space is carried out, enabling the assessment of source parameter tradeoffs and uncertainties. We carry out synthetic tests for four different realistic artificial earthquakes with different faulting mechanisms and magnitudes (Mw 8.1-9.3) to investigate errors in the source inversions due to: (i) unmodelled 3-D Earth structure; (ii) noise in the data; (iii) uncertainties in spatio-temporal earthquake location; and, (iv) neglecting the source finiteness in point source moment tensor inversions. We find that unmodelled 3-D structure is the most serious source of errors for rupture duration and length determinations especially for the lowest magnitude artificial events. The errors in moment magnitude and fault mechanism are generally small, with the rake angle showing systematically larger errors (up to 20 degrees). We then carry out source inversions of five giant thrust earthquakes (Mw ≥ 8.5) and one deep event: (i) the 26 December 2004 Sumatra-Andaman earthquake; (ii) the 28 March 2005 Nias, Sumatra earthquake; (iii) the 12 September 2007 Bengkulu earthquake; (iv) the Tohoku, Japan earthquake of 11 March 2011; (v) the Maule, Chile earthquake of 27 February 2010; and (vi) the recent 24 May 2013 Mw 8.3 Okhotsk Sea, Russia, deep (607 km) earthquake. While finite source inversions for rupture length, duration, magnitude and fault mechanism are possible for the Sumatra-Andaman and Tohoku events, for all the other events their lower magnitudes do not allow stable inversions of mode
Symmetric structure of field algebra of G-spin models determined by a normal subgroup
Xin, Qiaoling; Jiang, Lining
2014-09-15
Let G be a finite group and H a normal subgroup. D(H; G) is the crossed product of C(H) and CG, which is only a subalgebra of D(G), the double algebra of G. One can construct a C*-subalgebra F{sub H} of the field algebra F of G-spin models, so that F{sub H} is a D(H; G)-module algebra, whereas F is not. Then the observable algebra A{sub (H,G)} is obtained as the D(H; G)-invariant subalgebra of F{sub H}, and there exists a unique C*-representation of D(H; G) such that D(H; G) and A{sub (H,G)} are commutants with each other.
Models of ensemble firing of muscle spindle afferents recorded during normal locomotion in cats
Prochazka, Arthur; Gorassini, Monica
1998-01-01
The aim of this work was to compare the ability of several mathematical models to predict the firing characteristics of muscle spindle primary afferents recorded chronically during normal stepping in cats. Ensemble firing profiles of nine hamstring spindle primary (presumed group Ia) afferents were compiled from stored data from 132 step cycles. Three sets of profiles corresponding to slow, medium and fast steps were generated by averaging groups of step cycles aligned to peak muscle length in each cycle. Five models obtained from the literature were compared. Each of these models was used to predict the spindle firing profiles from the averaged muscle length signals. The models were also used in the reverse direction, namely to predict muscle length from the firing profiles. A sixth model incorporating some key aspects of the other models was also included in the comparisons. Five of the models predicted spindle firing well, with root mean square (r.m.s.) errors lower than 14% of the modulation depth of the target profiles. The key variable in achieving good predictions was muscle velocity, the best fits being obtained with power-law functions of velocity, with an exponent of 0.5 or 0.6 (i.e. spindle firing rate is approximately proportional to the square root of muscle velocity). The fits were slightly improved by adding small components of EMG signal to mimic fusimotor action linked to muscle activation. The modest relative size of EMG-linked fusimotor action may be related to the fact that hamstring muscles are not strongly recruited in stepping. Length was predicted very accurately from firing profiles with the inverse of the above models, indicating that the nervous system could in principle process spindle firing in a relatively simple way to give accurate information on muscle length. The responses of the models to standard ramp-and-hold displacements at 10 mm s−1 were also studied (i.e. velocities that were an order of magnitude lower than that during
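A minimal version of the power-law velocity dependence described above can be written directly. The coefficients below are invented for illustration (the fitted models also include EMG-linked fusimotor terms); only the exponent of about 0.5-0.6 comes from the abstract.

```python
import numpy as np

def spindle_rate(length, velocity, r0=50.0, k_len=2.0, k_vel=4.3, p=0.6):
    """Sketch of a power-law spindle model: firing rate (impulses/s) as a
    baseline plus a length term plus a signed power-law velocity term.
    With exponent p ~ 0.5, rate is roughly proportional to the square root
    of muscle velocity.  All coefficients here are illustrative."""
    return r0 + k_len * length + k_vel * np.sign(velocity) * np.abs(velocity) ** p
```

Because the exponent is below one, the model responds strongly to small velocity changes and compresses large ones, which is the behaviour the power-law fits above capture.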
The probability distribution of the predicted CFM-induced ozone depletion. [Chlorofluoromethane
NASA Technical Reports Server (NTRS)
Ehhalt, D. H.; Chang, J. S.; Bulter, D. M.
1979-01-01
It is argued from the central limit theorem that the uncertainty in model-predicted changes of the ozone column density is best represented by a normal probability density distribution. This conclusion is validated by comparison with a probability distribution generated by a Monte Carlo technique. In the case of the CFM-induced ozone depletion, and based on the estimated uncertainties in the reaction rate coefficients alone, the relative mean standard deviation of this normal distribution is estimated to be 0.29.
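The central-limit argument can be illustrated with a toy Monte Carlo, sketched below. This is a stand-in, not the paper's photochemical model: the number of rate coefficients and their spread are invented, and the point is only that a prediction driven by many independent multiplicative uncertainties tends toward a (log)normal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the predicted depletion is a product of many uncertain
# rate-coefficient factors, so its logarithm is a sum of many independent
# terms and is approximately normal by the central limit theorem.
n_rates, n_trials = 20, 100_000
log_factors = rng.normal(0.0, 0.06, size=(n_trials, n_rates))
prediction = np.exp(log_factors.sum(axis=1))    # relative depletion factor

rel_std = prediction.std() / prediction.mean()  # compare to a fitted normal
```

With these made-up spreads the relative standard deviation comes out in the tens of percent, the same order as the 0.29 quoted above, though the agreement is incidental.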
A PHYSICAL MODEL FOR SN 2001ay, A NORMAL, BRIGHT, EXTREMELY SLOW DECLINING TYPE Ia SUPERNOVA
Baron, E.; Hoeflich, P.; Krisciunas, K.; Suntzeff, N.; Wang, L.; Dominguez, I.; Phillips, M. M. E-mail: pah@astro.physics.fsu.edu E-mail: suntzeff@physics.tamu.edu E-mail: inma@ugr.es E-mail: mmp@lcoeps1.lco.cl
2012-07-10
We present a study of the peculiar Type Ia supernova 2001ay (SN 2001ay). The defining features of its peculiarity are high velocity, broad lines, and a fast rising light curve, combined with the slowest known rate of decline. It is one magnitude dimmer than would be predicted from its observed {Delta}m{sub 15}, and shows broad spectral features. We base our analysis on detailed calculations for the explosion, light curves, and spectra. We demonstrate that consistency is key for both validating the models and probing the underlying physics. We show that this SN can be understood within the physics underlying the {Delta}m{sub 15} relation, and in the framework of pulsating delayed detonation models originating from a Chandrasekhar mass, M{sub Ch}, white dwarf, but with a progenitor core composed of 80% carbon. We suggest a possible scenario for stellar evolution which leads to such a progenitor. We show that the unusual light curve decline can be understood with the same physics as has been used to understand the {Delta}m{sub 15} relation for normal SNe Ia. The decline relation can be explained by a combination of the temperature dependence of the opacity and excess or deficit of the peak luminosity, {alpha}, measured relative to the instantaneous rate of radiative decay energy generation. What differentiates SN 2001ay from normal SNe Ia is a higher explosion energy which leads to a shift of the {sup 56}Ni distribution toward higher velocity and {alpha} < 1. This result is responsible for the fast rise and slow decline. We define a class of SN 2001ay-like SNe Ia, which will show an anti-Phillips relation.
A novel distributed model of the heart under normal and congestive heart failure conditions.
Ravanshadi, Samin; Jahed, Mehran
2013-04-01
Conventional models of cardiovascular system frequently lack required detail and focus primarily on the overall relationship between pressure, flow and volume. This study proposes a localized and regional model of the cardiovascular system. It utilizes noninvasive blood flow and pressure seed data and temporal cardiac muscle regional activity to predict the operation of the heart under normal and congestive heart failure conditions. The analysis considers specific regions of the heart, namely, base, mid and apex of left ventricle. The proposed method of parameter estimation for hydraulic electric analogy model is recursive least squares algorithm. Based on simulation results and comparison to clinical data, effect of congestive heart failure in the heart is quantified. Accumulated results for simulated ejection fraction percentage of the apex, mid and base regions of the left ventricle in congestive heart failure condition were 39 ± 6, 36 ± 9 and 38 ± 8, respectively. These results are shown to satisfactorily match those found through clinical measurements. The proposed analytical method can in effect be utilized as a preclinical and predictive tool for high-risk heart patients and candidates for heart transplant, assistive device and total artificial heart. PMID:23637212
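The recursive least squares estimator named above can be sketched in a few lines. This is the textbook RLS recursion, not the authors' implementation, and the hydraulic-electric analogy model itself is not reproduced; the forgetting factor and the test model in the usage are assumptions.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step for the model y ~= phi @ theta.

    theta : current parameter estimate, shape (n,)
    P     : current inverse-information matrix, shape (n, n)
    phi   : regressor vector for this sample, shape (n,)
    y     : measured output (scalar)
    lam   : forgetting factor (1.0 = ordinary RLS; < 1 discounts old data)
    """
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # correct by the prediction error
    P = (P - np.outer(k, phi @ P)) / lam
    return theta, P
```

Feeding in regressor/output pairs one at a time drives `theta` toward the least-squares fit without ever forming or inverting the full normal equations, which is why RLS suits streaming physiological data.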
A quasi-normal scale elimination model of turbulence and its application to stably stratified flows
NASA Astrophysics Data System (ADS)
Sukoriansky, S.; Galperin, B.; Perov, V.
2006-02-01
Models of planetary, atmospheric and oceanic circulation involve eddy viscosity and eddy diffusivity, KM and KH, that account for unresolved turbulent mixing and diffusion. The most sophisticated turbulent closure models used today for geophysical applications belong in the family of the Reynolds stress models. These models are formulated for the physical space variables; they consider a hierarchy of turbulent correlations and employ a rational way of its truncation. In the process, unknown correlations are related to the known ones via "closure assumptions" that are based upon physical plausibility, preservation of tensorial properties, and the principle of the invariant modeling according to which the constants in the closure relationships are universal. Although a great deal of progress has been achieved with Reynolds stress closure models over the years, there are still situations in which these models fail. The most difficult flows for the Reynolds stress modeling are those with anisotropy and waves because these processes are scale-dependent and cannot be included in the closure assumptions that pertain to ensemble-averaged quantities. Here, we develop an alternative approach of deriving expressions for KM and KH using the spectral space representation and employing a self-consistent, quasi-normal scale elimination (QNSE) algorithm. More specifically, the QNSE procedure is based upon the quasi-Gaussian mapping of the velocity and temperature fields using the Langevin equations. Turbulence and waves are treated as one entity and the effect of the internal waves is easily identifiable. This model implies partial averaging and, thus, is scale-dependent; it allows one to easily introduce into consideration such parameters as the grid resolution, the degree of the anisotropy, and spectral characteristics, among others. Applied to turbulent flows affected by anisotropy and waves, the method traces turbulence anisotropization and shows how the dispersion
Source models of great earthquakes from ultra low-frequency normal mode data
NASA Astrophysics Data System (ADS)
Lentas, K.; Ferreira, A. M. G.; Clévédé, E.; Roch, J.
2014-08-01
We present a new earthquake source inversion technique based on normal mode data for the simultaneous determination of the rupture duration, length and moment tensor of large earthquakes with unilateral rupture. We use ultra low-frequency (f < 1 mHz) mode singlets and multiplets which are modelled using Higher Order Perturbation Theory (HOPT), taking into account the Earth's rotation, ellipticity and lateral heterogeneities. A Monte Carlo exploration of the model space is carried out, enabling the assessment of source parameter tradeoffs and uncertainties. We carry out synthetic tests to investigate errors in the source inversions due to: (i) unmodelled 3-D Earth structure; (ii) noise in the data; (iii) uncertainties in spatio-temporal earthquake location; and, (iv) neglecting the source finiteness in point source inversions. We find that unmodelled 3-D structure is the most serious source of errors for rupture duration and length determinations especially for the lowest magnitude events. The errors in moment magnitude and fault mechanism are generally small, with the rake angle showing systematically larger errors (up to 20°). We then investigate five real thrust earthquakes (Mw ⩾ 8.5) and one deep event: (i) Sumatra-Andaman (26th December 2004); (ii) Nias, Sumatra (28th March 2005); (iii) Bengkulu (12th September 2007); (iv) Tohoku, Japan (11th March 2011); (v) Maule, Chile (27th February 2010); and, (vi) the 24 May 2013 Mw 8.3 Okhotsk Sea, Russia, deep (607 km) event. While finite source inversions for rupture length, duration, magnitude and fault mechanism are possible for the Sumatra-Andaman and Tohoku events, for all the other events their lower magnitudes only allow stable point source inversions of mode multiplets. We obtain the first normal mode finite source model for the 2011 Tohoku earthquake, which yields a fault length of 461 km, a rupture duration of 151 s, and hence an average rupture velocity of 3.05 km/s, giving an independent confirmation of the compact nature of
Conflict Probability Estimation for Free Flight
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Heinz
1996-01-01
The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction, however, and becomes less certain the farther in advance the prediction is made. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
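The covariance-combination step described above is easy to sketch. The fragment below combines the two error covariances into a single relative-position covariance and then estimates the conflict probability by Monte Carlo over the conflict disk, rather than by the paper's analytical coordinate-transformation solution; the dimensions and numbers in any usage are hypothetical.

```python
import numpy as np

def conflict_probability(mu_rel, cov1, cov2, sep_radius, n=200_000, seed=0):
    """Estimate the probability that two aircraft violate a separation
    radius, assuming independent, normally distributed prediction errors.

    mu_rel     : predicted relative position at closest approach, shape (2,)
    cov1, cov2 : 2x2 position-error covariances of the two aircraft
    sep_radius : required separation distance
    """
    rng = np.random.default_rng(seed)
    cov = cov1 + cov2                   # covariance of the relative position
    samples = rng.multivariate_normal(mu_rel, cov, size=n)
    return float(np.mean(np.linalg.norm(samples, axis=1) < sep_radius))
```

For example, a predicted miss distance well outside the combined error ellipse yields a probability near zero, while a head-on geometry yields a probability near one; the analytical solution in the paper replaces the sampling step with a closed-form integral.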
Estimation of transition probabilities of credit ratings
NASA Astrophysics Data System (ADS)
Peng, Gan Chew; Hin, Pooi Ah
2015-12-01
The present research is based on the quarterly credit ratings of ten companies over 15 years, taken from the database of the Taiwan Economic Journal. The components of the vector mi = (mi1, mi2, ..., mi10) denote the credit ratings of the ten companies in the i-th quarter. The vector mi+1 in the next quarter is modelled as dependent on the vector mi via a conditional distribution derived from a 20-dimensional power-normal mixture distribution. The transition probability Pkl(i, j) of obtaining mi+1,j = l given that mi,j = k is then computed from the conditional distribution. It is found that the variation of the transition probability Pkl(i, j) as i varies gives an indication of the possible transition of the credit rating of the j-th company in the near future.
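For comparison, transition probabilities of this kind can also be estimated by simple frequency counts, as sketched below. This is a plain maximum-likelihood counter, not the paper's 20-dimensional power-normal mixture approach, and the integer coding of the rating classes is an assumption.

```python
import numpy as np

def transition_matrix(ratings, n_states):
    """Maximum-likelihood transition probabilities P[k, l], estimated by
    counting quarter-to-quarter rating moves k -> l.

    ratings : 2-D array, ratings[i, j] = rating of company j in quarter i,
              coded as integers 0 .. n_states - 1
    """
    counts = np.zeros((n_states, n_states))
    for j in range(ratings.shape[1]):           # each company
        for i in range(ratings.shape[0] - 1):   # each quarter transition
            counts[ratings[i, j], ratings[i + 1, j]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                       # avoid 0/0 for unseen states
    return counts / rows
```

Unlike the mixture-distribution model above, this estimator pools all quarters into a single stationary matrix, so it cannot track how Pkl(i, j) drifts over time.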
NASA Technical Reports Server (NTRS)
Pierce, R. B.; Johnson, Donald R.; Reames, Fred M.; Zapotocny, Tom H.; Wolf, Bart J.
1991-01-01
The normal-mode characteristics of baroclinically amplifying disturbances were numerically investigated in a series of adiabatic simulations by a hybrid isentropic-sigma model, demonstrating the effect of coupling an isentropic-coordinate free atmospheric domain with a sigma-coordinate PBL on the normal-mode characteristics. Next, the normal-mode model was modified by including a transport equation for water vapor and adiabatic heating by condensation. Simulations with and without a hydrological component showed that the overall effect of latent heat release is to markedly enhance cyclogenesis and frontogenesis.
Predicting loss exceedance probabilities for US hurricane landfalls
NASA Astrophysics Data System (ADS)
Murnane, R.
2003-04-01
The extreme winds, rains, and floods produced by landfalling hurricanes kill and injure many people and cause severe economic losses. Many business, planning, and emergency management decisions are based on the probability of hurricane landfall and associated emergency management considerations; however, the expected total economic and insured losses also are important. Insured losses generally are assumed to be half the total economic loss from hurricanes in the United States. Here I describe a simple model that can be used to estimate deterministic and probabilistic exceedance probabilities for insured losses associated with landfalling hurricanes along the US coastline. The model combines wind speed exceedance probabilities with loss records from historical hurricanes striking land. The wind speed exceedance probabilities are based on the HURDAT best track data and use the storm’s maximum sustained wind just prior to landfall. The loss records are normalized to present-day values using a risk model and account for historical changes in inflation, population, housing stock, and other factors. Analysis of the correlation between normalized losses and a number of storm-related parameters suggests that the most relevant, statistically significant predictor for insured loss is the storm’s maximum sustained wind at landfall. Insured loss exceedance probabilities thus are estimated using a linear relationship between the log of the maximum sustained winds and normalized insured loss. Model estimates for insured losses from Hurricanes Isidore (US$45 million) and Lili (US$275 million) compare well with loss estimates from more sophisticated risk models and recorded losses. The model can also be used to estimate how exceedance probabilities for insured loss vary as a function of the North Atlantic Oscillation and the El Niño-Southern Oscillation.
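The regression-plus-exceedance logic described above can be sketched as follows. All wind and loss numbers are invented, the log-log form is one plausible reading of the stated fit, and none of this is the author's code or the HURDAT data.

```python
import numpy as np

# Hypothetical historical sample: max sustained wind at landfall (kt) and
# normalized insured loss ($bn).  Values are made up for the sketch.
winds = np.array([85.0, 95.0, 105.0, 115.0, 130.0, 145.0])
losses = np.array([0.2, 0.5, 1.1, 2.3, 6.0, 14.0])

# Fit log(loss) = a + b * log(wind); np.polyfit returns [slope, intercept].
b, a = np.polyfit(np.log(winds), np.log(losses), 1)

def wind_for_loss(loss):
    """Wind speed whose fitted loss equals `loss` (invert the regression)."""
    return np.exp((np.log(loss) - a) / b)

# Given a wind exceedance curve P(wind > v) from landfall climatology, the
# loss exceedance curve follows as P(loss > L) = P(wind > wind_for_loss(L)).
```

Inverting the regression is what converts a climatological wind exceedance probability into a loss exceedance probability, which is the core of the model described above.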
NASA Technical Reports Server (NTRS)
Crutcher, H. L.; Falls, L. W.
1976-01-01
Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
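The univariate chi-square test described above can be sketched without any statistics library. The ±3-sigma span of the interior bin edges and the bin count are arbitrary illustrative choices, and two degrees of freedom are subtracted for the estimated mean and standard deviation.

```python
import math
import numpy as np

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def chi_square_normality(data, n_bins=10):
    """Pearson chi-square statistic for a fitted-normal goodness-of-fit test.

    Interior bin edges span mu +/- 3 sigma with open outer bins; expected
    counts come from the fitted normal CDF.  Returns (chi2, dof); compare
    chi2 to the chi-square critical value at dof degrees of freedom.
    """
    mu, sigma = data.mean(), data.std(ddof=1)
    edges = np.linspace(mu - 3 * sigma, mu + 3 * sigma, n_bins - 1)
    edges = np.concatenate(([-np.inf], edges, [np.inf]))
    observed, _ = np.histogram(data, bins=edges)
    cdf = np.array([normal_cdf(e, mu, sigma) for e in edges])
    expected = len(data) * np.diff(cdf)
    chi2 = float(((observed - expected) ** 2 / expected).sum())
    dof = len(observed) - 1 - 2          # minus estimated mean and sigma
    return chi2, dof
```

Normal samples give a statistic near the degrees of freedom, while clearly non-normal samples (e.g. uniform data) inflate it far past any reasonable critical value, which is the decision the abstract's tables support.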
Kalinich, Donald A.; Sallaberry, Cedric M.; Mattie, Patrick D.
2010-12-01
For the U.S. Nuclear Regulatory Commission (NRC) Extremely Low Probability of Rupture (xLPR) pilot study, Sandia National Laboratories (SNL) was tasked to develop and evaluate a probabilistic framework using a commercial software package for Version 1.0 of the xLPR Code. Version 1.0 of the xLPR code is focused on assessing the probability of rupture due to primary water stress corrosion cracking in dissimilar metal welds in pressurizer surge nozzles. Future versions of this framework will expand the capabilities to other cracking mechanisms and other piping systems for both pressurized water reactors and boiling water reactors. The goal of the pilot study is to plan the xLPR framework transition from Version 1.0 to Version 2.0; hence the initial Version 1.0 framework and code development will be used to define the requirements for Version 2.0. The software documented in this report has been developed and tested solely for this purpose. This framework and demonstration problem will be used to evaluate the commercial software's capabilities and applicability for use in creating the final version of the xLPR framework. This report details the design, system requirements, and the steps necessary to use the commercial-code-based xLPR framework developed by SNL.
NASA Astrophysics Data System (ADS)
Korb, Andrew R.; Grossman, Stanley I.
2015-05-01
A model was developed to understand the effects of spatial resolution and Signal to Noise ratio on the detection and tracking performance of wide-field, diffraction-limited electro-optic and infrared motion imagery systems. False positive detection probability and false positive rate per frame were calculated as a function of target-to-background contrast and object size. Results showed that moving objects are fundamentally more difficult to detect than stationary objects because SNR for fixed objects increases and false positive probability detection rates diminish rapidly with successive frames whereas for moving objects the false detection rate remains constant or increases with successive frames. The model specifies that the desired performance of a detection system, measured by the false positive detection rate, can be achieved by image system designs with different combinations of SNR and spatial resolution, usually requiring several pixels resolving the object; this capability to tradeoff resolution and SNR enables system design trades and cost optimization. For operational use, detection thresholds required to achieve a particular false detection rate can be calculated. Interestingly, for moderate size images the model converges to the Johnson Criteria. Johnson found that an imaging system with an SNR >3.5 has a probability of detection >50% when the resolution on the object is 4 pixels or more. Under these conditions our model finds the false positive rate is less than one per hundred image frames, and the ratio of the probability of object detection to false positive detection is much greater than one. The model was programmed into Matlab to generate simulated images frames for visualization.
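The threshold versus false-positive tradeoff described above can be sketched for the simplest case of a per-pixel threshold test in zero-mean Gaussian noise. The frame size and target rates below are hypothetical, and real detectors add matched filtering, clutter, and frame-to-frame integration terms that this sketch omits.

```python
import math

def false_positives_per_frame(threshold_snr, n_pixels):
    """Expected false detections per frame for a pixel-wise threshold test
    in zero-mean Gaussian noise, assuming independent pixels.

    threshold_snr : detection threshold in units of the noise sigma
    """
    p_fa = 0.5 * math.erfc(threshold_snr / math.sqrt(2))  # Gaussian tail
    return n_pixels * p_fa

def threshold_for_rate(target_per_frame, n_pixels, lo=0.0, hi=10.0):
    """Threshold (in sigmas) that achieves the target false-positive rate,
    found by bisection on the monotonically decreasing function above."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if false_positives_per_frame(mid, n_pixels) > target_per_frame:
            lo = mid            # too many false alarms: raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At the Johnson-style SNR of 3.5 a 256x256 frame produces on the order of ten false alarms per frame under this model, which is why a system designer trades threshold, resolution, and SNR against a specified false-detection rate.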
Hays, M.T.; Broome, M.R.; Turrel, J.M.
1988-06-01
A comprehensive multicompartmental kinetic model was developed to account for the distribution and metabolism of simultaneously injected radioactive iodide (iodide*), T3 (T3*), and T4 (T4*) in six normal and seven spontaneously hyperthyroid cats. Data from plasma samples (analyzed by HPLC), urine, feces, and thyroid accumulation were incorporated into the model. The submodels for iodide*, T3*, and T4* all included both a fast and a slow exchange compartment connecting with the plasma compartment. The best-fit iodide* model also included a delay compartment, presumed to be pooling of gastrosalivary secretions. This delay was 62% longer in the hyperthyroid cats than in the euthyroid cats. Unexpectedly, all of the exchange parameters for both T4 and T3 were significantly slowed in hyperthyroidism, possibly because the hyperthyroid cats were older. None of the plasma equivalent volumes of the exchange compartments of iodide*, T3*, or T4* was significantly different in the hyperthyroid cats, although the plasma equivalent volume of the fast T4 exchange compartments were reduced. Secretion of recycled T4* from the thyroid into the plasma T4* compartment was essential to model fit, but its quantity could not be uniquely identified in the absence of multiple thyroid data points. Thyroid secretion of T3* was not detectable. Comparing the fast and slow compartments, there was a shift of T4* deiodination into the fast exchange compartment in hyperthyroidism. Total body mean residence times (MRTs) of iodide* and T3* were not affected by hyperthyroidism, but mean T4* MRT was decreased 23%. Total fractional T4 to T3 conversion was unchanged in hyperthyroidism, although the amount of T3 produced by this route was increased nearly 5-fold because of higher concentrations of donor stable T4.
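A heavily reduced sketch of the compartmental machinery is shown below: one plasma pool exchanging with a single extra compartment plus irreversible loss, stepped with explicit Euler. The paper's model has many more compartments (fast and slow exchange, a secretion delay, thyroid recycling), and all rate constants here are invented.

```python
import numpy as np

def simulate(k12, k21, k10, q0=1.0, dt=0.01, t_end=24.0):
    """Two-compartment tracer kinetics dq/dt = K @ q.

    k12 : plasma -> exchange compartment rate
    k21 : exchange compartment -> plasma rate
    k10 : irreversible loss from plasma (excretion/metabolism)
    Returns the trajectory of [plasma, exchange] amounts over time.
    """
    q = np.array([q0, 0.0])
    K = np.array([[-(k12 + k10), k21],
                  [k12, -k21]])
    out = [q.copy()]
    for _ in range(int(t_end / dt)):
        q = q + dt * (K @ q)       # explicit Euler step
        out.append(q.copy())
    return np.array(out)
```

Fitting the rate constants of a model like this to plasma, urine, and fecal tracer curves is what yields quantities such as exchange rates and mean residence times, the kinds of parameters compared between euthyroid and hyperthyroid groups above.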
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex; Trawick, David
1991-01-01
The purpose was to develop a speech recognition system able to detect speech that is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. This provides better mechanisms for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was obtained. In continuous speech, the system was tested to provide above 80 pct. correct acceptance of words, while correctly rejecting over 80 pct. of incorrectly pronounced words.
Modeling of band-3 protein diffusion in the normal and defective red blood cell membrane.
Li, He; Zhang, Yihao; Ha, Vi; Lykotrafitis, George
2016-04-13
We employ a two-component red blood cell (RBC) membrane model to simulate lateral diffusion of band-3 proteins in the normal RBC and in the RBC with defective membrane proteins. The defects reduce the connectivity between the lipid bilayer and the membrane skeleton (vertical connectivity), or the connectivity of the membrane skeleton itself (horizontal connectivity), and are associated with the blood disorders of hereditary spherocytosis (HS) and hereditary elliptocytosis (HE) respectively. Initially, we demonstrate that the cytoskeleton limits band-3 lateral mobility by measuring the band-3 macroscopic diffusion coefficients in the normal RBC membrane and in a lipid bilayer without the cytoskeleton. Then, we study band-3 diffusion in the defective RBC membrane and quantify the relation between band-3 diffusion coefficients and percentage of protein defects in HE RBCs. In addition, we illustrate that at low spectrin network connectivity (horizontal connectivity) band-3 subdiffusion can be approximated as anomalous diffusion, while at high horizontal connectivity band-3 diffusion is characterized as confined diffusion. Our simulations show that the band-3 anomalous diffusion exponent depends on the percentage of protein defects in the membrane cytoskeleton. We also confirm that the introduction of attraction between the lipid bilayer and the spectrin network reduces band-3 diffusion, but we show that this reduction is lower than predicted by the percolation theory. Furthermore, we predict that the attractive force between the spectrin filament and the lipid bilayer is at least 20 times smaller than the binding forces at band-3 and glycophorin C, the two major membrane binding sites. Finally, we explore diffusion of band-3 particles in the RBC membrane with defects related to vertical connectivity. We demonstrate that in this case band-3 diffusion can be approximated as confined diffusion for all attraction levels between the spectrin network and the lipid bilayer
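The distinction drawn above between anomalous (subdiffusive) and confined diffusion is usually quantified through the mean squared displacement. The sketch below estimates the exponent from a single trajectory; the lag range is an illustrative choice, and this is not the authors' analysis code.

```python
import numpy as np

def msd(traj):
    """Time-averaged mean squared displacement of a 2-D trajectory with
    shape (n_steps, 2), for lags 1 .. n_steps // 20."""
    lags = np.arange(1, len(traj) // 20)
    m = [np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags]
    return lags, np.array(m)

def diffusion_exponent(traj):
    """Exponent alpha from the log-log slope of MSD(lag) ~ lag**alpha:
    alpha = 1 for normal diffusion, alpha < 1 for subdiffusion (e.g. band-3
    hindered by the spectrin network); a saturating MSD drives alpha toward
    zero, the signature of confined diffusion."""
    lags, m = msd(traj)
    alpha, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return float(alpha)
```

A free random walk yields an exponent near one, while a walk tethered to a fixed region yields a much smaller exponent, mirroring the confined versus anomalous regimes reported above at high and low cytoskeletal connectivity.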
Sivagnanalingam, Umayal; Balys, Marlene; Eberhardt, Allison; Wang, Nancy; Myers, Jason R.; Ashton, John M.; Becker, Michael W.; Calvi, Laura M.; Mendler, Jason H.
2015-01-01
Cytogenetically normal acute myeloid leukemia (CN-AML) patients harboring RUNX1 mutations have a dismal prognosis with anthracycline/cytarabine-based chemotherapy. We aimed to develop an in vivo model of RUNX1-mutated, CN-AML in which the nature of residual disease in this molecular disease subset could be explored. We utilized a well-characterized patient-derived, RUNX1-mutated CN-AML line (CG-SH). Tail vein injection of CG-SH into NOD scid gamma mice led to leukemic engraftment in the bone marrow, spleen, and peripheral blood within 6 weeks. Treatment of leukemic mice with anthracycline/cytarabine-based chemotherapy resulted in clearance of disease from the spleen and peripheral blood, but persistence of disease in the bone marrow as assessed by flow cytometry and secondary transplantation. Whole exome sequencing of CG-SH revealed mutations in ASXL1, CEBPA, GATA2, and SETBP1, not previously reported. We conclude that CG-SH xenografts are a robust, reproducible in vivo model of CN-AML in which to explore mechanisms of chemotherapy resistance and novel therapeutic approaches. PMID:26177509
Parsons, Matthew P; Vanni, Matthieu P; Woodard, Cameron L; Kang, Rujun; Murphy, Timothy H; Raymond, Lynn A
2016-01-01
It has become well accepted that Huntington disease (HD) is associated with impaired glutamate uptake, resulting in a prolonged time-course of extracellular glutamate that contributes to excitotoxicity. However, the data supporting this view come largely from work in synaptosomes, which may overrepresent nerve-terminal uptake over astrocytic uptake. Here, we quantify real-time glutamate dynamics in HD mouse models by high-speed imaging of an intensity-based glutamate-sensing fluorescent reporter (iGluSnFR) and electrophysiological recordings of synaptically activated transporter currents in astrocytes. These techniques reveal a disconnect between the results obtained in synaptosomes and those in situ. Exogenous glutamate uptake is impaired in synaptosomes, whereas real-time measures of glutamate clearance in the HD striatum are normal or even accelerated, particularly in the aggressive R6/2 model. Our results highlight the importance of quantifying glutamate dynamics under endogenous release conditions, and suggest that the widely cited uptake impairment in HD does not contribute to pathogenesis. PMID:27052848
Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models
Dai, Jin; Liu, Xin
2014-01-01
The similarity between objects is a core research area of data mining. To reduce interference from the uncertainty of natural language, a similarity measurement between normal cloud models is applied to text classification. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed, which can efficiently convert between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative concepts extracted from texts of the same category are "jumped up" into a whole-category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparison with other text classifiers across different feature selection sets shows that CCJU-TC not only adapts well to different text features but also outperforms traditional classifiers. PMID:24711737
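A normal cloud model is summarized by three numerical characteristics (Ex, En, He): expectation, entropy, and hyper-entropy. The final classification step described above, assigning a test text to the category whose concept cloud is most similar, can be sketched as follows. The Gaussian-overlap similarity used here is an illustrative stand-in, not the paper's measure, and the category clouds are invented:

```python
import math

def cloud_similarity(c1, c2):
    """Illustrative similarity between two normal cloud models, each given
    as (Ex, En, He). This compares only the expectation curves
    y = exp(-(x - Ex)^2 / (2*En^2)); hyper-entropy He is ignored in this
    simplified stand-in for the paper's measure."""
    ex1, en1, _ = c1
    ex2, en2, _ = c2
    return math.exp(-((ex1 - ex2) ** 2) / (2 * (en1 + en2) ** 2))

def classify(text_cloud, category_clouds):
    """Assign the test text to the most similar category concept cloud."""
    return max(category_clouds,
               key=lambda name: cloud_similarity(text_cloud, category_clouds[name]))

# Hypothetical whole-category concept clouds and a test-text cloud:
categories = {"sports": (0.8, 0.1, 0.01), "finance": (0.2, 0.1, 0.01)}
print(classify((0.75, 0.12, 0.02), categories))  # prints sports
```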
Frič, Roman; Papčo, Martin
2015-12-01
Domains of generalized probability have been introduced in order to provide a general construction of random events, observables and states. It is based on the notion of a cogenerator and the properties of product. We continue our previous study and show how some other quantum structures fit our categorical approach. We discuss how various epireflections implicitly used in the classical probability theory are related to the transition to fuzzy probability theory and describe the latter probability theory as a genuine categorical extension of the former. We show that the IF-probability can be studied via the fuzzy probability theory. We outline a "tensor modification" of the fuzzy probability theory.
Not Available
1988-01-01
The workshop on Models to Estimate Military System P_E (probability of effect) due to Incident Radio Frequency (RF) Energy was convened by Dr. John M. MacCallum, OUSDA (R&AT/EST), to assess the current state of the art and to evaluate the adequacy of ongoing effects assessment efforts to estimate P_E. Approximately fifty people from government, industry, and academia attended the meeting. Specifically, the workshop addressed the following: (1) current status of operations research models for assessing probability of effect (P_E) for red and blue mission analyses; (2) the main overall approaches for evaluating P_E's; (3) sources of uncertainty and ways P_E's could be credibly derived from the existing data base; and (4) the adequacy of the present framework of a national HPM assessment methodology for evaluation of P_E's credibility for future systems. 9 figs.
Ballard, P G; Bean, N G; Ross, J V
2016-03-21
Epidemic fade-out refers to infection elimination in the trough between the first and second waves of an outbreak. The number of infectious individuals drops to a relatively low level between these waves of infection, and if elimination does not occur at this stage, then the disease is likely to become endemic. For this reason, it appears to be an ideal target for control efforts. Despite this obvious public health importance, the probability of epidemic fade-out is not well understood. Here we present new algorithms for approximating the probability of epidemic fade-out for the Markovian SIR model with demography. These algorithms are more accurate than previously published formulae, and one of them scales well to large population sizes. This method allows us to investigate the probability of epidemic fade-out as a function of the effective transmission rate, recovery rate, population turnover rate, and population size. We identify an interesting feature: the probability of epidemic fade-out is very often greatest when the basic reproduction number, R0, is approximately 2 (restricting consideration to cases where a major outbreak is possible, i.e., R0>1). The public health implication is that there may be instances where a non-lethal infection should be allowed to spread, or antiviral usage should be moderated, to maximise the chance of the infection being eliminated before it becomes endemic. PMID:26796227
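A naive way to explore the fade-out behaviour described above is direct Gillespie simulation of the Markovian SIR model with demography. The sketch below is such a Monte Carlo proxy, not the paper's algorithms (which are analytical approximations and far more accurate); the fade-out criterion, thresholds, and parameter values are illustrative assumptions:

```python
import random

def fadeout_probability(r0, gamma=1.0, mu=0.05, n0=100,
                        runs=200, max_events=5000, seed=1):
    """Monte Carlo proxy for the probability of epidemic fade-out in the
    Markovian SIR model with demography (Gillespie simulation). A run counts
    as fade-out if infection goes extinct after a major first wave (peak
    prevalence >= 10) within the event budget; runs still infected at the
    budget are treated as endemic. Illustrative sketch only."""
    beta = r0 * gamma
    rng = random.Random(seed)
    fadeouts = major = 0
    for _ in range(runs):
        s, i, peak = n0 - 1, 1, 1
        for _ in range(max_events):
            if i == 0:
                break
            rates = (beta * s * i / n0,  # new infection
                     gamma * i,          # recovery
                     mu * n0,            # birth of a susceptible
                     mu * s,             # death of a susceptible
                     mu * i)             # death of an infective
            r = rng.random() * sum(rates)
            if r < rates[0]:
                s, i = s - 1, i + 1
            elif r < rates[0] + rates[1]:
                i -= 1
            elif r < rates[0] + rates[1] + rates[2]:
                s += 1
            elif r < rates[0] + rates[1] + rates[2] + rates[3]:
                s -= 1
            else:
                i -= 1
            if i > peak:
                peak = i
        if peak >= 10:          # a major outbreak occurred...
            major += 1
            if i == 0:          # ...and the infection then died out
                fadeouts += 1
    return fadeouts / max(major, 1)

# Probe near R0 = 2, where the paper finds fade-out is often most likely:
print(fadeout_probability(2.0))
```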
Cichlid fishes as a model to understand normal and clinical craniofacial variation.
Powder, Kara E; Albertson, R Craig
2016-07-15
We have made great strides towards understanding the etiology of craniofacial disorders, especially for 'simple' Mendelian traits. However, the facial skeleton is a complex trait, and the full spectrum of genetic, developmental, and environmental factors that contribute to its final geometry remain unresolved. Forward genetic screens are constrained with respect to complex traits due to the types of genes and alleles commonly identified, developmental pleiotropy, and limited information about the impact of environmental interactions. Here, we discuss how studies in an evolutionary model - African cichlid fishes - can complement traditional approaches to understand the genetic and developmental origins of complex shape. Cichlids exhibit an unparalleled range of natural craniofacial morphologies that model normal human variation, and in certain instances mimic human facial dysmorphologies. Moreover, the evolutionary history and genomic architecture of cichlids make them an ideal system to identify the genetic basis of these phenotypes via quantitative trait loci (QTL) mapping and population genomics. Given the molecular conservation of developmental genes and pathways, insights from cichlids are applicable to human facial variation and disease. We review recent work in this system, which has identified lbh as a novel regulator of neural crest cell migration, determined the Wnt and Hedgehog pathways mediate species-specific bone morphologies, and examined how plastic responses to diet modulate adult facial shapes. These studies have not only revealed new roles for existing pathways in craniofacial development, but have identified new genes and mechanisms involved in shaping the craniofacial skeleton. In all, we suggest that combining work in traditional laboratory and evolutionary models offers significant potential to provide a more complete and comprehensive picture of the myriad factors that are involved in the development of complex traits. PMID:26719128
Aldars-García, Laila; Ramos, Antonio J; Sanchis, Vicente; Marín, Sonia
2015-10-01
Human exposure to aflatoxins in foods is of great concern. The aim of this work was to use predictive mycology as a strategy to mitigate the aflatoxin burden in pistachio nuts postharvest. The probability of growth and aflatoxin B1 (AFB1) production of aflatoxigenic Aspergillus flavus, isolated from pistachio nuts, under static and non-isothermal conditions was studied. Four theoretical temperature scenarios, including temperature levels observed in pistachio nuts during shipping and storage, were used. Two types of inoculum were included: a cocktail of 25 A. flavus isolates and a single isolate inoculum. Initial water activity was adjusted to 0.87. Logistic models, with temperature and time as explanatory variables, were fitted to the probability of growth and AFB1 production under a constant temperature. Subsequently, they were used to predict probabilities under non-isothermal scenarios, with levels of concordance from 90 to 100% in most of the cases. Furthermore, the presence of AFB1 in pistachio nuts could be correctly predicted in 70-81% of the cases from a growth model developed in pistachio nuts, and in 67-81% of the cases from an AFB1 model developed in pistachio agar. The information obtained in the present work could be used by producers and processors to predict the time for AFB1 production by A. flavus on pistachio nuts during transport and storage. PMID:26187836
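A logistic model with temperature and time as explanatory variables, of the kind fitted above, has the general form P = 1/(1 + exp(-(b0 + b1*T + b2*t))). A minimal sketch with hypothetical coefficients, not the fitted values from the study:

```python
import math

def p_afb1(temp_c, days, b0=-8.0, b1=0.25, b2=0.35):
    """Logistic probability of AFB1 production after `days` of storage at
    temperature `temp_c` (deg C). The coefficients b0, b1, b2 are
    hypothetical placeholders, not the study's fitted values."""
    z = b0 + b1 * temp_c + b2 * days
    return 1.0 / (1.0 + math.exp(-z))

# Probability rises with both storage temperature and time:
print(round(p_afb1(20, 5), 3))   # prints 0.223
print(round(p_afb1(30, 5), 3))   # prints 0.777
print(round(p_afb1(30, 15), 3))  # prints 0.991
```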
Thoreson, Wallace B; Van Hook, Matthew J; Parmelee, Caitlyn; Curto, Carina
2016-01-01
Postsynaptic responses are a product of quantal amplitude (Q), size of the releasable vesicle pool (N), and release probability (P). Voltage-dependent changes in presynaptic Ca(2+) entry alter postsynaptic responses primarily by changing P but have also been shown to influence N. With simultaneous whole cell recordings from cone photoreceptors and horizontal cells in tiger salamander retinal slices, we measured N and P at cone ribbon synapses by using a train of depolarizing pulses to stimulate release and deplete the pool. We developed an analytical model that calculates the total pool size contributing to release under different stimulus conditions by taking into account the prior history of release and empirically determined properties of replenishment. The model provided a formula that calculates vesicle pool size from measurements of the initial postsynaptic response and limiting rate of release evoked by a train of pulses, the fraction of release sites available for replenishment, and the time constant for replenishment. Results of the model showed that weak and strong depolarizing stimuli evoked release with differing probabilities but the same size vesicle pool. Enhancing intraterminal Ca(2+) spread by lowering Ca(2+) buffering or applying BayK8644 did not increase PSCs evoked with strong test steps, showing there is a fixed upper limit to pool size. Together, these results suggest that light-evoked changes in cone membrane potential alter synaptic release solely by changing release probability. PMID:26541100
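One standard train-based way to estimate the releasable pool N, in the spirit of the analytical model described above, is to back-extrapolate cumulative release after subtracting the replenishment-limited steady state reached late in the train. The sketch below uses that generic method with made-up numbers; it is not the paper's exact formula, which also accounts for the fraction of sites available for replenishment and its time constant:

```python
def pool_size(responses, q, tail=3):
    """Estimate releasable pool size N from a train of postsynaptic responses
    by back-extrapolating cumulative release to the start of the train: the
    steady-state slope over the last `tail` pulses (pure replenishment) is
    subtracted out, and the intercept is divided by quantal amplitude q."""
    cumulative = []
    total = 0.0
    for r in responses:
        total += r
        cumulative.append(total)
    slope = sum(responses[-tail:]) / tail          # replenishment-limited rate
    intercept = cumulative[-1] - slope * len(responses)
    return intercept / q

# Depressing train: early responses deplete the pool, late ones track refill.
responses = [100, 60, 36, 25, 20, 18, 18, 18]
print(round(pool_size(responses, q=2.0)))  # prints 76
```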
Exploring the Overestimation of Conjunctive Probabilities
Nilsson, Håkan; Rieskamp, Jörg; Jenny, Mirjam A.
2013-01-01
People often overestimate probabilities of conjunctive events. The authors explored whether the accuracy of conjunctive probability estimates can be improved by increased experience with relevant constituent events and by using memory aids. The first experiment showed that increased experience with constituent events increased the correlation between the estimated and the objective conjunctive probabilities, but that it did not reduce overestimation of conjunctive probabilities. The second experiment showed that reducing cognitive load with memory aids for the constituent probabilities led to improved estimates of the conjunctive probabilities and to decreased overestimation of conjunctive probabilities. To explain the cognitive process underlying people’s probability estimates, the configural weighted average model was tested against the normative multiplicative model. The configural weighted average model generates conjunctive probabilities that systematically overestimate objective probabilities although the generated probabilities still correlate strongly with the objective probabilities. For the majority of participants this model was better than the multiplicative model in predicting the probability estimates. However, when memory aids were provided
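The two candidate models tested above can be compared directly: the normative multiplicative model for independent events versus a configural weighted average that weights the lower constituent more heavily, so the conjunction estimate lands between the constituents and overshoots the product. The weight value here is a hypothetical illustration, not a fitted parameter from the study:

```python
def multiplicative(p1, p2):
    """Normative probability of the conjunction of two independent events."""
    return p1 * p2

def configural_weighted_average(p1, p2, w_low=0.6):
    """Configural weighted average: the conjunction estimate is a weighted
    average of the constituents, with more weight (w_low > 0.5) on the lower
    one. The weight 0.6 is an illustrative assumption."""
    lo, hi = min(p1, p2), max(p1, p2)
    return w_low * lo + (1 - w_low) * hi

p1, p2 = 0.6, 0.8
print(round(multiplicative(p1, p2), 2))              # prints 0.48
print(round(configural_weighted_average(p1, p2), 2)) # prints 0.68 (overestimate)
```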