Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. The performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: The LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yielded a model as easily interpretable as that of the stepwise method, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
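The variable-selection behaviour that makes LASSO attractive here can be sketched in code. Below is a minimal proximal-gradient (ISTA) implementation of L1-penalised logistic regression on invented synthetic data; the penalty weight, step size, and data are illustrative assumptions, not the study's actual pipeline.

```python
import math
import random

def sigmoid(z):
    # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-max(min(z, 30.0), -30.0)))

def lasso_logistic(X, y, lam=0.05, step=0.1, iters=2000):
    """L1-penalised logistic regression via proximal gradient descent (ISTA).
    The soft-thresholding step zeroes out weak coefficients, which is what
    gives LASSO its built-in variable selection."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(iters):
        gw, gb = [0.0] * p, 0.0
        for xi, yi in zip(X, y):  # gradient of mean negative log-likelihood
            err = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi))) - yi
            gb += err / n
            for j in range(p):
                gw[j] += err * xi[j] / n
        b -= step * gb
        for j in range(p):  # gradient step followed by soft-thresholding
            wj = w[j] - step * gw[j]
            w[j] = math.copysign(max(abs(wj) - step * lam, 0.0), wj)
    return w, b

random.seed(0)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
# only the first feature carries signal; the second is pure noise
y = [1 if random.random() < sigmoid(2.0 * x1) else 0 for x1, _ in X]
w, b = lasso_logistic(X, y)
```

The penalty shrinks the noise coefficient toward (often exactly) zero while the informative one survives, which is one way to read the interpretability advantage reported above.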
Normal probability plots with confidence.
Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang
2015-01-01
Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal-probability-plot-based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods.
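The paper constructs exact simultaneous intervals; as a rough illustration of the idea only, the sketch below simulates per-order-statistic intervals for a standard normal sample and applies a Bonferroni correction (α/n), which gives a conservative joint band rather than the paper's exact construction. The sample size and simulation count are arbitrary choices.

```python
import random
from statistics import NormalDist

def bonferroni_band(n, alpha=0.05, sims=5000, seed=1):
    """Simulate sorted standard-normal samples and take per-order-statistic
    quantiles at level alpha/n (Bonferroni), giving a conservative band that
    all n plotted points jointly fall inside with probability >= 1 - alpha."""
    rng = random.Random(seed)
    draws = [[] for _ in range(n)]
    for _ in range(sims):
        for i, v in enumerate(sorted(rng.gauss(0, 1) for _ in range(n))):
            draws[i].append(v)
    a = alpha / n
    band = []
    for vals in draws:
        vals.sort()
        band.append((vals[int(a / 2 * sims)], vals[int((1 - a / 2) * sims) - 1]))
    return band

n = 20
band = bonferroni_band(n)
# Blom plotting positions for the reference line of the probability plot
positions = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
```

A sample is then flagged as non-normal exactly when at least one ordered observation escapes its interval, mirroring the objective rule described in the abstract.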
Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto
2013-10-01
Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radio-induced heart toxicity.
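Model discrimination in this record (and several below) is summarized by the AUC. The AUC has a simple rank interpretation: the probability that a randomly chosen case with the complication is scored above a randomly chosen case without it. A minimal sketch of that Mann-Whitney form:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney form of the AUC: the fraction of (positive, negative)
    score pairs ranked correctly, counting ties as half."""
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos_scores for q in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

For example, `auc([0.9, 0.8, 0.25], [0.3, 0.2])` ranks five of the six pairs correctly, giving 5/6 ≈ 0.83, in the range of the AUC values reported here.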
Image-based modeling of normal tissue complication probability for radiation therapy.
Deasy, Joseph O; El Naqa, Issam
2008-01-01
We therefore conclude that NTCP models, at least in some cases, are definitely tools, not toys. However, like any good tool, they can be abused and could in fact lead to injury with misuse. In particular, we have pointed out that it is risky indeed to apply NTCP models to dose distributions that are very dissimilar to the dose distributions for which the NTCP model has been validated. While this warning is somewhat fuzzy, it is clear that more research needs to be done in this area. We believe that ultimately, for NTCP models to be used routinely in treatment planning in a safe and effective way, the actual application will need to be closely related to the characteristics of the data sets and the uncertainties of the treatment parameters in the models under consideration. Another sign that NTCP models are becoming tools rather than toys is that there is often good agreement as to what constitutes a correct direction for reducing the risk of a particular complication endpoint. Thus, for example, mean dose to normal lung almost always comes out as the most predictive, or nearly most predictive, factor in analyses of radiation pneumonitis.
Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan
2013-02-01
Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue
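The DVH conversion step described above uses the standard linear-quadratic relation EQD2 = D(d + α/β)/(2 + α/β). A small sketch of that conversion, assuming each bin's dose is delivered uniformly so the per-bin fraction size is the bin dose divided by the number of fractions:

```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy=3.0):
    """2 Gy/fraction equivalent dose from the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

def convert_dvh(dvh_bin_doses_gy, n_fractions, alpha_beta_gy=3.0):
    """Convert each DVH dose bin, assuming its dose was delivered uniformly
    over n_fractions (bin fraction size = D / n)."""
    return [eqd2(d, d / n_fractions, alpha_beta_gy) for d in dvh_bin_doses_gy]

print(eqd2(50.0, 2.0))  # → 50.0 (2 Gy/fraction is its own reference)
print(eqd2(60.0, 3.0))  # → 72.0 with alpha/beta = 3 Gy
```

Note how a larger fraction size inflates the equivalent dose for a late-responding tissue (α/β = 3 Gy), which is exactly why the DVHs must be converted before pooling fractionation schemes.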
Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.
2012-03-01
Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman (LKB) and Relative Seriality (RS)) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, rectal bleeding fits had the highest AUC (0.77), whereas it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions: Comparable
2013-01-01
Background The risk of radiation-induced gastrointestinal (GI) complications is affected by several factors other than the dose to the rectum, such as patient characteristics, hormonal or antihypertensive therapy, and acute rectal toxicity. The purpose of this work is to study clinical and dosimetric parameters impacting late GI toxicity after prostate external beam radiotherapy (RT) and to establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced GI complications. Methods A total of 57 men who had undergone definitive RT for prostate cancer were evaluated for GI events classified using the RTOG/EORTC scoring system. Their median age was 73 years (range 53–85). The patients were assessed for GI toxicity before, during, and periodically after RT completion. Several clinical variables along with rectum dose-volume parameters (Vx) were collected, and their correlation to GI toxicity was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling techniques was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Results At a median follow-up of 30 months, 37% (21/57) of patients developed G1-2 acute GI events, while 33% (19/57) were diagnosed with G1-2 late GI events. An NTCP model for late mild/moderate GI toxicity based on three variables including V65 (OR = 1.03), antihypertensive and/or anticoagulant (AH/AC) drugs (OR = 0.24), and acute GI toxicity (OR = 4.3) was selected as the most predictive model (Rs = 0.47, p < 0.001; AUC = 0.79). This three-variable model outperforms the logistic model based on V65 only (Rs = 0.28, p < 0.001; AUC = 0.69). Conclusions We propose a logistic NTCP model for late GI toxicity considering not only rectal irradiation dose but also clinical patient-specific factors. Accordingly, the risk of G1
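The reported three-variable model can be written out as a logistic NTCP function with coefficients ln(OR). The intercept is not given in the abstract, so the value below is a made-up placeholder for illustration only; the odds ratios are those reported above.

```python
import math

B0 = -3.0  # hypothetical intercept: NOT reported in the abstract
COEF = {
    "v65": math.log(1.03),    # per unit of V65 (OR = 1.03)
    "ahac": math.log(0.24),   # AH/AC drugs, protective (OR = 0.24)
    "acute": math.log(4.30),  # prior acute GI toxicity (OR = 4.3)
}

def ntcp_late_gi(v65, ahac_drugs, acute_tox):
    """Three-variable logistic NTCP: p = 1 / (1 + exp(-(b0 + sum b_i x_i)))."""
    z = (B0 + COEF["v65"] * v65
            + COEF["ahac"] * (1.0 if ahac_drugs else 0.0)
            + COEF["acute"] * (1.0 if acute_tox else 0.0))
    return 1.0 / (1.0 + math.exp(-z))
```

The signs of the ORs mean acute toxicity raises and AH/AC drugs lower the predicted risk, matching the direction of effect described in the abstract regardless of the (assumed) intercept.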
Probability state modeling theory.
Bagwell, C Bruce; Hunsberger, Benjamin C; Herbert, Donald J; Munson, Mark E; Hill, Beth L; Bray, Chris M; Preffer, Frederic I
2015-07-01
As the technology of cytometry matures, there is mounting pressure to address two major issues with data analyses. The first issue is to develop new analysis methods for high-dimensional data that can directly reveal and quantify important characteristics associated with complex cellular biology. The other issue is to replace subjective and inaccurate gating with automated methods that objectively define subpopulations and account for population overlap due to measurement uncertainty. Probability state modeling (PSM) is a technique that addresses both of these issues. The theory and important algorithms associated with PSM are presented along with simple examples and general strategies for autonomous analyses. PSM is leveraged to better understand B-cell ontogeny in bone marrow in a companion Cytometry Part B manuscript. Three short relevant videos are available in the online supporting information for both of these papers. PSM avoids the dimensionality barrier normally associated with high-dimensionality modeling by using broadened quantile functions instead of frequency functions to represent the modulation of cellular epitopes as cells differentiate. Since modeling programs ultimately minimize or maximize one or more objective functions, they are particularly amenable to automation and, therefore, represent a viable alternative to subjective and inaccurate gating approaches.
Xu ZhiYong; Liang Shixiong; Zhu Ji; Zhu Xiaodong; Zhao Jiandong; Lu Haijie; Yang Yunli; Chen Long; Wang Anyu; Fu Xiaolong; Jiang Guoliang. E-mail: jianggl@21cn.com
2006-05-01
Purpose: To describe the probability of RILD by application of the Lyman-Kutcher-Burman normal tissue complication probability (NTCP) model for primary liver carcinoma (PLC) treated with hypofractionated three-dimensional conformal radiotherapy (3D-CRT). Methods and Materials: A total of 109 PLC patients treated by 3D-CRT were followed for RILD. Of these patients, 93 had liver cirrhosis of Child-Pugh Grade A, and 16 of Child-Pugh Grade B. The Michigan NTCP model was used to predict the probability of RILD, and then the modified Lyman NTCP model was generated for Child-Pugh A and Child-Pugh B patients by maximum-likelihood analysis. Results: Of all patients, 17 developed RILD, of whom 8 were of Child-Pugh Grade A and 9 of Child-Pugh Grade B. The Michigan model underestimated the probability of RILD for PLC patients. The modified n, m, and TD50(1) were 1.1, 0.28, and 40.5 Gy and 0.7, 0.43, and 23 Gy for patients with Child-Pugh A and B, respectively, which yielded better estimations of RILD probability. The hepatic tolerable doses (TD5) would be a MDTNL of 21 Gy and 6 Gy, respectively, for Child-Pugh A and B patients. Conclusions: The Michigan model was probably not suitable for predicting RILD in PLC patients. A modified Lyman NTCP model for RILD was recommended.
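The Lyman (LKB) model referenced here reduces the DVH to a generalized equivalent uniform dose and maps it through a normal CDF. A minimal sketch, with the Child-Pugh A parameters from the abstract used purely as example inputs:

```python
from statistics import NormalDist

def lkb_ntcp(dvh, td50, m, n):
    """Lyman-Kutcher-Burman NTCP. `dvh` is a list of (dose_gy, fractional_volume)
    pairs whose volumes sum to 1; the DVH is reduced to a generalized EUD with
    volume parameter n, then NTCP = Phi((EUD - TD50) / (m * TD50))."""
    eud = sum(v * d ** (1.0 / n) for d, v in dvh) ** n
    t = (eud - td50) / (m * td50)
    return NormalDist().cdf(t)

# whole organ irradiated uniformly at TD50 gives NTCP = 0.5 by construction
p = lkb_ntcp([(40.5, 1.0)], td50=40.5, m=0.28, n=1.1)
print(round(p, 3))  # → 0.5
```

The steepness of the dose response is set by m, and n controls how strongly partial-volume irradiation is discounted, which is what the maximum-likelihood fit in the abstract adjusts.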
Robertson, John M.; Soehn, Matthias; Yan Di
2010-05-01
Purpose: Understanding the dose-volume relationship of small bowel irradiation and severe acute diarrhea may help reduce the incidence of this side effect during adjuvant treatment for rectal cancer. Methods and Materials: Consecutive patients treated curatively for rectal cancer were reviewed, and the maximum grade of acute diarrhea was determined. The small bowel was outlined on the treatment planning CT scan, and a dose-volume histogram was calculated for the initial pelvic treatment (45 Gy). Logistic regression models were fitted for varying cutoff-dose levels from 5 to 45 Gy in 5-Gy increments. The model with the highest log-likelihood was used to develop a cutoff-dose normal tissue complication probability (NTCP) model. Results: There were a total of 152 patients (48% preoperative, 47% postoperative, 5% other), predominantly treated prone (95%) with a three-field technique (94%) and a protracted venous infusion of 5-fluorouracil (78%). Acute Grade 3 diarrhea occurred in 21%. The largest log-likelihood was found for the cutoff-dose logistic regression model with 15 Gy as the cutoff dose, although the models for 20 Gy and 25 Gy had similar significance. According to this model, highly significant correlations (p <0.001) between small bowel volumes receiving at least 15 Gy and toxicity exist in the considered patient population. Similar findings applied to both the preoperatively (p = 0.001) and postoperatively irradiated groups (p = 0.001). Conclusion: The incidence of Grade 3 diarrhea was significantly correlated with the volume of small bowel receiving at least 15 Gy using a cutoff-dose NTCP model.
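The cutoff-dose scan described above (fit one logistic model per candidate cutoff, keep the cutoff with the best log-likelihood) can be sketched on synthetic data. Everything below, including the cohort size, the noise model, and the true 15 Gy signal, is invented for illustration.

```python
import math
import random

def fit_logistic_1d(x, y, iters=2000, step=0.5):
    """Fit p = sigmoid(a + b*x) by gradient ascent; return (a, b, log-likelihood)."""
    a = b = 0.0
    n = len(x)
    for _ in range(iters):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (yi - p) / n
            gb += (yi - p) * xi / n
        a += step * ga
        b += step * gb
    ll = sum(math.log(p if yi else 1.0 - p)
             for xi, yi in zip(x, y)
             for p in [1.0 / (1.0 + math.exp(-(a + b * xi)))])
    return a, b, ll

cutoffs = list(range(5, 50, 5))  # 5..45 Gy in 5-Gy steps, as in the study
rng = random.Random(2)
patients = []
for _ in range(60):
    v15 = rng.random()  # fractional small-bowel volume receiving >= 15 Gy
    # volumes at the other cutoffs: increasingly noisy copies of V15
    vols = {c: min(max(v15 + rng.gauss(0.0, 0.02 * abs(c - 15) / 5), 0.0), 1.0)
            for c in cutoffs}
    tox = 1 if rng.random() < 1.0 / (1.0 + math.exp(2.0 - 4.0 * v15)) else 0
    patients.append((vols, tox))

fits = {c: fit_logistic_1d([pt[0][c] for pt in patients], [pt[1] for pt in patients])
        for c in cutoffs}
best_cutoff = max(cutoffs, key=lambda c: fits[c][2])
```

The winning cutoff is simply the argmax of the per-cutoff log-likelihoods, the same selection rule the abstract applies to its 5-45 Gy grid.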
Bazan, Jose G.; Luxton, Gary; Kozak, Margaret M.; Anderson, Eric M.; Hancock, Steven L.; Kapp, Daniel S.; Kidd, Elizabeth A.; Koong, Albert C.; Chang, Daniel T.
2013-12-01
Purpose: To determine how chemotherapy agents affect radiation dose parameters that correlate with acute hematologic toxicity (HT) in patients treated with pelvic intensity modulated radiation therapy (P-IMRT) and concurrent chemotherapy. Methods and Materials: We assessed HT in 141 patients who received P-IMRT for anal, gynecologic, rectal, or prostate cancers, 95 of whom received concurrent chemotherapy. Patients were separated into 4 groups: mitomycin (MMC) + 5-fluorouracil (5FU, 37 of 141), platinum ± 5FU (Cis, 32 of 141), 5FU (26 of 141), and P-IMRT alone (46 of 141). The pelvic bone was contoured as a surrogate for pelvic bone marrow (PBM) and divided into subsites: ilium, lower pelvis, and lumbosacral spine (LSS). The volumes of each region receiving 5-40 Gy were calculated. The endpoint for HT was grade ≥3 (HT3+) leukopenia, neutropenia or thrombocytopenia. Normal tissue complication probability was calculated using the Lyman-Kutcher-Burman model. Logistic regression was used to analyze association between HT3+ and dosimetric parameters. Results: Twenty-six patients experienced HT3+: 10 of 37 (27%) MMC, 14 of 32 (44%) Cis, 2 of 26 (8%) 5FU, and 0 of 46 P-IMRT. PBM dosimetric parameters were correlated with HT3+ in the MMC group but not in the Cis group. LSS dosimetric parameters were well correlated with HT3+ in both the MMC and Cis groups. Constrained optimization (0
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
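The core issue, heterogeneous classification probabilities, can be illustrated in a few lines: if p_i = expit(μ + σz_i) varies across sampling units, the average correct-classification probability is not expit(μ), so plugging in a single shared p is biased. The parameter values below are invented for the demonstration.

```python
import math
import random

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit_normal_mean(mu, sigma, size=20000, seed=7):
    """Monte-Carlo mean of p_i = expit(mu + sigma * z_i), z_i ~ N(0, 1)."""
    rng = random.Random(seed)
    return sum(expit(mu + sigma * rng.gauss(0, 1)) for _ in range(size)) / size

mean_p = logit_normal_mean(mu=1.0, sigma=1.5)
# Jensen's inequality: E[expit(mu + sigma*Z)] != expit(mu) once sigma > 0,
# which is why ignoring the heterogeneity biases the plain estimator
print(round(mean_p, 3), round(expit(1.0), 3))
```

Treating p_i as logit-normal random effects, as the paper does, models exactly this gap instead of averaging it away.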
Docquière, Nicolas; Bondiau, Pierre-Yves; Balosso, Jacques
2016-01-01
Background The equivalent uniform dose (EUD) radiobiological model can be applied to lung cancer treatment plans to estimate the tumor control probability (TCP) and the normal tissue complication probability (NTCP) using different dose calculation models. Then, based on the different calculated doses, the quality-adjusted life years (QALY) score can be assessed against the uncomplicated tumor control probability (UTCP) concept in order to predict the overall outcome of the different treatment plans. Methods Nine lung cancer cases were included in this study. For each patient, two treatment plans were generated. The doses were calculated respectively with a pencil beam model, namely pencil beam convolution (PBC) with 1D density correction by the Modified Batho (MB) method, and with a point kernel model, the anisotropic analytical algorithm (AAA), using exactly the same prescribed dose, normalized to 100% at the isocentre point inside the target, and the same beam arrangements. The radiotherapy outcomes and QALY were compared. The bootstrap method was used to improve the estimation of the 95% confidence intervals (95% CI). The Wilcoxon paired test was used to calculate P values. Results Compared to AAA, considered more realistic, the PBC-MB overestimated the TCP while underestimating the NTCP, P<0.05. Thus the UTCP and the QALY score were also overestimated. Conclusions To correlate measured QALYs obtained from the follow-up of patients with QALYs calculated from DVH metrics, the more accurate dose calculation models should first be integrated into clinical use. Second, clinically measured outcomes are necessary to tune the parameters of the NTCP model used to link the treatment outcome with the QALY. Only after these two steps would the comparison and ranking of different radiotherapy plans be possible, avoiding over/under estimation of QALY and any other clinico-biological estimates. PMID:28149761
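The UTCP concept used above for ranking plans combines tumour control with freedom from complications. A minimal sketch, assuming independence between the TCP and NTCP endpoints (the usual simplification, not necessarily the authors' exact formulation):

```python
def utcp(tcp, ntcp_list):
    """Uncomplicated tumour control probability: the tumour is controlled AND
    none of the complications occurs, with endpoints treated as independent."""
    p = tcp
    for ntcp in ntcp_list:
        p *= 1.0 - ntcp
    return p

print(utcp(0.8, [0.10, 0.05]))  # ≈ 0.684
```

Because both TCP and every NTCP enter multiplicatively, any systematic dose-calculation bias propagates directly into the UTCP (and hence the QALY ranking), which is the abstract's central warning.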
Modality, probability, and mental models.
Hinterecker, Thomas; Knauff, Markus; Johnson-Laird, P N
2016-10-01
We report 3 experiments investigating novel sorts of inference, such as: A or B or both. Therefore, possibly (A and B). Where the contents were sensible assertions, for example, Space tourism will achieve widespread popularity in the next 50 years or advances in material science will lead to the development of antigravity materials in the next 50 years, or both. Most participants accepted the inferences as valid, though they are invalid in modal logic and in probabilistic logic too. But, the theory of mental models predicts that individuals should accept them. In contrast, inferences of this sort—A or B but not both. Therefore, A or B or both—are both logically valid and probabilistically valid. Yet, as the model theory also predicts, most reasoners rejected them. The participants’ estimates of probabilities showed that their inferences tended not to be based on probabilistic validity, but that they did rate acceptable conclusions as more probable than unacceptable conclusions. We discuss the implications of the results for current theories of reasoning.
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-01-01
Background and Purpose Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Material and Methods Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. Results The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFC-standard model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d. = 0.09) and 3.9 (s.d. = 2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. Conclusions The RFC-standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. PMID:27240717
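The calibration slope reported above is the coefficient from regressing observed outcomes on the logit of the predicted probabilities (slope ≈ 1 means well calibrated; a value like 3.9 indicates the predicted risks are too tightly clustered). A sketch on synthetic, perfectly calibrated predictions; the data and fitting choices are invented:

```python
import math
import random

def calibration_slope(p_pred, y, iters=2000, step=0.3):
    """Fit y ~ sigmoid(a + b * logit(p_pred)) by gradient ascent and return b.
    b ~ 1 for calibrated predictions; b < 1 suggests overconfident ones."""
    x = [math.log(p / (1.0 - p)) for p in p_pred]
    a = b = 0.0
    n = len(x)
    for _ in range(iters):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            q = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (yi - q) / n
            gb += (yi - q) * xi / n
        a += step * ga
        b += step * gb
    return b

rng = random.Random(5)
p = [min(max(rng.random(), 0.02), 0.98) for _ in range(800)]
y = [1 if rng.random() < pi else 0 for pi in p]  # outcomes drawn at the stated risks
slope = calibration_slope(p, y)
```

Since the outcomes here are generated exactly at the predicted risks, the recovered slope sits near 1, unlike the 3.9 reported for the RFC model.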
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
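The integrated likelihood mentioned above has the form of a zero-inflated binomial mixture. A minimal sketch with a finite (two-point) mixing distribution for p standing in for the integral; the numbers in the example are invented:

```python
from math import comb

def zib_likelihood(detections, visits, psi, p_mix):
    """Zero-inflated binomial mixture likelihood for site-occupancy data.
    detections: per-site detection counts over `visits` surveys.
    psi: occupancy probability. p_mix: [(weight, p), ...], a finite mixing
    distribution over the detection probability p."""
    def binom_pmf(y, p):
        return comb(visits, y) * p ** y * (1.0 - p) ** (visits - y)
    like = 1.0
    for y in detections:
        marginal = sum(w * binom_pmf(y, p) for w, p in p_mix)
        # a site with zero detections is either occupied-but-missed or unoccupied
        like *= psi * marginal + ((1.0 - psi) if y == 0 else 0.0)
    return like

# two sites never detected, one detected twice, over 5 visits
L = zib_likelihood([0, 0, 2], visits=5, psi=0.6, p_mix=[(0.5, 0.2), (0.5, 0.6)])
```

The paper's point is that very different `p_mix` choices can produce near-identical likelihoods for the same data while implying different occupancy estimates.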
Computational Modelling and Simulation Fostering New Approaches in Learning Probability
ERIC Educational Resources Information Center
Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid
2006-01-01
Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…
A Quantum Probability Model of Causal Reasoning
Trueblood, Jennifer S.; Busemeyer, Jerome R.
2012-01-01
People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects thus proving to be a viable new candidate for modeling human judgment. PMID:22593747
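The incompatibility idea at the heart of the model can be shown with a toy two-dimensional example: when two binary questions are represented by projections onto non-orthogonal axes, the probability of answering "yes" to A then B differs from B then A. This is only a cartoon of the paper's model; the angles are arbitrary.

```python
import math

def project(state, angle_deg):
    """Project a 2-D unit state onto the ray at angle_deg; return the
    probability (squared amplitude) and the collapsed, renormalised state."""
    ax, ay = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    amp = state[0] * ax + state[1] * ay
    sign = 1.0 if amp >= 0 else -1.0
    return amp * amp, (sign * ax, sign * ay)

state = (1.0, 0.0)

pA, after_A = project(state, 30)       # ask A first...
pB_given_A, _ = project(after_A, 75)   # ...then B
p_A_then_B = pA * pB_given_A

pB, after_B = project(state, 75)       # ask B first...
pA_given_B, _ = project(after_B, 30)   # ...then A
p_B_then_A = pB * pA_given_B

print(round(p_A_then_B, 3), round(p_B_then_A, 3))  # order matters
```

Because the two projections do not commute, the sequential probabilities differ, which is the mechanism the model uses to capture order effects in predictive causal judgments.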
A probability distribution model for rain rate
NASA Technical Reports Server (NTRS)
Kedem, Benjamin; Pavlopoulos, Harry; Guan, Xiaodong; Short, David A.
1994-01-01
A systematic approach is suggested for modeling the probability distribution of rain rate. Rain rate, conditional on rain and averaged over a region, is modeled as a temporally homogeneous diffusion process with appropriate boundary conditions. The approach requires a drift coefficient (the conditional average instantaneous rate of change of rain intensity) as well as a diffusion coefficient (the conditional average magnitude of the rate of growth and decay of rain rate about its drift). Under certain assumptions on the drift and diffusion coefficients compatible with rain rate, a new parametric family, containing the lognormal distribution, is obtained for the continuous part of the stationary limit probability distribution. The family is fitted to tropical rainfall from Darwin and Florida, and it is found that the lognormal distribution provides adequate fits as compared with other members of the family and also with the gamma distribution.
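The lognormal member of the family is easy to fit by maximum likelihood: μ and σ are just the mean and standard deviation of the log rates. A sketch on synthetic "rain rates" drawn from a known lognormal, showing the parameters are recovered; the data are invented:

```python
import math
import random

def fit_lognormal(sample):
    """MLE for a lognormal: mu and sigma are the mean and (population)
    standard deviation of the log-transformed values."""
    logs = [math.log(x) for x in sample]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return mu, sigma

rng = random.Random(3)
rates = [math.exp(rng.gauss(0.5, 0.8)) for _ in range(5000)]  # synthetic rain rates
mu, sigma = fit_lognormal(rates)
print(round(mu, 2), round(sigma, 2))  # close to the generating (0.5, 0.8)
```

Comparing this fit against the gamma (or other family members) would then be a matter of comparing the resulting log-likelihoods, as the paper does for the Darwin and Florida data.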
Tai An; Erickson, Beth; Li, X. Allen
2009-05-01
Purpose: The ability to predict normal tissue complication probability (NTCP) is essential for NTCP-based treatment planning. The purpose of this work is to estimate the Lyman NTCP model parameters for liver irradiation from published clinical data of different fractionation regimens. A new expression of normalized total dose (NTD) is proposed to convert NTCP data between different treatment schemes. Methods and Materials: The NTCP data of radiation-induced liver disease (RILD) from external beam radiation therapy for primary liver cancer patients were selected for analysis. The data were collected from 4 institutions for tumor sizes in the range of 8-10 cm. The dose per fraction ranged from 1.5 Gy to 6 Gy. A modified linear-quadratic model with two components corresponding to radiosensitive and radioresistant cells in the normal liver tissue was proposed to understand the new NTD formalism. Results: There are five parameters in the model: TD50, m, n, α/β and f. With the two parameters n and α/β fixed to 1.0 and 2.0 Gy, respectively, the parameters extracted from the fitting are TD50(1) = 40.3 ± 8.4 Gy, m = 0.36 ± 0.09, f = 0.156 ± 0.074 Gy and TD50(1) = 23.9 ± 5.3 Gy, m = 0.41 ± 0.15, f = 0.0 ± 0.04 Gy for patients with liver cirrhosis scores of Child-Pugh A and Child-Pugh B, respectively. The fitting results showed that the liver cirrhosis score significantly affects the fractional dose dependence of the NTD. Conclusion: The Lyman parameters generated presently and the new form of the NTD may be used to predict NTCP for treatment planning of innovative liver irradiation with different fractionations, such as hypofractionated stereotactic body radiation therapy.
Statistical Physics of Pairwise Probability Models
Roudi, Yasser; Aurell, Erik; Hertz, John A.
2009-01-01
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the mean values and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models. PMID:19949460
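For a small system, a pairwise model of the kind described can be evaluated exactly by enumerating states. A sketch with illustrative fields h and couplings J (binary units; the parameter values are placeholders, not fitted to any data):

```python
import itertools
import math

def pairwise_probs(h, J):
    """Exact state probabilities of a pairwise model:
    P(s) proportional to exp(sum_i h[i]*s[i] + sum_{i<j} J[i][j]*s[i]*s[j]),
    with binary units s[i] in {0, 1}."""
    n = len(h)
    states = list(itertools.product([0, 1], repeat=n))
    weights = []
    for s in states:
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        weights.append(math.exp(e))
    z = sum(weights)  # partition function
    return {s: w / z for s, w in zip(states, weights)}

# Illustrative parameters for three units.
probs = pairwise_probs([0.2, -0.1, 0.0],
                       [[0, 0.5, 0], [0, 0, -0.3], [0, 0, 0]])
```

Exact enumeration scales as 2^n, which is why the approximate inference methods studied in the paper matter for realistically sized networks.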
Probabilities on cladograms: introduction to the alpha model
NASA Astrophysics Data System (ADS)
Ford, Daniel J.
2005-11-01
The alpha model, a parametrized family of probabilities on cladograms (rooted binary leaf labeled trees), is introduced. This model is Markovian self-similar, deletion-stable (sampling consistent), and passes through the Yule, Uniform and Comb models. An explicit formula is given to calculate the probability of any cladogram or tree shape under the alpha model. Sackin's and Colless' indices are shown to be O(n^{1+α}) with asymptotic covariance equal to 1. Thus the expected depth of a random leaf in a tree with n leaves is O(n^α). The number of cherries on a random alpha tree is shown to be asymptotically normal with known mean and variance. Finally, the shape of published phylogenies is examined, using trees from Treebase.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n x n" correlation matrix of the "chi(sub i)" and the standardized multivariate…
PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties
Caron, D. S.; Browne, E.; Norman, E. B.
2009-08-21
The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
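The README distributed with PABS documents the actual procedure; as a hedged illustration of the task it performs, the sketch below normalizes relative emission probabilities to an absolute scale (summing to one) and propagates input uncertainties to first order, assuming independent inputs. The real program's uncertainty treatment may differ.

```python
import math

def normalize_emissions(rel, sig):
    """Normalize relative emission probabilities so they sum to 1 and
    propagate uncertainties to first order (independent inputs assumed).
    Uses d(p_i)/d(r_j) = (delta_ij * T - r_i) / T**2, with T = sum(rel)."""
    total = sum(rel)
    probs = [r / total for r in rel]
    uncert = []
    for i, r in enumerate(rel):
        var = 0.0
        for j, s in enumerate(sig):
            deriv = ((total if i == j else 0.0) - r) / total ** 2
            var += (deriv * s) ** 2
        uncert.append(math.sqrt(var))
    return probs, uncert

# Illustrative relative intensities and their uncertainties.
p, u = normalize_emissions([30.0, 60.0, 10.0], [1.0, 2.0, 0.5])
```

Note that normalization introduces correlations between the output probabilities even when the inputs are independent, which is one reason "realistic uncertainties" require care.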
Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height?" and "What is the probability of (monetary) inflation exceeding 4% and housing price index below 110?" To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
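The height-weight question above reduces to the conditional distribution of a bivariate normal, which is itself normal. A sketch with assumed parameters (the means, standard deviations, and correlation below are placeholders, not the paper's dataset):

```python
from statistics import NormalDist

def conditional_interval_prob(lo, hi, mu_y, sd_y, rho, mu_x, sd_x, x):
    """P(lo < Y < hi | X = x) under a bivariate normal (X, Y):
    Y | X = x is normal with mean mu_y + rho*sd_y*(x - mu_x)/sd_x
    and standard deviation sd_y*sqrt(1 - rho**2)."""
    mean = mu_y + rho * sd_y * (x - mu_x) / sd_x
    sd = sd_y * (1.0 - rho ** 2) ** 0.5
    d = NormalDist(mean, sd)
    return d.cdf(hi) - d.cdf(lo)

# Assumed parameters: height ~ N(165, 8) cm, weight ~ N(130, 15) lb,
# correlation 0.6; probability the weight is 120-140 lb at average height.
p = conditional_interval_prob(120, 140, 130, 15, 0.6, 165, 8, 165)
```

Conditioning shrinks the spread by the factor sqrt(1 - rho^2), so the conditional interval probability is larger than the marginal one.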
NASA Astrophysics Data System (ADS)
Ma, Lijun
2001-11-01
A recent multi-institutional clinical study suggested possible benefits of lowering the prescription isodose lines for stereotactic radiosurgery procedures. In this study, we investigate the dependence of the normal brain integral dose and the normal tissue complication probability (NTCP) on the prescription isodose values for γ-knife radiosurgery. An analytical dose model was developed for γ-knife treatment planning. The dose model was commissioned by fitting the measured dose profiles for each helmet size. The dose model was validated by comparing its results with the Leksell gamma plan (LGP, version 5.30) calculations. The normal brain integral dose and the NTCP were computed and analysed for an ensemble of treatment cases. The functional dependence of the normal brain integral dose and the NTCP versus the prescribing isodose values was studied for these cases. We found that the normal brain integral dose and the NTCP increase significantly when lowering the prescription isodose lines from 50% to 35% of the maximum tumour dose. Alternatively, the normal brain integral dose and the NTCP decrease significantly when raising the prescribing isodose lines from 50% to 65% of the maximum tumour dose. The results may be used as a guideline for designing future dose escalation studies for γ-knife applications.
Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L.; Voss, S. Randal
2014-01-01
Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls (Ambystoma mexicanum) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary-housed males and group-housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury, and not abnormal development, probably explains these limb defects, because limbs with normal anatomy regenerated after rostral amputations were performed. We infer that only 43% of AGSC larvae will present four anatomically normal-looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury. PMID:25745564
Probability and Statistics in Sensor Performance Modeling
2010-12-01
Acoustic or electromagnetic waves are scattered by both objects and turbulent wind. A version of the Rice-Nakagami model (specifically, a transformed Rice-Nakagami distribution) is among the statistical models treated, alongside the Gaussian, lognormal, exponential, and gamma distributions, as well as a discrete model.
Bis[aminoguanidinium(1+)] hexafluorozirconate(IV): redeterminations and normal probability analysis.
Ross, C R; Bauer, M R; Nielson, R M; Abrahams, S C
2004-01-01
The crystal structure of bis[aminoguanidinium(1+)] hexafluorozirconate(IV), (CH(7)N(4))(2)[ZrF(6)], originally reported by Bukvetskii, Gerasimenko & Davidovich [Koord. Khim. (1990), 16, 1479-1484], has been redetermined independently using two different samples. Normal probability analysis confirms the reliability of all refined parameter standard uncertainties in the new determinations, whereas systematic error detectable in the earlier work leads to a maximum difference of 0.069 (6) Å in atomic positions between the previously reported and present values of an F-atom y coordinate. Radiation-induced structural damage in aminoguanidinium polyfluorozirconates may result from minor displacements of H atoms in weak N-H...F bonds to new potential minima and subsequent anionic realignment.
Probability Modeling and Thinking: What Can We Learn from Practice?
ERIC Educational Resources Information Center
Pfannkuch, Maxine; Budgett, Stephanie; Fewster, Rachel; Fitch, Marie; Pattenwise, Simeon; Wild, Chris; Ziedins, Ilze
2016-01-01
Because new learning technologies are enabling students to build and explore probability models, we believe that there is a need to determine the big enduring ideas that underpin probabilistic thinking and modeling. By uncovering the elements of the thinking modes of expert users of probability models we aim to provide a base for the setting of…
Estimating Prior Model Probabilities Using an Entropy Principle
NASA Astrophysics Data System (ADS)
Ye, M.; Meyer, P. D.; Neuman, S. P.; Pohlmann, K.
2004-12-01
Considering conceptual model uncertainty is an important process in environmental uncertainty/risk analyses. Bayesian Model Averaging (BMA) (Hoeting et al., 1999) and its Maximum Likelihood version, MLBMA (Neuman, 2003), jointly assess predictive uncertainty of competing alternative models to avoid bias and underestimation of uncertainty caused by relying on a single model. These methods provide posterior distributions (or, equivalently, leading moments) of quantities of interest for decision-making. One important step of these methods is to specify prior probabilities of alternative models for the calculation of posterior model probabilities. This problem, however, has not been satisfactorily resolved, and equally likely prior model probabilities are usually accepted as a neutral choice. Ye et al. (2004) have shown that whereas using equally likely prior model probabilities has led to acceptable geostatistical estimates of log air permeability data from fractured unsaturated tuff at the Apache Leap Research Site (ALRS) in Arizona, identifying more accurate prior probabilities can improve these estimates. In this paper we present a new methodology to evaluate prior model probabilities by maximizing Shannon's entropy with restrictions postulated a priori based on model plausibility relationships. It yields optimum prior model probabilities conditional on the prior information used to postulate the restrictions. The restrictions and corresponding prior probabilities can be modified as more information becomes available. The proposed method is relatively easy to use in practice as it is generally less difficult for experts to postulate relationships between models than to specify numerical prior model probability values. Log score, mean square prediction error (MSPE) and mean absolute predictive error (MAPE) criteria consistently show that applying our new method to the ALRS data reduces geostatistical estimation errors provided relationships between models are
Probability of Future Observations Exceeding One-Sided, Normal, Upper Tolerance Limits
Edwards, Timothy S.
2014-10-29
Normal tolerance limits are frequently used in dynamic environment specifications of aerospace systems as a method to account for aleatory variability in the environments. Upper tolerance limits, when used in this way, are computed from records of the environment and used to enforce conservatism in the specification by describing upper extreme values the environment may take in the future. Components and systems are designed to withstand these extreme loads to ensure they do not fail under normal use conditions. The degree of conservatism in the upper tolerance limits is controlled by specifying the coverage and confidence level (usually written in “coverage/confidence” form). In high-consequence systems it is common to specify tolerance limits at 95% or 99% coverage and confidence at the 50% or 90% level. Despite the ubiquity of upper tolerance limits in the aerospace community, analysts and decision-makers frequently misinterpret their meaning. The misinterpretation extends into the standards that govern much of the acceptance and qualification of commercial and government aerospace systems. As a result, the risk of a future observation of the environment exceeding the upper tolerance limit is sometimes significantly underestimated by decision makers. This note explains the meaning of upper tolerance limits and a related measure, the upper prediction limit. The objective of this work, then, is to clarify the probability of exceeding these limits in flight so that decision-makers can better understand the risk associated with exceeding design and test levels during flight and balance the cost of design and development with that of mission failure.
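The distinction the note draws can be seen numerically. A Monte Carlo sketch, using an illustrative sample size and the large-sample 95%-coverage/50%-confidence factor k ≈ 1.645 (exact tolerance factors come from the noncentral t distribution and are not computed here):

```python
import random
import statistics

def exceedance_rate(n, k, trials=20000, seed=1):
    """Estimate the probability that one future observation from N(0, 1)
    exceeds the upper limit xbar + k*s computed from a fresh sample of
    size n. Illustrates that a 95%-coverage limit is exceeded with
    probability near 5%, not coverage-complement times confidence."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        limit = statistics.mean(sample) + k * statistics.stdev(sample)
        if rng.gauss(0.0, 1.0) > limit:
            hits += 1
    return hits / trials

rate = exceedance_rate(n=30, k=1.645)  # near 0.05, not 0.05 * 0.50
```

For finite n the exceedance probability is slightly above the nominal 5% because the sample mean and standard deviation are themselves uncertain, which is the effect proper tolerance factors correct for.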
Gendist: An R Package for Generated Probability Distribution Models
Abu Bakar, Shaiful Anuar; Nadarajah, Saralees; ABSL Kamarul Adzhar, Zahrul Azmir; Mohamed, Ibrahim
2016-01-01
In this paper, we introduce the R package gendist that computes the probability density function, the cumulative distribution function, the quantile function and generates random values for several generated probability distribution models including the mixture model, the composite model, the folded model, the skewed symmetric model and the arc tan model. These models are extensively used in the literature and the R functions provided here are flexible enough to accommodate various univariate distributions found in other R packages. We also show its applications in graphing, estimation, simulation and risk measurements. PMID:27272043
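gendist itself is an R package; as a rough Python analogue of its mixture model, the density of a finite mixture is just the weighted sum of component densities (the components and weights below are arbitrary illustrations):

```python
from statistics import NormalDist

def mixture_pdf(x, weights, dists):
    """Finite mixture density: f(x) = sum_k w_k * f_k(x),
    with the weights w_k summing to one."""
    return sum(w * d.pdf(x) for w, d in zip(weights, dists))

# Arbitrary two-component normal mixture evaluated at x = 0.
f = mixture_pdf(0.0, [0.3, 0.7], [NormalDist(0, 1), NormalDist(2, 1)])
```

The same weighted-sum pattern extends to the mixture CDF, and inverting that CDF numerically gives the quantile function, which is essentially what such packages automate.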
NASA Astrophysics Data System (ADS)
Gyenis, Balázs
2017-02-01
We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV.
Review of Literature for Model Assisted Probability of Detection
Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.; Anderson, Michael T.
2014-09-30
This is a draft technical letter report for NRC client documenting a literature review of model assisted probability of detection (MAPOD) for potential application to nuclear power plant components for improvement of field NDE performance estimations.
Normalization of Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.
2011-01-01
Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities, which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities, by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
Fixation probability for lytic viruses: the attachment-lysis model.
Patwa, Z; Wahl, L M
2008-09-01
The fixation probability of a beneficial mutation is extremely sensitive to assumptions regarding the organism's life history. In this article we compute the fixation probability using a life-history model for lytic viruses, a key model organism in experimental studies of adaptation. The model assumes that attachment times are exponentially distributed, but that the lysis time, the time between attachment and host cell lysis, is constant. We assume that the growth of the wild-type viral population is controlled by periodic sampling (population bottlenecks) and also include the possibility that clearance may occur at a constant rate, for example, through washout in a chemostat. We then compute the fixation probability for mutations that increase the attachment rate, decrease the lysis time, increase the burst size, or reduce the probability of clearance. The fixation probability of these four types of beneficial mutations can be vastly different and depends critically on the time between population bottlenecks. We also explore mutations that affect lysis time, assuming that the burst size is constrained by the lysis time, for experimental protocols that sample either free phage or free phage and artificially lysed infected cells. In all cases we predict that the fixation probability of beneficial alleles is remarkably sensitive to the time between population bottlenecks.
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning.
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies-Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular method. In the MP method the optimization criterion is the number of substitutions of the nucleotides computed by the differences in the investigated nucleotide sequences. However, the MP method is often criticized as it only counts the substitutions observable at the current time and all the unobservable substitutions that really occur in the evolutionary history are omitted. In order to take into account the unobservable substitutions, some substitution models have been established and they are now widely used in the DM and ML methods but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees and the reconstructed trees in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
Gap probability - Measurements and models of a pecan orchard
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI
1992-01-01
Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle made under the canopy, looking upwards, at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown-level parameters include the shape of the crown envelope and the spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
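At the leaf level, a common within-crown approximation for gap probability is Beer's law; the orchard model above layers a crown-level geometric term on top of it. A sketch of the leaf-level term only, assuming a spherical leaf-angle distribution (projection coefficient G = 0.5):

```python
import math

def within_crown_gap(lai, g=0.5, theta_deg=0.0):
    """Beer's-law gap probability P = exp(-G * LAI / cos(theta)):
    the chance that a ray at zenith angle theta passes through foliage
    of leaf area index LAI without interception."""
    theta = math.radians(theta_deg)
    return math.exp(-g * lai / math.cos(theta))

p_gap = within_crown_gap(lai=3.0)  # nadir view
```

Gap probability falls off with view zenith angle because the path length through the canopy grows as 1/cos(theta).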
Simulation modeling of the probability of magmatic disruption of the potential Yucca Mountain Site
Crowe, B.M.; Perry, F.V.; Valentine, G.A.; Wallmann, P.C.; Kossik, R.
1993-11-01
The first phase of risk simulation modeling was completed for the probability of magmatic disruption of a potential repository at Yucca Mountain. E1, the recurrence rate of volcanic events, is modeled using bounds from active basaltic volcanic fields and midpoint estimates of E1. The cumulative probability curves for E1 are generated by simulation modeling using a form of a triangular distribution. The 50% estimates are about 5 to 8 {times} 10{sup {minus}8} events yr{sup {minus}1}. The simulation modeling shows that the cumulative probability distribution for E1 is more sensitive to the probability bounds than the midpoint estimates. E2 (the disruption probability) is modeled through risk simulation using a normal distribution and midpoint estimates from multiple alternative stochastic and structural models. The 50% estimate of E2 is 4.3 {times} 10{sup {minus}3}. The probability of magmatic disruption of the potential Yucca Mountain site is 2.5 {times} 10{sup {minus}8} yr{sup {minus}1}. This median estimate decreases to 9.6 {times} 10{sup {minus}9} yr{sup {minus}1} if E1 is modified for the structural models used to define E2. The Repository Integration Program was tested to compare releases of a simulated repository (without volcanic events) to releases from time histories which may include volcanic disruptive events. Results show that the performance modeling can be used for sensitivity studies of volcanic effects.
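The structure of the simulation described above can be sketched as a product of two uncertain factors: E1 drawn from a triangular distribution and E2 from a normal one. All numeric bounds, modes, and spreads below are illustrative placeholders, not the study's values:

```python
import random
import statistics

def disruption_prob_sim(trials=50000, seed=42):
    """Monte Carlo sketch: annual disruption probability = E1 * E2, with
    E1 (volcanic recurrence rate, events/yr) ~ triangular and
    E2 (conditional disruption probability) ~ normal, clipped at zero.
    The bounds, mode, and spread are illustrative assumptions only."""
    rng = random.Random(seed)
    draws = []
    for _ in range(trials):
        e1 = rng.triangular(1e-9, 2e-8, 6e-9)  # low, high, mode
        e2 = max(0.0, rng.gauss(4.3e-3, 1e-3))
        draws.append(e1 * e2)
    return statistics.median(draws)

median_p = disruption_prob_sim()
```

Reporting the median (and other quantiles) of the simulated product is what produces the cumulative probability curves the abstract refers to.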
Scene text detection based on probability map and hierarchical model
NASA Astrophysics Data System (ADS)
Zhou, Gang; Liu, Yuehu
2012-06-01
Scene text detection is an important step for the text-based information extraction system. This problem is challenging due to the variations of size, unknown colors, and background complexity. We present a novel algorithm to robustly detect text in scene images. To segment text candidate connected components (CC) from images, a text probability map consisting of the text position and scale information is estimated by a text region detector. To filter out the non-text CCs, a hierarchical model consisting of two classifiers in cascade is utilized. The first stage of the model estimates text probabilities with unary component features. The second stage classifier is trained with both probability features and similarity features. Since the proposed method is learning-based, there are very few manual parameters required. Experimental results on the public benchmark ICDAR dataset show that our algorithm outperforms other state-of-the-art methods.
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
Defining Predictive Probability Functions for Species Sampling Models.
Lee, Jaeyong; Quintana, Fernando A; Müller, Peter; Trippa, Lorenzo
2013-01-01
We review the class of species sampling models (SSM). In particular, we investigate the relation between the exchangeable partition probability function (EPPF) and the predictive probability function (PPF). It is straightforward to define a PPF from an EPPF, but the converse is not necessarily true. In this paper we introduce the notion of putative PPFs and show novel conditions for a putative PPF to define an EPPF. We show that all possible PPFs in a certain class have to define (unnormalized) probabilities for cluster membership that are linear in cluster size. We give a new necessary and sufficient condition for arbitrary putative PPFs to define an EPPF. Finally, we show posterior inference for a large class of SSMs with a PPF that is not linear in cluster size and discuss a numerical method to derive its PPF.
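The "linear in cluster size" case discussed above is realized by the Dirichlet process PPF, i.e. the Chinese restaurant process. A minimal sketch:

```python
def crp_ppf(cluster_sizes, alpha=1.0):
    """Dirichlet-process predictive probability function (Chinese
    restaurant process): the next item joins existing cluster j with
    probability n_j / (n + alpha) and opens a new cluster with
    probability alpha / (n + alpha)."""
    n = sum(cluster_sizes)
    probs = [c / (n + alpha) for c in cluster_sizes]
    probs.append(alpha / (n + alpha))  # probability of a new cluster
    return probs

p = crp_ppf([3, 1], alpha=1.0)  # two existing clusters of sizes 3 and 1
```

The paper's point is that a putative PPF that is *not* linear in cluster size need not correspond to any exchangeable partition probability function, which is why the extra conditions it derives are needed.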
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Modeling highway travel time distribution with conditional probability models
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling; Han, Lee
2014-01-01
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains a truck's location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of day and days of the week. This research focuses on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link conditional probability to estimate the expected distributions of route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
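The convolution and conditional-probability ideas can be sketched on toy link travel-time distributions (the PMFs below are illustrative values, not ATRI data; the stacked copy of the second PMF encodes zero correlation, so it reproduces the plain convolution):

```python
import numpy as np

# Illustrative travel-time PMFs (minutes) for two successive links.
link_a = np.array([0, 0, .1, .3, .4, .2, 0, 0, 0, 0])
link_b = np.array([0, .2, .5, .3, 0, 0, 0, 0, 0, 0])

# Route PMF under independence: convolution of the two link PMFs.
route = np.convolve(link_a, link_b)

# With link-to-link correlation, the second PMF is replaced by a
# conditional PMF P(t_b | t_a), one row per upstream travel time.
# The placeholder below stacks identical rows (no correlation).
cond = np.tile(link_b, (len(link_a), 1))
route_cond = np.zeros(len(link_a) + len(link_b) - 1)
for ta, pa in enumerate(link_a):
    for tb, pb in enumerate(cond[ta]):
        route_cond[ta + tb] += pa * pb          # P(t_a) * P(t_b | t_a)
```

Replacing the rows of `cond` with empirically estimated conditional PMFs is what lets the method capture upstream-downstream speed correlation.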
The development of posterior probability models in risk-based integrity modeling.
Thodi, Premkumar N; Khan, Faisal I; Haddara, Mahmoud R
2010-03-01
There is a need for accurate modeling of the mechanisms causing material degradation of equipment in process installations, to ensure the safety and reliability of the equipment. Degradation mechanisms are stochastic processes and can be best described using risk-based approaches. Risk-based integrity assessment quantifies the level of risk to which individual components are subjected and provides means to mitigate that risk in a safe and cost-effective manner. The uncertainty and variability in structural degradation are best modeled by probability distributions. Prior probability models provide an initial description of the degradation mechanisms. As more inspection data become available, these prior probability models can be revised to obtain posterior probability models, which represent the current system and can be used to predict future failures. In this article, a rejection sampling-based Metropolis-Hastings (M-H) algorithm is used to develop posterior distributions. The M-H algorithm is a Markov chain Monte Carlo algorithm used to generate a sequence of posterior samples without knowledge of the normalizing constant. After discarding the transient samples in the generated Markov chain, the steady-state samples are accepted or rejected based on an acceptance criterion. To validate the estimated parameters of the posterior models, an analytical Laplace approximation method is used to compute the integrals involved in the posterior function. Results of the M-H algorithm and the Laplace approximation are compared with conjugate-pair estimates for known prior and likelihood combinations. The M-H algorithm provides better results and hence is used for posterior development of the selected priors for corrosion and cracking.
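A minimal random-walk Metropolis-Hastings sketch, assuming a toy exponential degradation-rate likelihood with a flat prior (not the article's corrosion or cracking models), shows how posterior samples are generated without ever computing the normalizing constant:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=0.5, size=50)      # toy "degradation rate" observations

def log_post(lam):
    """Unnormalized log posterior: exponential likelihood, flat prior on lam > 0."""
    if lam <= 0:
        return -np.inf
    return len(data) * np.log(lam) - lam * data.sum()

# Random-walk Metropolis-Hastings: only posterior *ratios* are needed,
# so the normalizing constant cancels out of the acceptance test.
samples, lam = [], 1.0
for i in range(6000):
    prop = lam + rng.normal(scale=0.3)          # symmetric proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop                              # accept the move
    if i >= 1000:                               # discard transient (burn-in) samples
        samples.append(lam)

posterior_mean = float(np.mean(samples))
```

With a flat prior the exact posterior here is a Gamma distribution, which is the kind of conjugate-pair check the article uses to validate the M-H output.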
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics.
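The SU member of the Johnson family can be sketched by its defining transform of a standard normal variate (the parameter values below are illustrative, not fitted to any radiative transfer moments):

```python
import numpy as np

rng = np.random.default_rng(2)

def johnson_su(a, b, loc=0.0, scale=1.0, size=100_000):
    """Johnson SU variate: loc + scale * sinh((Z - a) / b) with Z ~ N(0, 1)."""
    z = rng.standard_normal(size)
    return loc + scale * np.sinh((z - a) / b)

def sample_skew(v):
    """Sample skewness (third standardized moment)."""
    return float(((v - v.mean()) ** 3).mean() / v.std() ** 3)

skewed = johnson_su(a=-1.0, b=1.5)       # a < 0 yields right (positive) skew
near_normal = johnson_su(a=0.0, b=50.0)  # large b: sinh is nearly linear
```

Varying `a` and `b` sweeps a wide range of skewness and kurtosis, and in the large-`b` limit the distribution approaches a normal, consistent with the abstract's remark about limiting cases.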
Fixation probability in a two-locus intersexual selection model.
Durand, Guillermo; Lessard, Sabin
2016-06-01
We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm.
A joint-probability approach to crash prediction models.
Pei, Xin; Wong, S C; Sze, N N
2011-05-01
Many road safety researchers have used crash prediction models, such as Poisson and negative binomial regression models, to investigate the associations between crash occurrence and explanatory factors. Typically, they have attempted to model the crash frequencies of different severity levels separately. However, this method may suffer from serious correlations between the model estimates for different levels of crash severity. Despite efforts to improve the statistical fit of crash prediction models by modifying the data structure and model estimation method, little work has addressed the appropriate interpretation of the effects of explanatory factors on crash occurrence across different levels of crash severity. In this paper, a joint probability model is developed to integrate the predictions of both crash occurrence and crash severity into a single framework. A full Bayesian method using the Markov chain Monte Carlo (MCMC) approach is applied to estimate the effects of explanatory factors. As an illustration of the appropriateness of the proposed joint probability model, a case study is conducted on crash risk at signalized intersections in Hong Kong. The results of the case study indicate that the proposed model demonstrates a good statistical fit and provides an appropriate analysis of the influences of explanatory factors.
NASA Astrophysics Data System (ADS)
Jaynes, E. T.; Bretthorst, G. Larry
2003-04-01
Foreword; Preface; Part I. Principles and Elementary Applications: 1. Plausible reasoning; 2. The quantitative rules; 3. Elementary sampling theory; 4. Elementary hypothesis testing; 5. Queer uses for probability theory; 6. Elementary parameter estimation; 7. The central, Gaussian or normal distribution; 8. Sufficiency, ancillarity, and all that; 9. Repetitive experiments, probability and frequency; 10. Physics of 'random experiments'; Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle; 12. Ignorance priors and transformation groups; 13. Decision theory: historical background; 14. Simple applications of decision theory; 15. Paradoxes of probability theory; 16. Orthodox methods: historical background; 17. Principles and pathology of orthodox statistics; 18. The Ap distribution and rule of succession; 19. Physical measurements; 20. Model comparison; 21. Outliers and robustness; 22. Introduction to communication theory; References; Appendix A. Other approaches to probability theory; Appendix B. Mathematical formalities and style; Appendix C. Convolutions and cumulants.
A propagation model of computer virus with nonlinear vaccination probability
NASA Astrophysics Data System (ADS)
Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi
2014-01-01
This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
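A minimal sketch of a compartment model with a prevalence-dependent (nonlinear) vaccination rate follows; the saturating form v(I) = v_max * I / (k + I) and all parameter values are assumptions for illustration, not the paper's equations:

```python
# Forward-Euler simulation of susceptible (S), infected (I), vaccinated (V)
# fractions of a computer population.
beta, gamma, v_max, k, dt = 0.5, 0.1, 0.3, 0.2, 0.01
S, I, V = 0.99, 0.01, 0.0
for _ in range(100_000):                 # integrate to t = 1000
    vac = v_max * I / (k + I)            # vaccination rate reacts to prevalence
    dS = -beta * S * I + gamma * I - vac * S
    dI = beta * S * I - gamma * I
    dV = vac * S
    S, I, V = S + dS * dt, I + dI * dt, V + dV * dt
```

With these rates the infection first grows, the reactive vaccination then drains susceptibles, and the system settles toward the virus-free equilibrium, which is the qualitative behavior the stability analysis predicts below the viral threshold.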
An Integrated Modeling Framework for Probable Maximum Precipitation and Flood
NASA Astrophysics Data System (ADS)
Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2015-12-01
With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the effects of climate change on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at the initial and lateral boundaries. A high-resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated against U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF under projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects of PMP and PMF.
On the probability summation model for laser-damage thresholds
NASA Astrophysics Data System (ADS)
Clark, Clifton D.; Buffington, Gavin D.
2016-01-01
This paper explores the probability summation model in an attempt to provide insight into the model's utility and, ultimately, its validity. The model is a statistical description of multiple-pulse (MP) damage trends: it computes the probability of n pulses causing damage from knowledge of the single-pulse dose-response curve. Recently, the model has been used to make a connection between the observed trends of MP damage thresholds with pulse number n for short pulses (<10 μs) and experimental uncertainties, suggesting that the observed trend is an artifact of experimental methods. We consider the correct application of the model in this case. We also apply the model to the spot-size dependence of short-pulse damage thresholds, which has not been done previously. Our results predict that the damage threshold trends with respect to the irradiated area should be similar to the MP damage threshold trends, and the observed spot-size dependence for short pulses seems to display this trend, which cannot be accounted for by thermal models.
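The core of the probability summation model can be sketched directly, assuming an illustrative logistic single-pulse dose-response curve (the ED50 and slope values are placeholders, not measured laser-damage parameters):

```python
def single_pulse_prob(dose, ed50=1.0, slope=8.0):
    """Assumed logistic single-pulse dose-response curve."""
    return 1.0 / (1.0 + (ed50 / dose) ** slope)

def mp_damage_prob(dose, n, ed50=1.0, slope=8.0):
    """Probability summation: n independent chances for the train to cause damage."""
    p1 = single_pulse_prob(dose, ed50, slope)
    return 1.0 - (1.0 - p1) ** n

def mp_threshold(n, ed50=1.0, slope=8.0):
    """Per-pulse ED50 of an n-pulse train: solve 1 - (1 - p)^n = 0.5 for the dose."""
    p = 1.0 - 0.5 ** (1.0 / n)
    return ed50 * (p / (1.0 - p)) ** (1.0 / slope)
```

`mp_threshold(n)` decreases with n, reproducing the qualitative MP trend; the same independent-chances bookkeeping applied to sub-areas of the irradiated spot gives the analogous spot-size trend discussed in the abstract.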
A model to assess dust explosion occurrence probability.
Hassan, Junaid; Khan, Faisal; Amyotte, Paul; Ferdous, Refaul
2014-03-15
Dust handling poses a potential explosion hazard in many industrial facilities. The consequences of a dust explosion are often severe and similar to those of a gas explosion; however, its occurrence is conditional on the presence of five elements: combustible dust, an ignition source, an oxidant, mixing and confinement. Dust explosion researchers have conducted experiments to study the characteristics of these elements and to generate data on explosibility. These experiments are often costly, but the generated data are of significant value for estimating the probability of a dust explosion occurrence. This paper attempts to use existing information (experimental data) to develop a predictive model to assess the probability of a dust explosion occurrence in a given environment. The proposed model considers six key parameters of a dust explosion: dust particle diameter (PD), minimum ignition energy (MIE), minimum explosible concentration (MEC), minimum ignition temperature (MIT), limiting oxygen concentration (LOC) and explosion pressure (Pmax). A conditional probabilistic approach has been developed and embedded in the proposed model to generate a nomograph for assessing dust explosion occurrence. The generated nomograph provides a quick assessment technique to map the occurrence probability of a dust explosion for a given environment defined by the six parameters.
Mortality Probability Model III and Simplified Acute Physiology Score II
Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams
2009-01-01
Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were: APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IV-recal), R² = 0.422; mortality probability model III at zero hours (MPM0 III), R² = 0.279; and simplified acute physiology score (SAPS II), R² = 0.008. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IV-recal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210
Quantum Probability -- A New Direction for Modeling in Cognitive Science
NASA Astrophysics Data System (ADS)
Roy, Sisir
2014-07-01
Human cognition, and how to model it appropriately, remains a puzzling research issue. Cognition depends on how the brain behaves at a particular instant and on how it identifies and responds to a signal among the myriad noises present in the surroundings (external noise) as well as in the neurons themselves (internal noise). It is therefore not surprising to assume that this functionality involves various uncertainties, possibly a mixture of aleatory and epistemic ones. It is also possible that a complicated pathway involving both types of uncertainty in a continuum plays a major role in human cognition. For more than 200 years mathematicians and philosophers have used probability theory to describe human cognition. Recently, in several experiments with human subjects, violations of traditional probability theory have been clearly revealed in many cases. The literature clearly suggests that classical probability theory fails to model human cognition beyond a certain limit. While the Bayesian approach may seem a promising candidate for this problem, the complete success story of Bayesian methodology is yet to be written. The major problem seems to be the presence of epistemic uncertainty and its effect on cognition at any given time. Moreover, stochasticity in the model arises from the unknown path or trajectory (the definite state of mind at each time point) that a person is following. To this end, a generalized version of probability theory borrowing ideas from quantum mechanics may be a plausible approach. A superposition state in quantum theory permits a person to be in an indefinite state at each point of time. Such an indefinite state allows all the states to have the potential to be expressed at each moment. Thus a superposition state appears better able to represent the uncertainty, ambiguity or conflict experienced by a person at any moment, suggesting that mental states follow quantum mechanics during perception and
Modeling spatial variation in avian survival and residency probabilities
Saracco, James F.; Royle, J. Andrew; DeSante, David F.; Gardner, Beth
2010-01-01
The importance of understanding spatial variation in processes driving animal population dynamics is widely recognized. Yet little attention has been paid to spatial modeling of vital rates. Here we describe a hierarchical spatial autoregressive model to provide spatially explicit year-specific estimates of apparent survival (phi) and residency (pi) probabilities from capture-recapture data. We apply the model to data collected on a declining bird species, Wood Thrush (Hylocichla mustelina), as part of a broad-scale bird-banding network, the Monitoring Avian Productivity and Survivorship (MAPS) program. The Wood Thrush analysis showed variability in both phi and pi among years and across space. Spatial heterogeneity in residency probability was particularly striking, suggesting the importance of understanding the role of transients in local populations. We found broad-scale spatial patterning in Wood Thrush phi and pi that lend insight into population trends and can direct conservation and research. The spatial model developed here represents a significant advance over approaches to investigating spatial pattern in vital rates that aggregate data at coarse spatial scales and do not explicitly incorporate spatial information in the model. Further development and application of hierarchical capture-recapture models offers the opportunity to more fully investigate spatiotemporal variation in the processes that drive population changes.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
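A detection-curve sketch in the spirit of the study's logistic regression follows, with made-up coefficients linking detection probability to target density and sampling effort (the functional form and values are illustrative, not the fitted field-experiment model):

```python
import numpy as np

def p_detect(density, effort, b0=-4.0, b1=2.0, b2=1.5):
    """Assumed logistic detection curve in log density and log sampling effort."""
    eta = b0 + b1 * np.log(density) + b2 * np.log(effort)
    return 1.0 / (1.0 + np.exp(-eta))

def p_false_negative(density, effort, k=1):
    """Probability that k independent surveys all miss the species."""
    return (1.0 - p_detect(density, effort)) ** k
```

Curves like this make the trade-off explicit: at low densities, either more effort per survey or repeated surveys is needed to push the false-negative probability below a management threshold.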
Human Inferences about Sequences: A Minimal Transition Probability Model
2016-01-01
The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations include explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge. PMID:28030543
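A minimal sketch of leaky transition-probability inference with a single forgetting parameter follows (the decay value, flat prior counts, and binary stimulus stream are assumptions of this sketch, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(3)
seq = rng.integers(0, 2, size=500)               # fully unpredictable binary stimuli

omega = 0.9                                      # assumed forgetting factor: the
counts = np.ones((2, 2))                         # model's single free parameter;
surprise = []                                    # flat prior counts of 1
for prev, cur in zip(seq[:-1], seq[1:]):
    p = counts[prev, cur] / counts[prev].sum()   # predicted transition probability
    surprise.append(-np.log2(p))                 # surprise at the actual stimulus
    counts *= omega                              # leak: decay old evidence
    counts[prev, cur] += 1                       # count the observed transition

mean_surprise = float(np.mean(surprise))
```

On random input the mean surprise stays near 1 bit, but individual surprises fluctuate as the leaky estimate chases local transition statistics, which is the mechanism behind the sequential effects and repetition/alternation asymmetries described above.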
Bayesian failure probability model sensitivity study. Final report
Not Available
1986-05-30
The Office of the Manager, National Communications System (OMNCS) has developed a system-level approach for estimating the effects of High-Altitude Electromagnetic Pulse (HEMP) on the connectivity of telecommunications networks. This approach incorporates a Bayesian statistical model which estimates the HEMP-induced failure probabilities of telecommunications switches and transmission facilities. The purpose of this analysis is to address the sensitivity of the Bayesian model. This is done by systematically varying two model input parameters--the number of observations, and the equipment failure rates. Throughout the study, a non-informative prior distribution is used. The sensitivity of the Bayesian model to the noninformative prior distribution is investigated from a theoretical mathematical perspective.
Can quantum probability provide a new direction for cognitive modeling?
Pothos, Emmanuel M; Busemeyer, Jerome R
2013-06-01
Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
Benassi, Marcello; Strigari, Lidia
2016-01-01
An overview of radiotherapy (RT) induced normal tissue complication probability (NTCP) models is presented. NTCP models based on empirical and mechanistic approaches that describe a specific radiation induced late effect proposed over time for conventional RT are reviewed with particular emphasis on their basic assumptions and related mathematical translation and their weak and strong points. PMID:28044088
Panic attacks during sleep: a hyperventilation-probability model.
Ley, R
1988-09-01
Panic attacks during sleep are analysed in terms of a hyperventilation theory of panic disorder. The theory assumes that panic attacks during sleep are a manifestation of severe chronic hyperventilation, a dysfunctional state in which renal compensation has led to a relatively steady state of diminished bicarbonate. Reductions in respiration during deep non-REM sleep lead to respiratory acidosis, which triggers hyperventilatory hypocapnia and subsequent panic. A probability model designed to predict when during sleep panic attacks are likely to occur is supported by relevant data from studies of sleep and panic attacks. Implications for treatment are discussed.
Jamming probabilities for a vacancy in the dimer model.
Poghosyan, V S; Priezzhev, V B; Ruelle, P
2008-04-01
Following the recent proposal made by [J. Bouttier, Phys. Rev. E 76, 041140 (2007)], we study analytically the mobility properties of a single vacancy in the close-packed dimer model on the square lattice. Using the spanning web representation, we find determinantal expressions for various observable quantities. In the limiting case of large lattices, they can be reduced to the calculation of Toeplitz determinants and minors thereof. The probability for the vacancy to be strictly jammed and other diffusion characteristics are computed exactly.
Estimating transition probabilities among everglades wetland communities using multistate models
Hotaling, A.S.; Martin, J.; Kitchens, W.M.
2009-01-01
In this study we provide the first estimates of transition probabilities between wet prairie and slough vegetative communities in Water Conservation Area 3A (WCA3A) of the Florida Everglades and identify the hydrologic variables that determine these transitions. These estimates can be used in management models aimed at restoring the proportions of wet prairie and slough habitats to historical levels in the Everglades. To determine what was driving the transitions between wet prairie and slough communities, we evaluated three hypotheses: seasonality, impoundment, and wet and dry year cycles, using likelihood-based multistate models to determine the main driver of wet prairie conversion in WCA3A. The most parsimonious model included the effect of wet and dry year cycles on vegetative community conversions. Several ecologists have noted wet prairie conversion in southern WCA3A, but these are the first estimates of transition probabilities among these community types. In addition to being useful for management of the Everglades, we believe that our framework can be used to address management questions in other ecosystems. © 2009 The Society of Wetland Scientists.
Naive Probability: A Mental Model Theory of Extensional Reasoning.
ERIC Educational Resources Information Center
Johnson-Laird, P. N.; Legrenzi, Paolo; Girotto, Vittorio; Legrenzi, Maria Sonino; Caverni, Jean-Paul
1999-01-01
Outlines a theory of naive probability in which individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an "extensional" way. The theory accommodates reasoning based on numerical premises, and explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem.…
Nahorniak, Matthew; Larsen, David P; Volk, Carol; Jordan, Chris E
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use the resulting datasets to address questions that, in many cases, were not considered during the sampling design phase. Such questions may require model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. These model-based tools may require, to ensure unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite the numerous method-specific tools available to properly account for sampling design, in the analysis of ecological data the sample design is too often ignored and the consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
Nahorniak, Matthew
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use the resulting datasets to address questions that, in many cases, were not considered during the sampling design phase. Such questions may require model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, these model-based tools may require data from simple random samples to ensure unbiased estimation, which is problematic when analyzing data from unequal probability designs. Despite the numerous method-specific tools available to properly account for sampling design, in the analysis of ecological data the sample design is too often ignored and the consequences are not properly considered. We demonstrate here that violating this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability resamples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
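The core of IPB is simple enough to sketch. The following is an illustrative implementation, not the author's code: each sampled unit is resampled with probability proportional to the inverse of its inclusion probability, which approximately undoes the unequal-probability design. All parameter values in the usage example are invented.

```python
import numpy as np

def ipb_resample(data, inclusion_prob, n_boot=1000, rng=None):
    """Inverse probability bootstrap: draw bootstrap resamples in which
    each unit is selected with probability proportional to
    1 / (its sample inclusion probability)."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data)
    w = 1.0 / np.asarray(inclusion_prob, dtype=float)
    p = w / w.sum()                     # normalized resampling weights
    idx = rng.choice(len(data), size=(n_boot, len(data)), replace=True, p=p)
    return data[idx]                    # (n_boot, n) array of resamples

# Illustrative use: high-value units were oversampled (inclusion prob 0.8
# vs 0.2); the naive sample mean is 0.8, but IPB resamples recover the
# design-corrected mean of 0.5.
data = np.array([1.0] * 80 + [0.0] * 20)
pi = np.array([0.8] * 80 + [0.2] * 20)
resamples = ipb_resample(data, pi, n_boot=2000, rng=0)
```

A model-based analysis (regression, quantile regression, etc.) would then be fit to each resample and the estimates aggregated.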
Modeling evolution using the probability of fixation: history and implications.
McCandlish, David M; Stoltzfus, Arlin
2014-09-01
Many models of evolution calculate the rate of evolution by multiplying the rate at which new mutations originate within a population by a probability of fixation. Here we review the historical origins, contemporary applications, and evolutionary implications of these "origin-fixation" models, which are widely used in evolutionary genetics, molecular evolution, and phylogenetics. Origin-fixation models were first introduced in 1969, in association with an emerging view of "molecular" evolution. Early origin-fixation models were used to calculate an instantaneous rate of evolution across a large number of independently evolving loci; in the 1980s and 1990s, a second wave of origin-fixation models emerged to address a sequence of fixation events at a single locus. Although origin-fixation models have been applied to a broad array of problems in contemporary evolutionary research, their rise in popularity has not been accompanied by an increased appreciation of their restrictive assumptions or their distinctive implications. We argue that origin-fixation models constitute a coherent theory of mutation-limited evolution that contrasts sharply with theories of evolution that rely on the presence of standing genetic variation. A major unsolved question in evolutionary biology is the degree to which these models provide an accurate approximation of evolution in natural populations.
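The basic origin-fixation calculation is compact. This sketch uses Kimura's classic fixation probability for a haploid Wright-Fisher population; the review covers many variants, so treat this as one illustrative instance rather than the model.

```python
import math

def fixation_prob(s, N):
    """Kimura's fixation probability for a new mutation at initial
    frequency 1/N in a haploid Wright-Fisher population of size N."""
    if abs(s) < 1e-12:
        return 1.0 / N                  # neutral limit
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * N * s))

def origin_fixation_rate(N, mu, s):
    """Origin-fixation substitution rate: (rate at which new mutations
    originate, N * mu) times (probability each one fixes)."""
    return N * mu * fixation_prob(s, N)
```

In the neutral case this recovers the classic result that the substitution rate equals the mutation rate mu, independent of population size.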
Recent Advances in Model-Assisted Probability of Detection
NASA Technical Reports Server (NTRS)
Thompson, R. Bruce; Brasche, Lisa J.; Lindgren, Eric; Swindell, Paul; Winfree, William P.
2009-01-01
The increased role played by probability of detection (POD) in structural integrity programs, combined with the significant time and cost associated with the purely empirical determination of POD, provides motivation for alternate means to estimate this important metric of NDE techniques. One approach to make the process of POD estimation more efficient is to complement limited empirical experiments with information from physics-based models of the inspection process or controlled laboratory experiments. The Model-Assisted Probability of Detection (MAPOD) Working Group was formed by the Air Force Research Laboratory, the FAA Technical Center, and NASA to explore these possibilities. Since the 2004 inception of the MAPOD Working Group, 11 meetings have been held in conjunction with major NDE conferences. This paper will review the accomplishments of this group, which includes over 90 members from around the world. Included will be a discussion of strategies developed to combine physics-based and empirical understanding, draft protocols that have been developed to guide application of the strategies, and demonstrations that have been or are being carried out in a number of countries. The talk will conclude with a discussion of future directions, which will include documentation of benefits via case studies, development of formal protocols for engineering practice, as well as a number of specific technical issues.
Probability model for estimating colorectal polyp progression rates.
Gopalappa, Chaitra; Aydogan-Cremaschi, Selen; Das, Tapas K; Orcun, Seza
2011-03-01
According to the American Cancer Society, colorectal cancer (CRC) is the third most common cause of cancer-related deaths in the United States. Experts estimate that about 85% of CRCs begin as precancerous polyps, whose early detection and treatment can significantly reduce the risk of CRC. Hence, it is imperative to develop population-wide intervention strategies for early detection of polyps. Development of such strategies requires precise values of population-specific rates of polyp incidence and of polyp progression to the cancerous stage. There has been a considerable amount of research in recent years on developing screening-based CRC intervention strategies. However, these are not supported by population-specific mathematical estimates of progression rates. This paper addresses this need by developing a probability model that estimates polyp progression rates considering race and family history of CRC; note that it is ethically infeasible to obtain polyp progression rates through clinical trials. We use the estimated rates to simulate the progression of polyps in the population of the State of Indiana, and also in the population of a clinical trial conducted in the State of Minnesota, which was obtained from the literature. The results from the simulations are used to validate the probability model.
Probability of detection models for eddy current NDE methods
Rajesh, S.N.
1993-04-30
The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low, and complex defect shapes can be handled very easily. The results are also operator independent.
A count probability cookbook: Spurious effects and the scaling model
NASA Technical Reports Server (NTRS)
Colombi, S.; Bouchet, F. R.; Schaeffer, R.
1995-01-01
We study the errors introduced by finite volume effects and dilution effects in the practical determination of the count probability distribution function P_N(n, l), which is the probability of having N objects in a cell of volume l^3 for a point set of average number density n. Dilution effects are particularly relevant to the so-called sparse sampling strategy. This work is done mainly in the framework of the Balian & Schaeffer scaling model, which assumes that the Q-body correlation functions obey the scaling relation xi_Q(lambda r_1, ..., lambda r_Q) = lambda^(-(Q-1)gamma) xi_Q(r_1, ..., r_Q). We use three synthetic samples as references to perform our analysis: a fractal generated by a Rayleigh-Levy random walk with approximately 3 x 10^4 objects, a sample dominated by a spherical power-law cluster with approximately 3 x 10^4 objects, and a cold dark matter (CDM) universe involving approximately 3 x 10^5 matter particles.
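The count probability distribution itself is straightforward to estimate from a point set by counts-in-cells. Below is a minimal Monte Carlo sketch, not the authors' analysis pipeline; the cell-throwing scheme is the simplest possible one and all parameters in the example are illustrative.

```python
import numpy as np

def count_probability(positions, box_size, cell_size, n_cells=2000, rng=None):
    """Estimate P_N(cell_size): the probability that a randomly placed
    cubic cell of side cell_size contains exactly N objects."""
    rng = np.random.default_rng(rng)
    pos = np.asarray(positions)                         # (N_obj, 3) points
    corners = rng.uniform(0.0, box_size - cell_size, size=(n_cells, 3))
    counts = np.array([
        np.all((pos >= c) & (pos < c + cell_size), axis=1).sum()
        for c in corners
    ])
    return np.bincount(counts) / n_cells   # P[N] = fraction of cells with N objects

# Illustrative use: an unclustered (Poisson) point set, where P_N should
# be close to a Poisson distribution with mean = density * cell volume.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(1000, 3))
P = count_probability(pts, box_size=1.0, cell_size=0.1, n_cells=3000, rng=2)
```

For a clustered set, P_N develops a much heavier tail than the Poisson case, which is what the scaling model parameterizes.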
Normal brain ageing: models and mechanisms
Toescu, Emil C
2005-01-01
Normal ageing is associated with a degree of decline in a number of cognitive functions. Apart from the issues raised by current attempts to extend the lifespan, understanding the mechanisms and the detailed metabolic interactions involved in the process of normal neuronal ageing continues to be a challenge. One model, supported by a significant amount of experimental evidence, views cellular ageing as a metabolic state characterized by altered function of the metabolic triad: mitochondria, reactive oxygen species (ROS) and intracellular Ca2+. The perturbation in the relationship between the members of this metabolic triad generates a state of decreased homeostatic reserve, in which aged neurons can maintain adequate function during normal activity (consistent with the fact that normal ageing is not associated with widespread neuronal loss) but become increasingly vulnerable to the effects of excessive metabolic loads, usually associated with trauma, ischaemia or neurodegenerative processes. This review concentrates on some of the evidence showing altered mitochondrial function with ageing and also discusses some of the functional consequences of such events, such as alterations in mitochondrial Ca2+ homeostasis, ATP production and generation of ROS. PMID:16321805
Low-probability flood risk modeling for New York City.
Aerts, Jeroen C J H; Lin, Ning; Botzen, Wouter; Emanuel, Kerry; de Moel, Hans
2013-05-01
The devastating impact of Hurricane Sandy (2012) showed again that New York City (NYC) is one of the most vulnerable cities in the world to coastal flooding. The low-lying areas in NYC can be flooded by nor'easter storms and North Atlantic hurricanes. The few studies that have estimated potential flood damage for NYC base their damage estimates on only a single, or a few, possible flood events. The objective of this study is to assess the full distribution of hurricane flood risk in NYC. This is done by calculating potential flood damage with a flood damage model that uses many possible storms and surge heights as input. These storms are representative of the low-probability/high-impact flood hazard faced by the city. Exceedance probability-loss curves are constructed under different assumptions about the severity of flood damage. The estimated flood damage to buildings for NYC is between US$59 million and US$129 million per year. The damage caused by a 1/100-year storm surge is in the range of US$2 bn to US$5 bn, while a 1/500-year storm surge causes between US$5 bn and US$11 bn. An analysis of flood risk in each of the five boroughs of NYC finds that Brooklyn and Queens are the most vulnerable to flooding. This study examines several uncertainties in the various steps of the risk analysis, which result in variations in the flood damage estimates. These uncertainties include the interpolation of flood depths, the use of different flood damage curves, and the influence of the spectra of characteristics of the simulated hurricanes.
Modeling pore corrosion in normally open gold-plated copper connectors.
Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert
2008-09-01
The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
Effectiveness of Incorporating Adversary Probability Perception Modeling in Security Games
2015-01-30
security game (SSG) algorithms. Given recent work on human decision-making, we adjust the existing subjective utility function to account for...data from previous security game experiments with human subjects. Our results show the incorporation of probability perceptions into the SUQR can...provide improvements in the ability to predict probabilities of attack in certain games .
A Probability Model of Accuracy in Deception Detection Experiments.
ERIC Educational Resources Information Center
Park, Hee Sun; Levine, Timothy R.
2001-01-01
Extends the recent work on the veracity effect in deception detection. Explains the probabilistic nature of a receiver's accuracy in detecting deception and analyzes a receiver's detection of deception in terms of set theory and conditional probability. Finds that accuracy is shown to be a function of the relevant conditional probability and the…
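The set-theoretic decomposition the authors describe reduces to the law of total probability: overall detection accuracy is a weighted sum of truth accuracy and lie accuracy, weighted by the base rate of truthful messages. A minimal illustration (the parameter values are hypothetical, not the article's data):

```python
def detection_accuracy(p_truth, truth_accuracy, lie_accuracy):
    """Overall accuracy = P(truth) * P(correct | truth)
                        + P(lie)   * P(correct | lie)."""
    return p_truth * truth_accuracy + (1.0 - p_truth) * lie_accuracy

# Judges are typically truth-biased (truth_accuracy > lie_accuracy), so
# raising the proportion of truthful messages raises overall accuracy;
# this is the veracity effect the abstract refers to.
even_split = detection_accuracy(0.50, 0.8, 0.4)
mostly_truths = detection_accuracy(0.75, 0.8, 0.4)
```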
Estimation of State Transition Probabilities: A Neural Network Model
NASA Astrophysics Data System (ADS)
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
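A delta-rule version of such a prediction-error update can be sketched as follows. The two-state setup and learning rate are illustrative, not the paper's exact formulation.

```python
import numpy as np

def update_transition(P, s, s_next, lr=0.1):
    """Update row s of the estimated transition matrix P using the state
    prediction error: (one-hot observed next state) - (current prediction).
    The update is a convex combination, so normalized rows stay normalized."""
    target = np.zeros(P.shape[1])
    target[s_next] = 1.0
    P[s] += lr * (target - P[s])
    return P

# Illustrative use: learn a 2-state Markov chain from observed transitions.
rng = np.random.default_rng(0)
true_P = np.array([[0.7, 0.3], [0.2, 0.8]])
P = np.full((2, 2), 0.5)                 # uninformed initial estimate
s = 0
for _ in range(5000):
    s_next = rng.choice(2, p=true_P[s])
    update_transition(P, s, s_next, lr=0.01)
    s = s_next
```

With a suitably small learning rate the estimate converges to the true transition probabilities, as the abstract's analytical result suggests.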
Aerosol Behavior Log-Normal Distribution Model.
GIESEKE, J. A.
2001-10-22
HAARM3, an acronym for Heterogeneous Aerosol Agglomeration Revised Model 3, is the third program in the HAARM series developed to predict the time-dependent behavior of radioactive aerosols under postulated LMFBR accident conditions. HAARM3 was developed to include mechanisms of aerosol growth and removal which had not been accounted for in the earlier models. In addition, experimental measurements obtained on sodium oxide aerosols have been incorporated in the code. As in HAARM2, containment gas temperature, pressure, and temperature gradients normal to interior surfaces are permitted to vary with time. The effects of reduced density on sodium oxide agglomerate behavior and of nonspherical shape of particles on aerosol behavior mechanisms are taken into account, and aerosol agglomeration due to turbulent air motion is considered. Also included is a capability to calculate aerosol concentration attenuation factors and to restart problems requiring long computing times.
Modeling seismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, L.; Stutzmann, E.; Capdeville, Y.; Ardhuin, F.; Schimmel, M.; Mangeney, A.; Morelli, A.
2012-12-01
Cross-correlation of ambient seismic noise plays a fundamental role in extracting and better understanding the seismic properties of the Earth. Knowledge of the distribution of noise sources and of the theory behind seismic noise generation is of fundamental importance in the study of seismic noise cross-correlation. To improve this knowledge, we model the secondary microseismic noise, i.e. the period band 5-12 s, using normal mode summation, and focus our attention on the noise source distribution varying both in space and in time. Longuet-Higgins (1950) showed that the sources of the secondary microseismic noise are pressure fluctuations generated by the interaction of ocean waves, either in the deep ocean or close to the coast due to coastal reflection. Considering a recent ocean wave model (Ardhuin et al., 2011) that takes into account coastal reflection, we compute the vertical force due to the pressure fluctuation that has to be applied at the surface of the ocean. Noise sources are discretized on a spherical grid with a constant resolution of 50 km and are used to compute synthetic seismograms and spectra by normal mode summation. We show that we retrieve the maximum force amplitude for periods of 6-7 s, which is consistent with the position of the maximum peak in the spectra, and that, at long periods in the secondary microseismic band, i.e. around 12 s, mostly the sources generated by coastal reflection have a strong influence on microseism generation. We also show that the displacement of the ground is amplified in relation to the ocean bathymetry, in agreement with Longuet-Higgins' theory, and that the ocean site amplification can be computed using normal modes. We also investigate the role of attenuation, considering sources at regional scale. We are able to reproduce seasonal variations and to identify the noise sources having the main contribution in the spectra. We obtain a good agreement between synthetic and real
Smits, Iris A M; Timmerman, Marieke E; Stegeman, Alwin
2016-05-01
Maximum likelihood estimation of the linear factor model for continuous items assumes normally distributed item scores. We consider deviations from normality by means of a skew-normally distributed factor model or a quadratic factor model. We show that the item distributions under a skew-normal factor model are equivalent to those under a quadratic model up to third-order moments. The reverse only holds if the quadratic loadings are equal to each other and within certain bounds. We illustrate that observed data which follow any skew-normal factor model can be so well approximated with the quadratic factor model that the models are empirically indistinguishable, and that the reverse does not hold in general. The choice between the two models to account for deviations from normality is illustrated by an empirical example from clinical psychology.
Sabelnikov, Alexander; Zhukov, Vladimir; Kempf, Ruth
2006-05-15
Real-time biosensors are expected to provide significant help in emergency response management should a terrorist attack with biowarfare (BW) agents occur. In spite of recent and spectacular progress in the field of biosensors, several core questions remain unaddressed. For instance, how sensitive should a sensor be? To what levels of infection would different sensitivity limits correspond? How do the probabilities of identification correspond to the probabilities of infection by an agent? In this paper, an attempt is made to address these questions. A simple probability model was developed for calculating the risks of infection of humans exposed to different doses of infectious agents and the probability of their simultaneous real-time detection/identification by a model biosensor and its network. A model biosensor was defined as a single device that includes an aerosol sampler and a device for identification by any known (or conceived) method. A network of biosensors was defined as a set of several single biosensors that operate in a similar way and deal with the same amount of an agent. Neither the particular deployment of sensors within the network, nor the spatial and temporal distribution of agent aerosols due to wind, ventilation, humidity, temperature, etc., was considered by the model. Three model biosensors, based on PCR, antibody/antigen, and MS techniques, were used for simulation. A wide range of their metric parameters, encompassing those of commercially available and laboratory biosensors as well as those of future, theoretically conceivable devices, was used for several hundred simulations. Based on the analysis of the obtained results, it is concluded that small concentrations of aerosolized agents that are still able to pose significant risks of infection, especially for highly infectious agents (e.g. for smallpox those risks are 1, 8, and 37 infected out of 1000 exposed, depending on the viability of the virus preparation), will
Model and test in a fungus of the probability that beneficial mutations survive drift.
Gifford, Danna R; de Visser, J Arjan G M; Wahl, Lindi M
2013-02-23
Determining the probability of fixation of beneficial mutations is critically important for building predictive models of adaptive evolution. Despite considerable theoretical work, models of fixation probability have stood untested for nearly a century. However, recent advances in experimental and theoretical techniques permit the development of models with testable predictions. We developed a new model for the probability of surviving genetic drift, a major component of fixation probability, for novel beneficial mutations in the fungus Aspergillus nidulans, based on the life-history characteristics of its colony growth on a solid surface. We tested the model by measuring the probability of surviving drift in 11 adapted strains introduced into wild-type populations of different densities. We found that the probability of surviving drift increased with mutant invasion fitness, and decreased with wild-type density, as expected. The model accurately predicted the survival probability for the majority of mutants, yielding one of the first direct tests of the extinction probability of beneficial mutations.
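The drift-survival component tested here has a classic branching-process form (Haldane's model), which the paper generalizes to the life history of A. nidulans colonies. The following is a sketch of the classic calculation only, not the authors' life-history model:

```python
import math

def survival_prob(s, max_iter=100000, tol=1e-12):
    """Probability that a single new beneficial mutant lineage escapes
    loss by drift, under a branching process with Poisson(1 + s)
    offspring (Haldane's classic model). Solves the fixed-point equation
    p = 1 - exp(-(1 + s) * p); for small s, p is approximately 2s."""
    p = 0.5                              # any start in (0, 1] works for s > 0
    for _ in range(max_iter):
        p_new = 1.0 - math.exp(-(1.0 + s) * p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p
```

Stronger-benefit mutations survive drift more often, consistent with the abstract's finding that survival probability increases with invasion fitness.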
Modeling the effect of reward amount on probability discounting.
Myerson, Joel; Green, Leonard; Morris, Joshua
2011-03-01
The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received.
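The amount-dependent hyperboloid the abstract describes can be sketched as follows. The functional form follows the abstract (exponent proportional to amount raised to a power); the parameter values h, c, b are invented for illustration, not the fitted ones.

```python
def discounted_value(amount, p, h=1.0, c=0.2, b=0.1):
    """Hyperboloid probability discounting with an amount-dependent
    exponent: V = A / (1 + h * theta)**s_exp, where theta = (1 - p) / p
    is the odds against receiving the reward and s_exp = c * A**b."""
    theta = (1.0 - p) / p
    s_exp = c * amount ** b
    return amount / (1.0 + h * theta) ** s_exp

# Degree of discounting (1 - V/A) increases with amount, as reported:
small_ratio = discounted_value(20.0, 0.5) / 20.0
large_ratio = discounted_value(10000.0, 0.5) / 10000.0
```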
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
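Wald's SPRT stopping rule itself is compact. A generic sketch, not tied to the IRT classification setting of the article (the error rates and increments in the example are illustrative):

```python
import math

def sprt(log_lr_increments, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test. Accumulates the
    log-likelihood ratio of H1 vs H0 observation by observation and
    stops as soon as it crosses either prespecified critical value."""
    upper = math.log((1.0 - beta) / alpha)   # cross above: decide H1
    lower = math.log(beta / (1.0 - alpha))   # cross below: decide H0
    llr, n = 0.0, 0
    for inc in log_lr_increments:
        n += 1
        llr += inc
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", n                     # no decision yet; keep testing
```

In an adaptive classification test, each increment would be the log-likelihood ratio of the examinee's response under the two ability values bracketing the classification bound.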
Modeling seismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, L.; Stutzmann, E.; Capdeville, Y.; Ardhuin, F.; Schimmel, M.; Mangeney, A.; Morelli, A.
2012-04-01
Microseismic noise is the continuous oscillation of the ground in the period band 5-20 s. We observe seasonal variations of this noise that have been stable over the last 20 years. Microseism spectra display two peaks; the strongest, in the period band 5-12 s, corresponds to the so-called secondary microseism. Longuet-Higgins (1950) showed that the corresponding sources are pressure fluctuations generated by the interaction of ocean waves, either in the deep ocean or due to coastal reflection. Considering an ocean wave model that takes into account coastal reflection, we compute the pressure fluctuation as a vertical force applied at the surface of the ocean. The sources are discretized on a spherical grid with a constant grid spacing of 50 km. We then compute the synthetic spectra by normal mode summation in a realistic Earth model. We show that the maximum force amplitude occurs for periods of 6-7 s, which is consistent with the period of the maximum peak of the seismic spectra, and that, for periods around 12 s, only the sources generated by coastal reflection have a strong influence on microseism generation. We also show that the displacement of the ground is amplified in relation to the ocean bathymetry, in agreement with Longuet-Higgins' theory. We obtain a good agreement between synthetic and real seismic spectra in the period band 5-12 s. Modeling seismic noise is a useful tool for selecting particular noise data, such as the strongest peaks, and further investigating the corresponding sources. These noise sources may then be used for tomography.
NASA Astrophysics Data System (ADS)
Baer, P.; Mastrandrea, M.
2006-12-01
Simple probabilistic models that attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question: what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly
ERIC Educational Resources Information Center
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
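For reference, the simplified LKB pipeline (a generalized equivalent uniform dose fed through a probit function) can be sketched as follows. This is a generic textbook form, not the study's implementation, and the parameter values in the example are illustrative rather than the fitted ones.

```python
import math

def gEUD(doses, volumes, n):
    """Generalized equivalent uniform dose from a dose-volume histogram:
    gEUD = ( sum_i v_i * D_i**(1/n) )**n, with v_i fractional volumes.
    Small n models a serial organ; large n a parallel one."""
    total = float(sum(volumes))
    return sum((v / total) * d ** (1.0 / n)
               for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, n, m, td50):
    """LKB complication probability: standard normal CDF of
    t = (gEUD - TD50) / (m * TD50)."""
    t = (gEUD(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative use: a uniform cord dose equal to TD50 gives NTCP = 0.5
# by construction (n, m, td50 values here are placeholders).
p_at_td50 = lkb_ntcp([50.0], [1.0], n=0.05, m=0.17, td50=50.0)
```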
Simplifying Probability Elicitation and Uncertainty Modeling in Bayesian Networks
Paulson, Patrick R; Carroll, Thomas E; Sivaraman, Chitra; Neorr, Peter A; Unwin, Stephen D; Hossain, Shamina S
2011-04-16
In this paper we contribute two methods that simplify the demands of knowledge elicitation for particular types of Bayesian networks. The first method simplifies the task of providing probabilities when the states that a random variable takes can be described by a new, fully ordered state set in which each state implies all the preceding states. The second method leverages the Dempster-Shafer theory of evidence to provide a way for experts to express the degree of ignorance they feel about the estimates being provided.
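One concrete reading of the first method: the expert supplies the probability of reaching at least each ordered state, and exact-state probabilities follow by differencing. The interface below is a hypothetical sketch, not the paper's own elicitation scheme:

```python
def ordered_state_probs(cumulative):
    """Given elicited probabilities that the variable reaches *at least*
    each ordered state (non-increasing, starting at 1.0), recover the
    probability of each exact state by successive differences.
    Hypothetical interface for illustration only."""
    ext = list(cumulative) + [0.0]
    return [ext[i] - ext[i + 1] for i in range(len(cumulative))]

# Reaches state >= 1 with prob 1.0, >= 2 with 0.75, >= 3 with 0.25
probs = ordered_state_probs([1.0, 0.75, 0.25])  # -> [0.25, 0.5, 0.25]
```

Eliciting the non-increasing cumulative sequence is often easier for experts than eliciting a normalized distribution directly, since each answer is a single "at least this bad" judgment.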
NASA Astrophysics Data System (ADS)
Homer, Rachel M.; Law, David W.; Molyneaux, Thomas C. K.
2015-07-01
In previous studies, a 1-D numerical predictive tool to simulate the salt-induced corrosion of port assets in Australia has been developed into a 2-D and 3-D model based on current predictive probabilistic models. These studies use a probability distribution function based on the mean and standard deviation of the parameters for a structure, incorporating surface chloride concentration, diffusion coefficient and cover. In this paper, this previous work is extended through an investigation of the distribution of actual cover by specified cover, element type and method of construction. Significant differences are found in the measured cover within structures, by method of construction, element type and specified cover. The data are not normally distributed, and extreme values, usually low, are found in a number of locations. Elements cast in situ are less likely to meet the specified cover, and their measured cover is more dispersed than that of precast elements. Individual probability distribution functions are available and are tested against the original function. Methods of combining results so that one distribution is available for a structure are formulated and evaluated. The ability to utilise the model for structures where no measurements have been taken is achieved by transposing results based on the specified cover.
NASA Astrophysics Data System (ADS)
Silva, Antonio
2005-03-01
It is well-known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]
Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz
An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
Modeling the Spectra of Dense Hydrogen Plasmas: Beyond Occupation Probability
NASA Astrophysics Data System (ADS)
Gomez, T. A.; Montgomery, M. H.; Nagayama, T.; Kilcrease, D. P.; Winget, D. E.
2017-03-01
Accurately measuring the masses of white dwarf stars is crucial in many astrophysical contexts (e.g., asteroseismology and cosmochronology). These masses are most commonly determined by fitting a model atmosphere to an observed spectrum; this is known as the spectroscopic method. However, for cases in which more than one method may be employed, there are well known discrepancies between masses determined by the spectroscopic method and those determined by astrometric, dynamical, and/or gravitational-redshift methods. In an effort to resolve these discrepancies, we are developing a new model of hydrogen in a dense plasma that is a significant departure from previous models. Experiments at Sandia National Laboratories are currently underway to validate these new models, and we have begun modifications to incorporate these models into stellar-atmosphere codes.
The probability distribution model of air pollution index and its dominants in Kuala Lumpur
NASA Astrophysics Data System (ADS)
AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah
2016-11-01
This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. The measured pollutants or sub-indexes include carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in search of the best-fit distribution for the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This helps to minimize the uncertainty in pollution resource estimates and to improve the assessment phase of planning. Conflicts among the criteria in selecting the best distribution were resolved by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
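The core workflow, fitting several candidate families by maximum likelihood and ranking them by a goodness-of-fit statistic, can be sketched with two of the paper's candidates and the Kolmogorov-Smirnov distance. The synthetic data and the single criterion are illustrative; the study combines five criteria by rank weighting:

```python
import math, random

random.seed(42)
# Synthetic positive "pollutant" data, drawn log-normally for illustration
data = sorted(math.exp(random.gauss(3.0, 0.5)) for _ in range(400))
n = len(data)

def ks_stat(cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and `cdf`."""
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(data))

# Maximum-likelihood fits for two candidate families
mean = sum(data) / n                       # exponential MLE: rate = 1/mean
logs = [math.log(x) for x in data]         # log-normal MLE: moments of logs
mu = sum(logs) / n
sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)

fits = {
    "exponential": lambda x: 1.0 - math.exp(-x / mean),
    "log-normal": lambda x: 0.5 * (1.0 + math.erf(
        (math.log(x) - mu) / (sigma * math.sqrt(2.0)))),
}
scores = {name: ks_stat(cdf) for name, cdf in fits.items()}
best = min(scores, key=scores.get)
```

With more families (Gamma, Weibull) and more criteria, each family gets a rank per criterion, and the weight-of-ranks method picks the family with the best aggregate rank.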
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to interpret probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
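The expected-value weighting function above is direct to evaluate. In the sketch below, k is an arbitrary positive discounting parameter chosen for illustration:

```python
import math

def w(p, k=1.0):
    """Probability weighting function derived from hyperbolic time
    discounting: w(p) = (1 - k*ln p)^(-1), for 0 < p <= 1 and k > 0.
    The value k=1.0 is an illustrative default."""
    return 1.0 / (1.0 - k * math.log(p))
```

Since ln p < 0 for p < 1, the denominator exceeds 1, so w is monotone increasing with w(1) = 1; larger k corresponds to stronger discounting of uncertain outcomes.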
Fales, Roger
2010-10-01
In this work, a method for determining the reliability of dynamic systems is discussed. Using statistical information on system parameters, the goal is to determine the probability of a dynamic system achieving or not achieving frequency domain performance specifications such as low-frequency tracking error and bandwidth. An example system is considered with closed loop control. A performance specification is given and converted into a performance weight transfer function. The example system is found to have a 20% chance of not achieving the given performance specification. An example of a realistic higher-order system model of an electrohydraulic valve with spring feedback and position measurement feedback is also considered. The spring rate and viscous friction are considered as random variables with normal distributions. It was found that nearly 6% of valve systems would not achieve the given frequency domain performance requirement. Uncertainty modeling is also considered. An uncertainty model for the hydraulic valve systems is presented with the same uncertain parameters as in the previous example. However, the uncertainty model was designed such that only 95% of plants would be covered by the uncertainty model. This uncertainty model was applied to the valve control system example in a robust performance test.
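The basic Monte Carlo step, sample the random parameters, evaluate the frequency-domain metric, and count spec violations, can be sketched with a first-order stand-in for the valve dynamics. The distributions, the spec, and the model below are made-up illustrations, not the paper's values:

```python
import random

random.seed(7)

SPEC_BW = 10.0      # required closed-loop bandwidth, rad/s (illustrative)
N = 100_000
failures = 0
for _ in range(N):
    # Spring rate and viscous friction as normal random variables,
    # as in the paper; the means/std devs here are made up.
    k = random.gauss(100.0, 10.0)   # spring rate
    b = random.gauss(8.0, 1.0)      # viscous friction
    # First-order stand-in G(s) = 1/((b/k)s + 1): bandwidth = k/b rad/s
    if k / b < SPEC_BW:
        failures += 1
p_fail = failures / N   # estimated probability of missing the spec
```

The same loop applies unchanged to a higher-order valve model; only the bandwidth evaluation inside the loop changes.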
Jakobi, Annika; Bandurska-Luque, Anna; Stützer, Kristin; Haase, Robert; Löck, Steffen; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniela; and others
2015-08-01
Purpose: The purpose of this study was to determine, by treatment plan comparison along with normal tissue complication probability (NTCP) modeling, whether a subpopulation of patients with head and neck squamous cell carcinoma (HNSCC) could be identified that would gain substantial benefit from proton therapy in terms of NTCP. Methods and Materials: For 45 HNSCC patients, intensity modulated radiation therapy (IMRT) was compared to intensity modulated proton therapy (IMPT). Physical dose distributions were evaluated as well as the resulting NTCP values, using modern models for acute mucositis, xerostomia, aspiration, dysphagia, laryngeal edema, and trismus. Patient subgroups were defined based on primary tumor location. Results: Generally, IMPT reduced the NTCP values while keeping similar target coverage for all patients. Subgroup analyses revealed a higher individual reduction of swallowing-related side effects by IMPT for patients with tumors in the upper head and neck area, whereas the risk reduction of acute mucositis was more pronounced in patients with tumors in the larynx region. More patients with tumors in the upper head and neck area had a reduction in NTCP of more than 10%. Conclusions: Subgrouping can help to identify patients who may benefit more than others from the use of IMPT and, thus, can be a useful tool for a preselection of patients in the clinic where there are limited PT resources. Because the individual benefit differs within a subgroup, the relative merits should additionally be evaluated by individual treatment plan comparisons.
The effect of interruption probability in lattice model of two-lane traffic flow with passing
NASA Astrophysics Data System (ADS)
Peng, Guanghan
2016-11-01
A new lattice model is proposed by taking into account the interruption probability with passing for a two-lane freeway. The effect of the interruption probability with passing on the linear stability condition and the mKdV equation is investigated through linear stability analysis and nonlinear analysis, respectively. Furthermore, numerical simulation is carried out to study traffic phenomena resulting from the interruption probability with passing in the two-lane system. The results show that the interruption probability with passing can improve the stability of traffic flow for a low reaction coefficient, while it can destroy the stability of traffic flow for a high reaction coefficient on a two-lane highway.
Azarang, Leyla; Scheike, Thomas; de Uña-Álvarez, Jacobo
2017-02-26
In this work, we present direct regression analysis for the transition probabilities in the possibly non-Markov progressive illness-death model. The method is based on binomial regression, where the response is the indicator of the occupancy of the given state along time. Randomly weighted score equations that are able to remove the bias due to censoring are introduced. By solving these equations, one can estimate the possibly time-varying regression coefficients, which have an immediate interpretation as covariate effects on the transition probabilities. The performance of the proposed estimator is investigated through simulations. We apply the method to data from the Registry of Systemic Lupus Erythematosus RELESSER, a multicenter registry created by the Spanish Society of Rheumatology. Specifically, we investigate the effect of age at lupus diagnosis, sex, and ethnicity on the probability of damage and death along time. Copyright © 2017 John Wiley & Sons, Ltd.
Physical Model Assisted Probability of Detection in Nondestructive Evaluation
NASA Astrophysics Data System (ADS)
Li, M.; Meeker, W. Q.; Thompson, R. B.
2011-06-01
Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
Transition probabilities matrix of Markov Chain in the fatigue crack growth model
NASA Astrophysics Data System (ADS)
Nopiah, Zulkifli Mohd; Januri, Siti Sarah; Ariffin, Ahmad Kamal; Masseran, Nurulkamal; Abdullah, Shahrum
2016-10-01
The Markov model is one of the reliable methods to describe the growth of a crack from the initial phase until fracture. One of the important subjects in crack growth modeling is obtaining the transition probability matrix of fatigue. Determining the transition probability matrix is important in a Markov chain model for describing the probabilistic behaviour of fatigue life in a structure. In this paper, we obtain the transition probabilities of a Markov chain based on the Paris law equation to describe the physical meaning of the fatigue crack growth problem. The results show that the transition probabilities can be used to calculate the probability of damage in the future, with the possibility of comparing each stage over time.
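Once a transition matrix is in hand, future damage probabilities follow from powers of the matrix. A minimal sketch; the four damage states and the entries of P are illustrative, not values derived from the Paris law:

```python
import numpy as np

# Upper-triangular transition matrix for 4 damage states; state 3
# (fracture) is absorbing. Entries are illustrative placeholders.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],
])
state = np.array([1.0, 0.0, 0.0, 0.0])   # crack starts in the initial state

# State-occupancy probabilities after n load steps: state @ P^n
after_50 = state @ np.linalg.matrix_power(P, 50)
p_fracture = after_50[-1]   # probability of having fractured by step 50
```

Evaluating `state @ P**n` for a range of n gives the probability of each damage stage as a function of loading history, which is what enables the stage-by-stage comparison the abstract mentions.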
Probability models in the analysis of radiation-related complications: utility and limitations
Potish, R.A.; Boen, J.; Jones, T.K. Jr.; Levitt, S.H.
1981-07-01
In order to predict radiation-related enteric damage, 92 women who had received identical radiation doses for cancer of the ovary from 1970 through 1977 were studied. A logistic model was used to predict the probability of complication as a function of the number of laparotomies, hypertension, and thin physique. The utility and limitations of such probability models are presented.
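A logistic model of this kind maps a linear combination of the three predictors through the logistic function. The sketch below uses placeholder coefficients, not the values fitted in the study:

```python
import math

def complication_prob(n_laparotomies, hypertension, thin_physique,
                      b0=-4.0, b=(0.9, 1.1, 0.8)):
    """Logistic probability of an enteric complication given the three
    predictors (hypertension and thin physique coded 0/1). The intercept
    and coefficients are illustrative placeholders."""
    z = b0 + b[0] * n_laparotomies + b[1] * hypertension + b[2] * thin_physique
    return 1.0 / (1.0 + math.exp(-z))

p_low = complication_prob(0, 0, 0)    # no risk factors
p_high = complication_prob(3, 1, 1)   # three laparotomies, both factors
```

The fitted coefficients are directly interpretable: each unit increase in a predictor multiplies the complication odds by exp(b_i).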
Probabilistic Independence Networks for Hidden Markov Probability Models
NASA Technical Reports Server (NTRS)
Smyth, Padhraic; Heckerman, David; Jordan, Michael I
1996-01-01
In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs.
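The forward half of the F-B algorithm, one of the HMM inference routines the paper situates within the PIN framework, can be written in a few lines. The transition, emission, and initial distributions below are toy values for illustration:

```python
import numpy as np

def forward(obs, A, B, pi):
    """Forward algorithm: returns P(observation sequence) for an HMM.
    A[i, j]: transition prob i->j; B[i, k]: prob of symbol k in state i;
    pi: initial state distribution."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then condition on o
    return alpha.sum()

# Toy two-state, two-symbol HMM
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
p = forward([0, 1, 0], A, B, pi)
```

A quick correctness check is that the forward probabilities over all possible observation sequences of a fixed length sum to one, i.e., the recursion marginalizes properly.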
Inferring Tree Causal Models of Cancer Progression with Probability Raising
Mauri, Giancarlo; Antoniotti, Marco; Mishra, Bud
2014-01-01
Existing techniques to reconstruct tree models of progression for accumulative processes, such as cancer, seek to estimate causation by combining correlation and a frequentist notion of temporal priority. In this paper, we define a novel theoretical framework called CAPRESE (CAncer PRogression Extraction with Single Edges) to reconstruct such models based on the notion of probabilistic causation defined by Suppes. We consider a general reconstruction setting complicated by the presence of noise in the data due to biological variation, as well as experimental or measurement errors. To improve tolerance to noise we define and use a shrinkage-like estimator. We prove the correctness of our algorithm by showing asymptotic convergence to the correct tree under mild constraints on the level of noise. Moreover, on synthetic data, we show that our approach outperforms the state-of-the-art, that it is efficient even with a relatively small number of samples and that its performance quickly converges to its asymptote as the number of samples increases. For real cancer datasets obtained with different technologies, we highlight biologically significant differences in the progressions inferred with respect to other competing techniques and we also show how to validate conjectured biological relations with progression models. PMID:25299648
A probability model for the distribution of the number of migrants at the household level.
Yadava, K N; Singh, R B
1991-01-01
A probability model to characterize the pattern of the total number of migrants from a household has been developed, improving on earlier models that had several limitations. The assumptions of the proposed model were that migrants from a household occur in clusters, which may occur rarely, and that the risk of migration occurring in a cluster varies from household to household. Data from 3514 households in semiurban, remote, or growth-center villages in India were applied to the proposed probability model. The Rural Development and Population Growth--A Sample Survey 1978 defined a household as a group of people who normally live together and eat from a shared kitchen. The people do not necessarily reside in the village, however, but may work elsewhere and send remittances; they consider themselves to be part of the household. Observed and expected frequencies were in agreement only for those in the high social status group (p > .05). The mean number of clusters per household was greater for remote villages (.26) than for growth-center and semiurban villages (.22 and .13, respectively). On the other hand, the mean number of migrants per cluster was smaller for remote villages (2.1) than for growth-center and semiurban villages (2.17 and 2.62, respectively). These results may indicate that men migrate alone in different clusters from remote villages, while men from growth-center and semiurban villages migrate with their families in a smaller number of clusters. Men from growth-center and semiurban villages tended to be well educated and professionals. The mean number of migrants per household was higher for remote villages (.56) than for growth-center (.47) and semiurban (.33) villages. Commuting to work accounted for this difference.
An improved cellular automaton traffic model considering gap-dependent delay probability
NASA Astrophysics Data System (ADS)
Li, Qi-Lang; Wang, Bing-Hong; Liu, Mu-Ren
2011-04-01
In this paper, the delay probability of the original Nagel and Schreckenberg (NS) model is modified to simulate one-lane traffic flow: the delay probability of a vehicle depends on its corresponding gap. According to simulation results, the structure of the fundamental diagram of the new model depends sensitively on the values of the delay probability. In comparison with the NS model, the fundamental diagram of the new model is more consistent with results measured in real traffic, and the velocity distributions of the new model are more reasonable.
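A compact NS-style ring simulation with a gap-dependent delay probability is sketched below. The specific delay function and all parameters are illustrative choices, not the calibration used in the paper:

```python
import random

random.seed(3)
L, N, VMAX, STEPS = 100, 20, 5, 200   # ring length, cars, max speed, updates

def delay_prob(gap):
    # Gap-dependent randomization: vehicles with small gaps brake more
    # often. This particular form is an illustrative assumption.
    return 0.5 if gap <= 2 else 0.1

pos = sorted(random.sample(range(L), N))   # cyclic road order = index order
vel = [0] * N
for _ in range(STEPS):
    gaps = [(pos[(i + 1) % N] - pos[i] - 1) % L for i in range(N)]
    for i in range(N):
        vel[i] = min(vel[i] + 1, VMAX, gaps[i])   # accelerate, avoid collision
        if vel[i] > 0 and random.random() < delay_prob(gaps[i]):
            vel[i] -= 1                            # randomized braking
    pos = [(pos[i] + vel[i]) % L for i in range(N)]

flow = sum(vel) / L   # flux of the final configuration
```

Sweeping the density N/L and averaging the flux over many steps traces out the fundamental diagram whose shape, per the abstract, is sensitive to the delay-probability values.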
Skew-normal antedependence models for skewed longitudinal data.
Chang, Shu-Ching; Zimmerman, Dale L
2016-06-01
Antedependence models, also known as transition models, have proven to be useful for longitudinal data exhibiting serial correlation, especially when the variances and/or same-lag correlations are time-varying. Statistical inference procedures associated with normal antedependence models are well-developed and have many nice properties, but they are not appropriate for longitudinal data that exhibit considerable skewness. We propose two direct extensions of normal antedependence models to skew-normal antedependence models. The first is obtained by imposing antedependence on a multivariate skew-normal distribution, and the second is a sequential autoregressive model with skew-normal innovations. For both models, necessary and sufficient conditions for [Formula: see text]th-order antedependence are established, and likelihood-based estimation and testing procedures for models satisfying those conditions are developed. The procedures are applied to simulated data and to real data from a study of cattle growth.
Probability distributions of molecular observables computed from Markov models.
Noé, Frank
2008-06-28
Molecular dynamics (MD) simulations can be used to estimate transition rates between conformational substates of the simulated molecule. Such an estimation is associated with statistical uncertainty, which depends on the number of observed transitions. In turn, it induces uncertainties in any property computed from the simulation, such as free energy differences or the time scales involved in the system's kinetics. Assessing these uncertainties is essential for testing the reliability of a given observation and also to plan further simulations in such a way that the most serious uncertainties will be reduced with minimal effort. Here, a rigorous statistical method is proposed to approximate the complete statistical distribution of any observable of an MD simulation provided that one can identify conformational substates such that the transition process between them may be modeled with a memoryless jump process, i.e., Markov or master equation dynamics. The method is based on sampling the statistical distribution of Markov transition matrices that is induced by the observed transition events. It allows physically meaningful constraints to be included, such as sampling only matrices that fulfill detailed balance, or matrices that produce a predefined equilibrium distribution of states. The method is illustrated on μs MD simulations of a hexapeptide for which the distributions and uncertainties of the free energy differences between conformations, the transition matrix elements, and the transition matrix eigenvalues are estimated. It is found that both constraints, detailed balance and predefined equilibrium distribution, can significantly reduce the uncertainty of some observables.
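The unconstrained version of this idea, sampling transition matrices from the posterior induced by observed jump counts and propagating each sample to an observable, can be sketched with conjugate Dirichlet rows. The counts are toy values, and the detailed-balance constraint discussed in the paper would require a more elaborate sampler:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy observed transition counts between two conformational substates
counts = np.array([[90, 10],
                   [20, 80]])

# Posterior over row-stochastic matrices: independent Dirichlet rows,
# conjugate to the multinomial jump statistics (uniform prior via +1).
samples = []
for _ in range(2000):
    T = np.vstack([rng.dirichlet(row + 1) for row in counts])
    # Observable: second eigenvalue, which sets the slowest relaxation
    lam2 = np.sort(np.linalg.eigvals(T).real)[0]
    samples.append(lam2)
lo, hi = np.percentile(samples, [2.5, 97.5])  # uncertainty interval
```

Any other observable (free energy differences from the stationary distribution, implied timescales) can be computed per sampled matrix in the same loop, yielding its full posterior distribution rather than a point estimate.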
NASA Astrophysics Data System (ADS)
Neupauer, R. M.; Wilson, J. L.
2005-02-01
Backward location and travel time probability density functions characterize the possible former locations (or the source location) of contamination that is observed in an aquifer. For an observed contaminant particle, the backward location probability density function (PDF) describes its position at a fixed time prior to sampling, and the backward travel time probability density function describes the amount of time required for the particle to travel to the sampling location from a fixed upgradient position. The backward probability model has been developed for a single observation of contamination (e.g., Neupauer and Wilson, 1999). In practical situations, contamination is sampled at multiple locations and times, and these additional data provide information that can be used to better characterize the former position of contamination. Through Bayes' theorem, we combine the individual PDFs for each observation to obtain a PDF for multiple observations that describes the possible source locations or release times of all observed contaminant particles, assuming they originated from the same instantaneous point source. We show that the multiple-observation probability density function is the normalized product of the single-observation PDFs. The additional information available from multiple observations reduces the variances of the source location and travel time probability density functions and improves the characterization of the contamination source. We apply the backward probability model to a trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR). We use four TCE samples distributed throughout the plume to obtain single-observation and multiple-observation location and travel time PDFs in three dimensions. These PDFs provide information about the possible sources of contamination. Under the assumptions that the existing MMR model is properly calibrated and the conceptual model is correct, the results confirm the two suspected sources of
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
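The unknown-last-event case amounts to averaging the conditional renewal probability over the equilibrium distribution of elapsed time, f_eq(t) = S(t)/μ, where S is the recurrence-interval survival function and μ the mean interval. A numerical sketch with a log-normal renewal model and illustrative parameters (not the paper's specific values):

```python
import math

# Log-normal recurrence intervals with mean 100 yr (illustrative)
sigma = 0.5
mu_ln = math.log(100.0) - 0.5 * sigma ** 2   # chosen so the mean is 100 yr
mean_ri = math.exp(mu_ln + 0.5 * sigma ** 2)

def survival(t):
    """P(recurrence interval > t) for the log-normal model."""
    if t <= 0:
        return 1.0
    z = (math.log(t) - mu_ln) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

T = 30.0   # forecast duration: 30% of the mean recurrence interval

# Unknown date of last event: average over equilibrium elapsed time,
# P = (1/mean) * integral of [S(t) - S(t+T)] dt  (numerical quadrature)
dt, upper = 0.05, 2000.0
p_renewal = sum(survival(t) - survival(t + T)
                for t in (i * dt for i in range(int(upper / dt)))) * dt / mean_ri
p_poisson = 1.0 - math.exp(-T / mean_ri)     # time-independent comparison
```

Consistent with the abstract's claim, this forecast duration (well above 20% of the mean interval) yields a renewal probability exceeding the Poisson approximation by more than 10%.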
Cold and hot cognition: quantum probability theory and realistic psychological modeling.
Corr, Philip J
2013-06-01
Typically, human decision making is emotionally "hot" and does not conform to "cold" classical probability (CP) theory. As quantum probability (QP) theory emphasises order, context, superimposition states, and nonlinear dynamic effects, one of its major strengths may be its power to unify formal modeling and realistic psychological theory (e.g., information uncertainty, anxiety, and indecision, as seen in the Prisoner's Dilemma).
ERIC Educational Resources Information Center
Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan
2014-01-01
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a -Dst > 880 nT (greater than Carrington) but a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment provided insight into the propagation of parameter uncertainty due to limited observation data. To examine the capacity of the developed probability-based DRASTIC model to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, which indicate anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and of characterizing parameter uncertainty via the probability estimation processes.
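The two parameter classification methods named in the abstract (maximum estimation probability and expected value) can be illustrated with a toy set of indicator-kriging outputs; the rating scale and probability values below are hypothetical, not from the study.

```python
import numpy as np

# Hypothetical indicator-kriging output at one grid cell: estimated
# probabilities that a DRASTIC parameter (e.g., depth to water) falls
# in each rating category.
ratings = np.array([1, 3, 5, 7, 9], dtype=float)    # category ratings
probs   = np.array([0.05, 0.10, 0.45, 0.30, 0.10])  # must sum to 1

# Method 1: pick the category with maximum estimation probability.
max_prob_rating = ratings[np.argmax(probs)]

# Method 2: the expected (probability-weighted) rating.
expected_rating = float(probs @ ratings)

print(max_prob_rating, expected_rating)  # 5.0 and 5.6
```

The expected-value rating retains information from the whole probability distribution, which is how the probability-based model propagates parameter uncertainty into the vulnerability index.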
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2000-01-01
We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds, in which most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g., thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
Simpson, Daniel R.; Song, William Y.; Moiseenko, Vitali; Rose, Brent S.; Yashar, Catheryn M.; Mundt, Arno J.; Mell, Loren K.
2012-05-01
Purpose: To test the hypothesis that increased bowel radiation dose is associated with acute gastrointestinal (GI) toxicity in cervical cancer patients undergoing concurrent chemotherapy and intensity-modulated radiation therapy (IMRT), using a previously derived normal tissue complication probability (NTCP) model. Methods: Fifty patients with Stage I-III cervical cancer undergoing IMRT and concurrent weekly cisplatin were analyzed. Acute GI toxicity was graded using the Radiation Therapy Oncology Group scale, excluding upper GI events. A logistic model was used to test correlations between acute GI toxicity and bowel dosimetric parameters. The primary objective was to test the association between Grade {>=}2 GI toxicity and the volume of bowel receiving {>=}45 Gy (V{sub 45}) using the logistic model. Results: Twenty-three patients (46%) had Grade {>=}2 GI toxicity. The mean (SD) V{sub 45} was 143 mL (99). The mean V{sub 45} values for patients with and without Grade {>=}2 GI toxicity were 176 vs. 115 mL, respectively. Twenty patients (40%) had V{sub 45} >150 mL. The proportion of patients with Grade {>=}2 GI toxicity with and without V{sub 45} >150 mL was 65% vs. 33% (p = 0.03). Logistic model parameter estimates V50 and {gamma} were 161 mL (95% confidence interval [CI] 60-399) and 0.31 (95% CI 0.04-0.63), respectively. On multivariable logistic regression, increased V{sub 45} was associated with an increased odds of Grade {>=}2 GI toxicity (odds ratio 2.19 per 100 mL, 95% CI 1.04-4.63, p = 0.04). Conclusions: Our results support the hypothesis that increasing bowel V{sub 45} is correlated with increased GI toxicity in cervical cancer patients undergoing IMRT and concurrent cisplatin. Reducing bowel V{sub 45} could reduce the risk of Grade {>=}2 GI toxicity by approximately 50% per 100 mL of bowel spared.
Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization
NASA Astrophysics Data System (ADS)
Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad
2017-02-01
Malay Twitter messages exhibit substantial deviation from the standard language. Malay Tweets are currently in wide use among Twitter users, especially in the Malay archipelago. It is therefore important to build a normalization system that can translate Malay Tweet language into standard Malay. Most research in natural language processing has focused on normalizing English Twitter messages, while few studies have addressed the normalization of Malay Tweets. This paper proposes an approach to normalizing Malay Twitter messages based on hybrid dictionary modelling methods. The approach normalizes noisy Malay Twitter messages, such as colloquial language, novel words, and interjections, into standard Malay. The research uses a language model and an N-gram model.
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2002-01-01
Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
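A minimal sketch of the removal-model likelihood, assuming an exponential time-to-first-detection and the 3/2/5-min interval split described above. The counts are illustrative, not the Park data, and the single-rate parameterization is a simplification of the full model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Counts of birds first detected in the 0-3, 3-5, and 5-10 min intervals
# (illustrative numbers only).
counts = np.array([60, 15, 25])
bounds = np.array([0.0, 3.0, 5.0, 10.0])

def neg_log_lik(lam):
    # Cell probabilities for an exponential time-to-first-detection model,
    # conditioned on detection within the 10-min count.
    surv = np.exp(-lam * bounds)
    cell = surv[:-1] - surv[1:]
    cond = cell / (1.0 - surv[-1])
    return -np.sum(counts * np.log(cond))

res = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded")
lam_hat = res.x
p_detect = 1.0 - np.exp(-lam_hat * bounds[-1])  # overall detection probability
print(f"lambda = {lam_hat:.3f}/min, detection probability = {p_detect:.2f}")
```

Species-, time-, or observer-specific detection probabilities, as compared in the paper, would correspond to fitting separate rates and comparing the resulting models with selection criteria such as AIC.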
Study on Effects of the Stochastic Delay Probability for 1d CA Model of Traffic Flow
NASA Astrophysics Data System (ADS)
Xue, Yu; Chen, Yan-Hong; Kong, Ling-Jiang
Considering the effects of different factors on the stochastic delay probability, the delay probability has been classified into three cases. The first case, corresponding to the brake state, has a large delay probability when the anticipated velocity is larger than the gap between successive cars. The second, corresponding to the follow-the-leader rule, has an intermediate delay probability when the anticipated velocity equals the gap. The third case, acceleration, has the minimum delay probability. The fundamental diagram obtained by numerical simulation shows properties different from those of the NaSch model: there exist two distinct regions, corresponding to the coexistence state and the jamming state, respectively.
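A minimal sketch of one update step of a NaSch-type cellular automaton with the three-case delay probability described above. The specific probability values, road length, and car placement are assumptions of this sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative delay probabilities for the brake, follow-the-leader,
# and acceleration cases (largest to smallest).
P_BRAKE, P_FOLLOW, P_ACC = 0.5, 0.3, 0.05

def step(pos, vel, v_max, road_len):
    """One parallel update of a circular single-lane road."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len
    new_vel = np.minimum(vel + 1, v_max)          # anticipated velocity
    # Case-dependent stochastic delay probability:
    p = np.where(new_vel > gaps, P_BRAKE,
        np.where(new_vel == gaps, P_FOLLOW, P_ACC))
    new_vel = np.minimum(new_vel, gaps)           # safety braking
    delayed = rng.random(len(pos)) < p
    new_vel = np.maximum(new_vel - delayed, 0)    # randomization step
    return (pos + new_vel) % road_len, new_vel

pos = np.array([0, 3, 7, 12, 20])
vel = np.zeros(5, dtype=int)
for _ in range(100):
    pos, vel = step(pos, vel, v_max=5, road_len=30)
print("mean velocity:", vel.mean())
```

Sweeping the car density and recording the mean flow would trace out the fundamental diagram the abstract compares against the NaSch model.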
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Fuss, Ian G; Navarro, Daniel J
2013-10-01
In recent years quantum probability models have been used to explain many aspects of human decision making, and as such quantum models have been considered a viable alternative to Bayesian models based on classical probability. One criticism that is often leveled at both kinds of models is that they lack a clear interpretation in terms of psychological mechanisms. In this paper we discuss the mechanistic underpinnings of a quantum walk model of human decision making and response time. The quantum walk model is compared to standard sequential sampling models, and the architectural assumptions of both are considered. In particular, we show that the quantum model has a natural interpretation in terms of a cognitive architecture that is both massively parallel and involves both co-operative (excitatory) and competitive (inhibitory) interactions between units. Additionally, we introduce a family of models that includes aspects of the classical and quantum walk models.
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
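The zero-inflated Beta density at the heart of the model can be sketched as a mixture of a point mass at zero and a Beta density on (0, 1); the parameter values and sample values below are hypothetical, and the full Bayesian mixed model is not reproduced here.

```python
import numpy as np
from scipy import stats

def zi_beta_logpdf(y, pi0, a, b):
    """Log-density of a zero-inflated Beta: mass pi0 at zero,
    (1 - pi0) * Beta(a, b) on (0, 1)."""
    y = np.asarray(y, dtype=float)
    return np.where(
        y == 0.0,
        np.log(pi0),
        np.log1p(-pi0) + stats.beta.logpdf(np.clip(y, 1e-12, 1 - 1e-12), a, b),
    )

# Hypothetical scaled log10(Pc) values; exact zeros represent the
# "effective zero" collision probabilities discussed in the abstract.
y = np.array([0.0, 0.0, 0.12, 0.35, 0.61, 0.02])
print(zi_beta_logpdf(y, pi0=0.3, a=2.0, b=5.0))
```

In the paper's framework, `pi0`, `a`, and `b` would be tied to event-level random effects and given priors rather than fixed as here.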
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Nugmanov, I. S.
2016-08-01
The problem of probability density function estimation for a random process is one of the most common in practice, and several methods exist to solve it. The presented laboratory work uses methods of mathematical statistics to detect patterns in realizations of a random process. On the basis of ergodic theory, we construct an algorithm for estimating the univariate probability density function of a random process. Correlation analysis of the realizations is applied to estimate the necessary sample size and observation time. Hypothesis testing for two probability distributions (normal and Cauchy) is performed on the experimental data using the χ2 criterion. To facilitate understanding and clarity, the ELVIS II platform and the LabVIEW software package are used to perform the necessary calculations, display the results of the experiment and, most importantly, control the experiment. At the same time, students are introduced to the LabVIEW software package and its capabilities.
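The χ2 goodness-of-fit step of the laboratory work might look like the following (NumPy/SciPy rather than LabVIEW, with a synthetic realization standing in for the measured one):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=0.5, size=2000)   # stand-in for the recorded realization

# Bin the sample and compare observed counts with those expected
# under a fitted normal distribution, using the chi-square criterion.
edges = np.quantile(x, np.linspace(0, 1, 11))   # 10 equiprobable bins
observed, _ = np.histogram(x, bins=edges)
mu, sigma = x.mean(), x.std(ddof=1)
cdf = stats.norm.cdf(edges, mu, sigma)
expected = len(x) * np.diff(cdf)

chi2 = np.sum((observed - expected) ** 2 / expected)
dof = len(observed) - 1 - 2                     # bins - 1 - fitted parameters
p_value = stats.chi2.sf(chi2, dof)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```

Repeating the test with `stats.cauchy.cdf` in place of `stats.norm.cdf` gives the second hypothesis test mentioned in the abstract.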
Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice
NASA Astrophysics Data System (ADS)
Chen, Haiyan; Zhang, Fuji
2013-08-01
In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give the exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with certain given boundary condition, which was found from numerical evidence by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990)], 10.1051/jphys:0199000510110107700 but without a proof.
NASA Astrophysics Data System (ADS)
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
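A minimal sketch of the approach under a strong simplifying assumption (a Gaussian density with a linearly drifting mean and constant variance, rather than the paper's model): estimate the nonstationary parameters by maximum likelihood, then extrapolate them beyond the data window to forecast the future density.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 300)                    # observation times (scaled)
x = 5.0 - 2.0 * t + rng.normal(0.0, 0.4, t.size) # drifting synthetic series

# Nonstationary Gaussian model: mean a + b*t, constant log-std c.
def neg_log_lik(theta):
    a, b, c = theta
    mu, sigma = a + b * t, np.exp(c)
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + t.size * c

res = minimize(neg_log_lik, x0=np.array([0.0, 0.0, 0.0]))
a, b, c = res.x

# Extrapolate the fitted density parameters beyond the data window.
t_future = 1.5
mu_future, sigma_future = a + b * t_future, np.exp(c)
print(f"forecast density at t={t_future}: N({mu_future:.2f}, {sigma_future:.2f}^2)")
```

The paper additionally propagates parameter uncertainty into the forecast density, which this sketch omits.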
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 earthquakes in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the probability of large earthquakes in the SFBR.
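Under a Poisson assumption, a 30-yr probability and an annual rate are interchangeable; the sketch below shows the conversion applied to the 0.42 figure quoted above (the Poisson framing is ours, used here only to illustrate the arithmetic).

```python
import math

# Convert between an annual rate of M >= 6.7 earthquakes and the
# probability of at least one event in a 30-yr window (Poisson assumption).
def prob_at_least_one(rate_per_year, years=30.0):
    return 1.0 - math.exp(-rate_per_year * years)

def rate_from_prob(prob, years=30.0):
    return -math.log(1.0 - prob) / years

# The empirical model's 30-yr probability of 0.42 implies this annual rate:
lam = rate_from_prob(0.42)
print(f"implied rate: {lam:.4f} events/yr")
print(f"round trip:   {prob_at_least_one(lam):.2f}")  # recovers 0.42
```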
1998-05-01
Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B. … The study models training site vegetation coverage based on past coverage. Approach: A literature survey was conducted to identify artificial neural network analysis techniques applicable for …
NASA Astrophysics Data System (ADS)
Pu, H. C.; Lin, C. H.
2016-05-01
To investigate the seismic behavior of crustal deformation, we deployed a dense seismic network in the Hsinchu area of northwestern Taiwan between 2004 and 2006. Based on the abundant local micro-earthquakes recorded by this network, we successfully determined 274 focal mechanisms among ~1300 seismic events. Interestingly, the dominant energy of strike-slip and normal faulting mechanisms repeatedly alternated within the two years. The strike-slip and normal faulting earthquakes were accompanied, respectively, by surface slipping along N60°E and by uplifting, as obtained from continuous GPS data. These phenomena probably resulted from slow uplifts in the mid-crust beneath the northwestern Taiwan area. When the deep slow uplift was active below 10 km in depth along either the boundary fault or a blind fault, the push of the uplifting material would simultaneously produce normal faulting earthquakes at shallow depths (0-10 km) and slight surface uplifting. When the deep slow uplift stopped, strike-slip faulting earthquakes would instead dominate, as usual, due to the strong horizontal plate convergence in Taiwan. Since normal faulting earthquakes repeatedly dominated every 6 or 7 months between 2004 and 2006, we conclude that slow slip events in the mid-crust frequently released accumulated tectonic stress in the Hsinchu area.
General properties of different models used to predict normal tissue complications due to radiation
Kuperman, V. Y.
2008-11-15
In the current study the author analyzes general properties of three different models used to predict normal tissue complications due to radiation: (1) Surviving fraction of normal cells in the framework of the linear quadratic (LQ) equation for cell kill, (2) the Lyman-Kutcher-Burman (LKB) model for normal tissue complication probability (NTCP), and (3) generalized equivalent uniform dose (gEUD). For all considered cases the author assumes fixed average dose to an organ of interest. The author's goal is to establish whether maximizing dose uniformity in the irradiated normal tissues is radiobiologically beneficial. Assuming that NTCP increases with increasing overall cell kill, it is shown that NTCP in the LQ model is maximized for uniform dose. Conversely, NTCP in the LKB and gEUD models is always smaller for a uniform dose to a normal organ than that for a spatially varying dose if parameter n in these models is small (i.e., n<1). The derived conflicting properties of the considered models indicate the need for more studies before these models can be utilized clinically for plan evaluation and/or optimization of dose distributions. It is suggested that partial-volume irradiation can be used to establish the validity of the considered models.
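A sketch of the gEUD and LKB NTCP formulas discussed above, evaluated for a uniform and a nonuniform dose distribution with the same mean dose; the model parameters are illustrative, not from the paper. With n < 1, the nonuniform distribution yields the higher NTCP, consistent with the property derived in the abstract.

```python
import numpy as np
from scipy.stats import norm

def gEUD(doses, n):
    """Generalized equivalent uniform dose with volume parameter n (a = 1/n)."""
    a = 1.0 / n
    return np.mean(np.asarray(doses, dtype=float) ** a) ** (1.0 / a)

def lkb_ntcp(doses, n, TD50, m):
    """Lyman-Kutcher-Burman NTCP: probit function of the gEUD."""
    t = (gEUD(doses, n) - TD50) / (m * TD50)
    return norm.cdf(t)

# Two dose distributions with the same mean organ dose (30 Gy):
uniform = np.full(100, 30.0)
nonuniform = np.concatenate([np.full(50, 10.0), np.full(50, 50.0)])

# Illustrative parameters: small n makes gEUD emphasize hot spots.
for name, d in [("uniform", uniform), ("nonuniform", nonuniform)]:
    print(name, round(lkb_ntcp(d, n=0.1, TD50=45.0, m=0.15), 4))
```

Repeating the comparison with an LQ cell-kill criterion would reverse the ranking, which is the conflict between models that the study highlights.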
NASA Astrophysics Data System (ADS)
Matias-Peralta, Hazel Monica; Ghodsi, Alireza; Shitan, Mahendran; Yusoff, Fatimah Md.
Copepods are the most abundant microcrustaceans in marine waters and are the major food resource for many commercial fish species. In addition, changes in the distribution and population composition of copepods may serve as an indicator of global climate change. Therefore, it is important to model copepod distribution in different ecosystems. Copepod samples were collected from three different ecosystems (a seagrass area, a cage aquaculture area, and coastal waters off a shrimp aquaculture farm) along the coastal waters of the Malacca Straits over a one-year period. In this study the major statistical analysis consisted of fitting different probability models. This paper highlights the fitting of probability distributions and discusses the adequacy of the fitted models. These fitted models enable one to make probability statements about the distribution of copepods in the three ecosystems.
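The abstract does not name the distributions fitted; as a hypothetical stand-in, the sketch below fits Poisson and negative-binomial models to synthetic count data and compares their adequacy by AIC, the kind of fit-and-compare workflow the paper describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical copepod counts per sample from one ecosystem (overdispersed).
counts = rng.negative_binomial(n=5, p=0.3, size=120)

# Candidate model 1: Poisson; the MLE of its rate is the sample mean.
lam = counts.mean()
ll_pois = stats.poisson.logpmf(counts, lam).sum()

# Candidate model 2: negative binomial via method-of-moments plug-in
# (a simple stand-in for full MLE).
mean, var = counts.mean(), counts.var(ddof=1)
p_hat = mean / var
n_hat = mean * p_hat / (1.0 - p_hat)
ll_nb = stats.nbinom.logpmf(counts, n_hat, p_hat).sum()

aic_pois = 2 * 1 - 2 * ll_pois
aic_nb = 2 * 2 - 2 * ll_nb
print(f"AIC Poisson: {aic_pois:.1f}, AIC neg-binomial: {aic_nb:.1f}")
```

For overdispersed counts such as these, the negative binomial attains the lower AIC; repeating the comparison per ecosystem mirrors the paper's three-ecosystem analysis.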
Normal seasonal variations for atmospheric radon concentration: a sinusoidal model.
Hayashi, Koseki; Yasuoka, Yumi; Nagahama, Hiroyuki; Muto, Jun; Ishikawa, Tetsuo; Omori, Yasutaka; Suzuki, Toshiyuki; Homma, Yoshimi; Mukai, Takahiro
2015-01-01
Anomalous radon readings in air have been reported before earthquake activity. However, careful measurements of atmospheric radon concentrations during a normal period are required to identify anomalous variations in a precursor period. In this study, we obtained radon concentration data for 5 years (2003-2007), which can be considered a normal period, and compared it with data from the precursory period of 2008 until March 2011, when the 2011 Tohoku-Oki Earthquake occurred. We then established a model for seasonal variation by fitting a sinusoidal model to the radon concentration data during the normal period, considering that the seasonal variation is affected by atmospheric turbulence. By determining the amplitude in the sinusoidal model, the normal variation of the radon concentration can be estimated. The results of this method can thus be applied to identify anomalous radon variations before an earthquake.
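A minimal sketch of fitting a sinusoidal seasonal model: writing the sinusoid as a sine/cosine pair makes the fit linear least squares, and the amplitude of the normal seasonal variation is recovered from the two coefficients. The data are synthetic stand-ins for the radon series.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(5 * 365) / 365.0                  # 5 "normal" years, in years
radon = 20.0 + 6.0 * np.sin(2 * np.pi * (t - 0.2)) + rng.normal(0, 1.5, t.size)

# Fit c0 + A*sin(2*pi*t) + B*cos(2*pi*t) by linear least squares;
# the amplitude of the seasonal variation is sqrt(A^2 + B^2).
X = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, radon, rcond=None)
c0, A, B = coef
amplitude = np.hypot(A, B)
print(f"baseline = {c0:.1f}, seasonal amplitude = {amplitude:.1f}")
```

Residuals of precursory-period data against this fitted normal-period curve are what would flag anomalous variations.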
Franceschetti, Donald R; Gire, Elizabeth
2013-06-01
Quantum probability theory offers a viable alternative to classical probability, although there are some ambiguities inherent in transferring the quantum formalism to a less determined realm. A number of physicists are now looking at the applicability of quantum ideas to the assessment of physics learning, an area particularly suited to quantum probability ideas.
A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.
ERIC Educational Resources Information Center
Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven
2003-01-01
Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)
Marewski, Julian N; Hoffrage, Ulrich
2013-06-01
A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.
Construct Reliability of the Probability of Adoption of Change (PAC) Model.
ERIC Educational Resources Information Center
Creamer, E. G.; And Others
1991-01-01
Describes Probability of Adoption of Change (PAC) model, theoretical paradigm for explaining likelihood of successful adoption of planned change initiatives in student affairs. Reports on PAC construct reliability from survey of 62 Chief Student Affairs Officers. Discusses two refinements to the model: merger of leadership and top level support…
Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model
ERIC Educational Resources Information Center
Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca
2012-01-01
The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…
Effects with Multiple Causes: Evaluating Arguments Using the Subjective Probability Model.
ERIC Educational Resources Information Center
Allen, Mike; Burrell, Nancy; Egan, Tony
2000-01-01
Finds that the subjective probability model continues to provide some degree of prediction for beliefs (of an individual for circumstances of a single event with multiple causes) prior to the exposure to a message, but that after exposure to a persuasive message, the model did not maintain the same level of accuracy of prediction. Offers several…
Identification of Probability Weighted Multiple ARX Models and Its Application to Behavior Analysis
NASA Astrophysics Data System (ADS)
Taguchi, Shun; Suzuki, Tatsuya; Hayakawa, Soichiro; Inagaki, Shinkichi
This paper proposes a probability-weighted ARX (PrARX) model in which multiple ARX models are composed by probabilistic weighting functions. A 'softmax' function is introduced as the probabilistic weighting function. The parameter estimation problem for the proposed model is then formulated as a single optimization problem. Furthermore, the identified PrARX model can easily be transformed into the corresponding PWARX model with complete partitions between regions. Finally, the proposed model is applied to the modeling of driving behavior, and its usefulness is verified and discussed.
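A minimal sketch of the PrARX prediction step: softmax gating weights computed from the regressor are applied to the outputs of multiple ARX models. All parameter values and the regressor layout are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # numerical stabilization
    e = np.exp(z)
    return e / e.sum()

def prarx_predict(phi, arx_params, gate_params):
    """Probability-weighted ARX output: each ARX model's prediction is
    weighted by a softmax gating function of the regressor phi."""
    preds = np.array([theta @ phi for theta in arx_params])
    weights = softmax(np.array([eta @ phi for eta in gate_params]))
    return weights @ preds, weights

# Two hypothetical ARX modes on a regressor [y(k-1), u(k-1), 1]:
phi = np.array([0.8, 0.2, 1.0])
arx_params = [np.array([0.9, 0.1, 0.0]),    # mode 1 (e.g., "following")
              np.array([0.5, -0.3, 0.2])]   # mode 2 (e.g., "braking")
gate_params = [np.array([2.0, 0.0, 0.0]),
               np.array([-2.0, 0.0, 0.0])]

y_hat, w = prarx_predict(phi, arx_params, gate_params)
print(f"weights = {np.round(w, 3)}, prediction = {y_hat:.3f}")
```

Hardening the soft weights into 0/1 assignments along the gate decision boundary is, roughly, the transformation to the corresponding PWARX model mentioned in the abstract.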
Diagnostic models of the pre-test probability of stable coronary artery disease: A systematic review
He, Ting; Liu, Xing; Xu, Nana; Li, Ying; Wu, Qiaoyu; Liu, Meilin; Yuan, Hong
2017-01-01
A comprehensive search of PubMed and Embase was performed in January 2015 to examine the available literature on validated diagnostic models of the pre-test probability of stable coronary artery disease and to describe the characteristics of the models. Studies that were designed to develop and validate diagnostic models of pre-test probability for stable coronary artery disease were included. Data regarding baseline patient characteristics, procedural characteristics, modeling methods, metrics of model performance, risk of bias, and clinical usefulness were extracted. Ten studies involving the development of 12 models and two studies focusing on external validation were identified. Seven models were validated internally, and seven models were validated externally. Discrimination varied between studies that were validated internally (C statistic 0.66-0.81) and externally (0.49-0.87). Only one study presented reclassification indices. The majority of better performing models included sex, age, symptoms, diabetes, smoking, and hyperlipidemia as variables. Only two diagnostic models evaluated the effects on clinical decision making processes or patient outcomes. Most diagnostic models of the pre-test probability of stable coronary artery disease have had modest success, and very few present data regarding the effects of these models on clinical decision making processes or patient outcomes. PMID:28355366
Approximating Multivariate Normal Orthant Probabilities
1990-06-01
NASA Technical Reports Server (NTRS)
Deiwert, G. S.; Yoshikawa, K. K.
1975-01-01
A semiclassical model proposed by Pearson and Hansen (1974) for computing collision-induced transition probabilities in diatomic molecules is tested by the direct-simulation Monte Carlo method. Specifically, this model is described by point centers of repulsion for collision dynamics, and the resulting classical trajectories are used in conjunction with the Schroedinger equation for a rigid-rotator harmonic oscillator to compute the rotational energy transition probabilities necessary to evaluate the rotation-translation exchange phenomena. It is assumed that a single, average energy spacing exists between the initial state and possible final states for a given collision.
Coupled escape probability for an asymmetric spherical case: Modeling optically thick comets
Gersch, Alan M.; A'Hearn, Michael F.
2014-05-20
We have adapted Coupled Escape Probability, a new exact method of solving radiative transfer problems, for use in asymmetrical spherical situations. Our model is intended specifically for use in modeling optically thick cometary comae, although not limited to such use. This method enables the accurate modeling of comets' spectra even in the potentially optically thick regions nearest the nucleus, such as those seen in Deep Impact observations of 9P/Tempel 1 and EPOXI observations of 103P/Hartley 2.
NASA Technical Reports Server (NTRS)
Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry
2009-01-01
In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
Results from probability-based, simplified, off-shore Louisiana CSEM hydrocarbon reservoir modeling
NASA Astrophysics Data System (ADS)
Stalnaker, J. L.; Tinley, M.; Gueho, B.
2009-12-01
Perhaps the biggest impediment to the commercial application of controlled-source electromagnetic (CSEM) geophysics to marine hydrocarbon exploration is the inefficiency of modeling and data inversion. If an understanding of the typical (in a statistical sense) geometrical and electrical nature of a reservoir can be attained, then it is possible to derive therefrom a simplified yet accurate model of the electromagnetic interactions that produce a measured marine CSEM signal, leading ultimately to efficient modeling and inversion. We have compiled geometric and resistivity measurements from roughly 100 known, producing off-shore Louisiana Gulf of Mexico reservoirs. Recognizing that most reservoirs could be recreated roughly from a sectioned hemi-ellipsoid, we devised a unified, compact reservoir geometry description. Each reservoir was initially fit to the ellipsoid by eye, though we plan in the future to perform a more rigorous least-squares fit. We created, using kernel density estimation, initial probabilistic descriptions of reservoir parameter distributions, with the understanding that additional information would not fundamentally alter our results, but rather increase accuracy. From the probabilistic description, we designed an approximate model consisting of orthogonally oriented current segments distributed across the ellipsoid--enough to define the shape, yet few enough to be resolved during inversion. The moment and length of the currents are mapped to the geometry and resistivity of the ellipsoid. The probability density functions (pdfs) derived from reservoir statistics serve as a workbench. We first use the pdfs in a Monte Carlo simulation designed to assess the detectability of off-shore Louisiana reservoirs using magnitude versus offset (MVO) anomalies. From the pdfs, many reservoir instances are generated (using rejection sampling) and each normalized MVO response is calculated. The response strength is summarized by numerically computing MVO power, and that
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Glosup, J.G.; Axelrod M.C.
1994-11-15
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
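The model-discrimination idea in this entry can be sketched in a few lines: fit a single Gaussian and a two-component Gaussian mixture (here via a basic EM loop rather than the authors' modified AIC/EM machinery) and compare AIC values. All data and settings below are illustrative, not from the report.

```python
# Sketch: single Gaussian vs. two-component mixture, chosen by AIC.
# The EM loop and synthetic data are illustrative assumptions.
import math, random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def loglik_single(data):
    # Closed-form MLE for a single Gaussian, then its log-likelihood.
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    return sum(math.log(normal_pdf(x, mu, sigma)) for x in data)

def loglik_mixture(data, iters=200):
    # Basic EM for a two-component 1-D Gaussian mixture.
    w, mu1, mu2, s1, s2 = 0.5, min(data), max(data), 1.0, 1.0
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = [w * normal_pdf(x, mu1, s1) /
             (w * normal_pdf(x, mu1, s1) + (1 - w) * normal_pdf(x, mu2, s2))
             for x in data]
        # M-step: re-estimate weight, means, standard deviations.
        n1 = sum(r); n2 = len(data) - n1
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = max(1e-6, math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1))
        s2 = max(1e-6, math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2))
    return sum(math.log(w * normal_pdf(x, mu1, s1) + (1 - w) * normal_pdf(x, mu2, s2))
               for x in data)

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

random.seed(1)
# Synthetic data truly drawn from a well-separated two-component mixture.
data = [random.gauss(0, 1) for _ in range(150)] + [random.gauss(6, 1) for _ in range(150)]
aic_single = aic(loglik_single(data), 2)   # parameters: mean, sd
aic_mix = aic(loglik_mixture(data), 5)     # weight, two means, two sds
print(aic_mix < aic_single)  # mixture should win on mixture data
```

The lower-AIC model is preferred; on genuinely bimodal data the mixture's likelihood gain easily pays its three extra parameters.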
Application of damping mechanism model and stacking fault probability in Fe-Mn alloy
Huang, S.K.; Wen, Y.H.; Li, N.; Teng, J.; Ding, S.; Xu, Y.G.
2008-06-15
In this paper, the damping mechanism model of Fe-Mn alloy was analyzed using dislocation theory. Moreover, as an important parameter in Fe-Mn based alloys, the effect of stacking fault probability on the damping capacity of Fe-19.35Mn alloy after deep-cooling or tensile deformation was also studied. The damping capacity was measured using a reversal torsion pendulum. The stacking fault probability of {gamma}-austenite and {epsilon}-martensite was determined by means of X-ray diffraction (XRD) profile analysis. The microstructure was observed using a scanning electron microscope (SEM). The results indicated that as the strain amplitude increased above a critical value, the damping capacity of Fe-19.35Mn alloy increased rapidly, which could be explained using the breakaway model of Shockley partial dislocations. Deep-cooling and suitable tensile deformation could improve the damping capacity owing to the increase in the stacking fault probability of Fe-19.35Mn alloy.
A model selection algorithm for a posteriori probability estimation with neural networks.
Arribas, Juan Ignacio; Cid-Sueiro, Jesús
2005-07-01
This paper proposes a novel algorithm to jointly determine the structure and the parameters of a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP), whose outputs can be understood as probabilities, although the results can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes.
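The probabilistic reading of network outputs that PPMS relies on can be illustrated with an ordinary softmax layer; this is a generic sketch, not the GSP architecture itself.

```python
# Generic illustration (not the GSP): a softmax layer maps logits to a
# vector of class posteriors that is non-negative and sums to one.
import math

def softmax(logits):
    mx = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - mx) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax([2.0, 1.0, 0.1])
print(round(sum(p), 6), p[0] > p[1] > p[2])
```

Because the outputs form a proper probability vector, components can be pruned, split, or merged while preserving a probabilistic interpretation.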
NASA Astrophysics Data System (ADS)
Espinoza, G. E.; Arctur, D. K.; Maidment, D. R.; Teng, W. L.
2015-12-01
Anticipating extreme events, whether potential for flooding or drought, becomes more urgent every year, with increased variability in weather and climate. Hydrologic processes are inherently spatiotemporal. Extreme conditions can be identified at a certain period of time in a specific geographic region. These extreme conditions occur when the values of a hydrologic variable are record low or high, or they approach those records. The calculation of the historic probability distributions is essential to understanding when values exceed the thresholds and become extreme. A dense data model in time and space must be used to properly estimate the historic distributions. The purpose of this research is to model the time-dependent probability distributions of hydrologic variables at a national scale. These historic probability distributions are modeled daily, using 35 years of data from the North American Land Data Assimilation System (NLDAS) Noah model, which is a land-surface model with a 1/8 degree grid and hourly values from 1979 to the present. Five hydrologic variables are selected: soil moisture, precipitation, runoff, evapotranspiration, and temperature. The probability distributions are used to compare with the latest results from NLDAS and identify areas where extreme hydrologic conditions are present. The identification of extreme values in hydrologic variables and their inter-correlation improve the assessment and characterization of natural disasters such as floods or droughts. This information is presented through a dynamic web application that shows the latest results from NLDAS and any anomalies.
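The thresholding idea behind this system can be sketched as follows, with entirely synthetic data standing in for the NLDAS record: build an empirical historic distribution for one variable, one calendar day, and one grid cell, then flag new values that fall outside a percentile band.

```python
# Hedged sketch with synthetic data: classify a new soil-moisture value
# against the historic 5th-95th percentile band for one day and grid cell.
import random

def percentile(values, q):
    """Linear-interpolation percentile, q in [0, 100]."""
    s = sorted(values)
    k = (len(s) - 1) * q / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

random.seed(0)
# 35 "years" of values for one calendar day at one grid cell (made up).
historic = [random.gauss(0.25, 0.05) for _ in range(35)]
p05, p95 = percentile(historic, 5), percentile(historic, 95)

def classify(value):
    if value < p05:
        return "extreme dry"
    if value > p95:
        return "extreme wet"
    return "normal"

print(classify(0.05), classify(0.25), classify(0.50))
```

In the actual system this comparison would run daily for each variable over the full 1/8-degree grid, with the latest NLDAS output in place of the hypothetical value.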
Breather solutions for inhomogeneous FPU models using Birkhoff normal forms
NASA Astrophysics Data System (ADS)
Martínez-Farías, Francisco; Panayotaros, Panayotis
2016-11-01
We present results on spatially localized oscillations in some inhomogeneous nonlinear lattices of Fermi-Pasta-Ulam (FPU) type derived from phenomenological nonlinear elastic network models proposed to study localized protein vibrations. The main feature of the FPU lattices we consider is that the number of interacting neighbors varies from site to site, and we see numerically that this spatial inhomogeneity leads to spatially localized normal modes in the linearized problem. This property is seen in 1-D models, and in a 3-D model with a geometry obtained from protein data. The spectral analysis of these examples suggests some non-resonance assumptions that we use to show the existence of invariant subspaces of spatially localized solutions in quartic Birkhoff normal forms of the FPU systems. The invariant subspaces have an additional symmetry and this fact allows us to compute periodic orbits of the quartic normal form in a relatively simple way.
Modeling and simulation of normal and hemiparetic gait
NASA Astrophysics Data System (ADS)
Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni
2015-09-01
Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper is focused on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking using the Lagrange method. Constraint forces are included through a Rayleigh dissipation function, which accounts for the effect of gait on the tissues. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulation; anthropometric data for children could also be used, provided they are taken from existing anthropometric tables. Validation of these models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
NASA Astrophysics Data System (ADS)
Tian, Chuan; Sun, Di-Hua
2010-12-01
Considering the effects that the probability of traffic interruption and the friction between two lanes have on car-following behaviour, this paper establishes a new two-lane microscopic car-following model. Based on this microscopic model, a new macroscopic model is deduced via the correspondence between microscopic- and macroscopic-scale parameters for two-lane traffic flow. Terms related to lane changing are added to the continuity equations and velocity dynamic equations to investigate the lane change rate. Numerical results verify that the proposed model can efficiently reflect the effect of the probability of traffic interruption on shock waves, rarefaction waves and lane-changing behaviour on two-lane freeways. The model has also been applied to reproducing some complex traffic phenomena caused by traffic accident interruption.
Modelling detection probabilities to evaluate management and control tools for an invasive species
Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.
2010-01-01
For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice that of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By
Zhu, Lin; Dai, Zhenxue; Gong, Huili; Gable, Carl; Teatini, Pietro
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J
2015-12-01
The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, seasons, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by seasons, treatment type, precipitation and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs).
Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.
2009-01-01
Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
Blind Students' Learning of Probability through the Use of a Tactile Model
ERIC Educational Resources Information Center
Vita, Aida Carvalho; Kataoka, Verônica Yumi
2014-01-01
The objective of this paper is to discuss how blind students learn basic concepts of probability using the tactile model proposed by Vita (2012). The activities were part of the teaching sequence "Jefferson's Random Walk", in which students built a tree diagram (using plastic trays, foam cards, and toys) and pictograms in 3D…
ERIC Educational Resources Information Center
Calvert, Carol Elaine
2014-01-01
This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…
Carr, Thomas W
2017-02-01
In an SIRS compartment model for a disease we consider the effect of different probability distributions for remaining immune. We show that, to first approximation, the first three moments of the corresponding probability densities are sufficient to describe well the oscillatory solutions corresponding to recurrent epidemics. Specifically, increasing the fraction who lose immunity, increasing the mean immunity time, and decreasing the heterogeneity of the population all favor the onset of epidemics and increase their severity. We consider six different distributions, some symmetric about their mean and some asymmetric, and show that, by tuning their parameters so that they have equivalent moments, they all exhibit equivalent dynamical behavior.
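For reference, the baseline SIRS compartment model with exponentially distributed immunity (the simplest of the waning distributions the paper compares) can be integrated with a forward-Euler loop; the parameter values below are arbitrary.

```python
# Baseline SIRS sketch with exponential immunity waning (rate nu = 1/mean
# immunity time). Parameters are illustrative, not the paper's.
def sirs(beta=0.5, gamma=0.2, nu=0.01, days=2000, dt=0.1):
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        ds = -beta * s * i + nu * r     # new infections; returning susceptibles
        di = beta * s * i - gamma * i   # new infections minus recoveries
        dr = gamma * i - nu * r         # recoveries minus immunity loss
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

s, i, r = sirs()
# The compartments always sum to one, and infection persists (endemic regime
# since the basic reproduction number beta/gamma = 2.5 exceeds one).
print(abs(s + i + r - 1) < 1e-6, 0 < i < 1)
```

The paper's other waning distributions (e.g. fixed delay or gamma-distributed immunity) replace the `nu * r` terms with delayed or staged flows; matching their first three moments reproduces essentially the same oscillatory dynamics.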
Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors
Alamgir; Ali, Amjad; Khan, Dost Muhammad; Khan, Sajjad Ahmad; Khan, Zardad
2016-01-01
Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases. However, Cook's test statistics are oversized. Researchers have found that using conventional tests is risky, although the best-performing option among them is a heteroscedasticity-consistent covariance matrix estimator (HCCME). The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices has been derived, and the results are reported for various sample sizes, for which size distortion is reduced. The properties of estimates of ESTAR models have been investigated when errors are assumed non-normal. We compare nonlinear least-squares fits with quantile regression fits in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes. PMID:27898702
Dong, Jing; Mahmassani, Hani S.
2011-01-01
This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model, by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdown at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
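The two model ingredients, a breakdown probability that grows with flow rate and a hazard-model duration, can be sketched like this; the logistic coefficients and Weibull parameters are invented for illustration, not the paper's calibrated values.

```python
# Illustrative sketch, not the authors' calibrated model: per-interval
# breakdown probability as a logistic function of flow, and breakdown
# duration drawn from a Weibull distribution (a common hazard-model choice).
import math, random

def breakdown_probability(flow_vph, a=-12.0, b=0.006):
    """Logistic probability of breakdown in one interval at a given flow (veh/h)."""
    return 1 / (1 + math.exp(-(a + b * flow_vph)))

def breakdown_duration(shape=1.5, scale=20.0):
    """Weibull-distributed breakdown duration in minutes."""
    return random.weibullvariate(scale, shape)  # stdlib: (alpha=scale, beta=shape)

random.seed(42)
p_low, p_high = breakdown_probability(1200), breakdown_probability(2200)
dur = breakdown_duration()
print(p_low < p_high, dur > 0)  # breakdown more likely at higher flow
```

Embedded in a simulation loop, each time step would draw a Bernoulli trial with this probability and, on breakdown, hold capacity reduced for the sampled duration.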
Schmidt, W.; Niemeyer, J. C.; Ciaraldi-Schoolmann, F.; Roepke, F. K.; Hillebrandt, W.
2010-02-20
The delayed detonation model describes the observational properties of the majority of Type Ia supernovae very well. Using numerical data from a three-dimensional deflagration model for Type Ia supernovae, the intermittency of the turbulent velocity field and its implications for the probability of a deflagration-to-detonation (DDT) transition are investigated. From structure functions of the turbulent velocity fluctuations, we determine intermittency parameters based on the log-normal and the log-Poisson models. The bulk of turbulence in the ash regions appears to be less intermittent than predicted by the standard log-normal model and the She-Leveque model. On the other hand, the analysis of the turbulent velocity fluctuations in the vicinity of the flame front by Roepke suggests a much higher probability of large velocity fluctuations on the grid scale in comparison to the log-normal intermittency model. Following Pan et al., we computed probability density functions for a DDT for the different distributions. The determination of the total number of regions at the flame surface, in which DDTs can be triggered, enables us to estimate the total number of events. Assuming that a DDT can occur in the stirred flame regime, as proposed by Woosley et al., the log-normal model would imply a delayed detonation between 0.7 and 0.8 s after the beginning of the deflagration phase for the multi-spot ignition scenario used in the simulation. However, the probability drops to virtually zero if a DDT is further constrained by the requirement that the turbulent velocity fluctuations reach about 500 km s⁻¹. Under this condition, delayed detonations are only possible if the distribution of the velocity fluctuations is not log-normal. It follows from our calculations that the distribution obtained by Roepke allows for multiple DDTs around 0.8 s after ignition at a transition density close to 1 × 10⁷ g cm⁻³.
Probability distributions for parameters of the Munson-Dawson salt creep model
Fossum, A.F.; Pfeifle, T.W.; Mellegard, K.D.
1993-12-31
Stress-related probability distribution functions are determined for the random variable material model parameters of the Munson-Dawson multi-mechanism deformation creep model for salt. These functions are obtained indirectly from experimental creep data for clean salt. The parameter distribution functions will form the basis for numerical calculations to generate an appropriate distribution function for room closure. Also included is a table that gives the values of the parameters for individual specimens of clean salt under different stresses.
An energy-dependent numerical model for the condensation probability, γj
NASA Astrophysics Data System (ADS)
Kerby, Leslie M.
2017-04-01
The "condensation" probability, γj, is an important variable in the preequilibrium stage of nuclear spallation reactions. It represents the probability that pj excited nucleons (excitons) will "condense" to form complex particle type j in the excited residual nucleus. It has a significant impact on the emission width, or probability of emitting fragment type j from the residual nucleus. Previous formulations for γj were energy-independent and valid for fragments up to 4He only. This paper explores the formulation of a new model for γj, one which is energy-dependent and valid for up to 28Mg, and which provides improved fits compared to experimental fragment spectra.
Assessment of uncertainty in chemical models by Bayesian probabilities: Why, when, how?
NASA Astrophysics Data System (ADS)
Sahlin, Ullrika
2015-07-01
A prediction of a chemical property or activity is subject to uncertainty. Which type of uncertainties to consider, whether to account for them in a differentiated manner and with which methods, depends on the practical context. In chemical modelling, general guidance of the assessment of uncertainty is hindered by the high variety in underlying modelling algorithms, high-dimensionality problems, the acknowledgement of both qualitative and quantitative dimensions of uncertainty, and the fact that statistics offers alternative principles for uncertainty quantification. Here, a view of the assessment of uncertainty in predictions is presented with the aim to overcome these issues. The assessment sets out to quantify uncertainty representing error in predictions and is based on probability modelling of errors where uncertainty is measured by Bayesian probabilities. Even though well motivated, the choice to use Bayesian probabilities is a challenge to statistics and chemical modelling. Fully Bayesian modelling, Bayesian meta-modelling and bootstrapping are discussed as possible approaches. Deciding how to assess uncertainty is an active choice, and should not be constrained by traditions or lack of validated and reliable ways of doing it.
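Of the approaches discussed, bootstrapping is the easiest to sketch: resample the data, refit, and read off a percentile interval for the prediction. The "model" here is a trivial mean predictor on synthetic data, purely for illustration.

```python
# Minimal bootstrap sketch of prediction uncertainty: refit a stand-in model
# on resamples and report a 95% percentile interval. Data are synthetic.
import random

random.seed(7)
observations = [random.gauss(5.0, 1.0) for _ in range(100)]

def predict(sample):
    return sum(sample) / len(sample)  # stand-in for a fitted chemical model

boot = []
for _ in range(2000):
    resample = [random.choice(observations) for _ in observations]
    boot.append(predict(resample))
boot.sort()
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(lo < predict(observations) < hi)  # point estimate sits inside the interval
```

Fully Bayesian or meta-modelling approaches would replace the resampling loop with posterior sampling, but the output, a distribution over predictions rather than a point value, is the same in spirit.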
A Statistical Model for the Prediction of Wind-Speed Probabilities in the Atmospheric Surface Layer
NASA Astrophysics Data System (ADS)
Efthimiou, G. C.; Hertwig, D.; Andronopoulos, S.; Bartzis, J. G.; Coceal, O.
2016-11-01
Wind fields in the atmospheric surface layer (ASL) are highly three-dimensional and characterized by strong spatial and temporal variability. For various applications such as wind-comfort assessments and structural design, an understanding of potentially hazardous wind extremes is important. Statistical models are designed to facilitate conclusions about the occurrence probability of wind speeds based on the knowledge of low-order flow statistics. Being particularly interested in the upper tail regions we show that the statistical behaviour of near-surface wind speeds is adequately represented by the Beta distribution. By using the properties of the Beta probability density function in combination with a model for estimating extreme values based on readily available turbulence statistics, it is demonstrated that this novel modelling approach reliably predicts the upper margins of encountered wind speeds. The model's basic parameter is derived from three substantially different calibrating datasets of flow in the ASL originating from boundary-layer wind-tunnel measurements and direct numerical simulation. Evaluating the model based on independent field observations of near-surface wind speeds shows a high level of agreement between the statistically modelled horizontal wind speeds and measurements. The results show that, based on knowledge of only a few simple flow statistics (mean wind speed, wind-speed fluctuations and integral time scales), the occurrence probability of velocity magnitudes at arbitrary flow locations in the ASL can be estimated with a high degree of confidence.
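The moment-matching step of such a Beta-based model can be sketched as follows; the normalization bound, synthetic wind speeds, and threshold are assumptions for illustration, and the exceedance probability is estimated by sampling rather than by evaluating the incomplete beta function.

```python
# Sketch (hypothetical numbers): normalize wind speeds by an assumed upper
# bound, fit a Beta distribution by matching mean and variance, and estimate
# an exceedance probability by sampling the fitted distribution.
import random

random.seed(3)
u_max = 20.0  # assumed normalization ceiling (m/s)
speeds = [min(max(random.gauss(8.0, 2.0), 0.0), u_max) for _ in range(5000)]
x = [u / u_max for u in speeds]  # map onto [0, 1]

m = sum(x) / len(x)
v = sum((xi - m) ** 2 for xi in x) / len(x)
common = m * (1 - m) / v - 1        # method-of-moments for Beta(a, b)
a, b = m * common, (1 - m) * common

# Estimate P(U > 14 m/s) from the fitted Beta by Monte Carlo.
draws = [random.betavariate(a, b) for _ in range(20000)]
p_exceed = sum(d > 14.0 / u_max for d in draws) / len(draws)
print(p_exceed < 0.1)  # a rare but non-negligible upper-tail event
```

The paper's contribution is precisely that a few such low-order statistics (mean, fluctuation intensity, integral time scales) suffice to pin down the tail behaviour; the sketch above shows only the distribution-fitting step.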
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and therefore, recommend use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
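As a rough illustration of the logit-normal mixed model idea (not the authors' estimation algorithms), one can simulate occurrence data with a normally distributed random intercept on the logit scale and recover the marginal occurrence probability by Monte Carlo integration over the random effect; the parameter names below are hypothetical.

```python
import math
import random

def simulate_logit_normal(beta0, sigma, n_stations, n_days, seed=0):
    """Simulate daily rain occurrence with a station-level random intercept:
    P(rain | station i) = logistic(beta0 + b_i), where b_i ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_stations):
        b = rng.gauss(0.0, sigma)
        p = 1.0 / (1.0 + math.exp(-(beta0 + b)))
        data.append([rng.random() < p for _ in range(n_days)])
    return data

def marginal_rain_probability(beta0, sigma, n_draws=200_000, seed=1):
    """Monte Carlo integral of the logistic link over the normal random effect."""
    rng = random.Random(seed)
    total = sum(1.0 / (1.0 + math.exp(-(beta0 + rng.gauss(0.0, sigma))))
                for _ in range(n_draws))
    return total / n_draws
```

With beta0 = 0 the marginal probability is 0.5 by symmetry, which makes a convenient sanity check on the integration.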
Multistate modeling of habitat dynamics: Factors affecting Florida scrub transition probabilities
Breininger, D.R.; Nichols, J.D.; Duncan, B.W.; Stolen, Eric D.; Carter, G.M.; Hunt, D.K.; Drese, J.H.
2010-01-01
Many ecosystems are influenced by disturbances that create specific successional states and habitat structures that species need to persist. Estimating transition probabilities between habitat states and modeling the factors that influence such transitions have many applications for investigating and managing disturbance-prone ecosystems. We identify the correspondence between multistate capture-recapture models and Markov models of habitat dynamics. We exploit this correspondence by fitting and comparing competing models of different ecological covariates affecting habitat transition probabilities in Florida scrub and flatwoods, a habitat important to many unique plants and animals. We subdivided a large scrub and flatwoods ecosystem along central Florida's Atlantic coast into 10-ha grid cells, which approximated average territory size of the threatened Florida Scrub-Jay (Aphelocoma coerulescens), a management indicator species. We used 1.0-m resolution aerial imagery for 1994, 1999, and 2004 to classify grid cells into four habitat quality states that were directly related to Florida Scrub-Jay source-sink dynamics and management decision making. Results showed that static site features related to fire propagation (vegetation type, edges) and temporally varying disturbances (fires, mechanical cutting) best explained transition probabilities. Results indicated that much of the scrub and flatwoods ecosystem was resistant to moving from a degraded state to a desired state without mechanical cutting, an expensive restoration tool. We used habitat models parameterized with the estimated transition probabilities to investigate the consequences of alternative management scenarios on future habitat dynamics. We recommend this multistate modeling approach as being broadly applicable for studying ecosystem, land cover, or habitat dynamics. The approach provides maximum-likelihood estimates of transition parameters, including precision measures, and can be used to assess
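The Markov habitat-dynamics view can be sketched by projecting a state distribution forward through a transition matrix. The four states and the matrix values below are hypothetical stand-ins for the estimated Florida scrub transition probabilities, chosen only to illustrate a near-absorbing degraded state.

```python
def project_states(transition, state_dist, n_steps):
    """Project a habitat-state distribution forward under a first-order
    Markov model; row i of `transition` is P(next state | current state i)."""
    dist = list(state_dist)
    for _ in range(n_steps):
        dist = [sum(dist[i] * transition[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return dist

# Hypothetical states: 0 = strong source, 1 = source, 2 = sink, 3 = degraded.
# Without mechanical cutting the degraded state retains cells with high probability.
example_transition = [
    [0.80, 0.15, 0.04, 0.01],
    [0.10, 0.70, 0.15, 0.05],
    [0.02, 0.10, 0.70, 0.18],
    [0.00, 0.02, 0.08, 0.90],
]
```

Projecting a uniform initial distribution for ten time steps shows the degraded share growing, mirroring the abstract's finding that the system resists leaving the degraded state without restoration.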
A Mathematical Model for Calculating Detection Probability of a Diffusion Target.
1984-09-01
This model assumes a stationary searcher equipped with a "cookie-cutter" sensor of radius R: the detection probability of a target inside the disk of radius R is 1, and outside it is 0. A Monte Carlo simulation program is used to generate sample paths of the diffusing target and thereby construct the detection-probability model.
Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis.
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong; Ginzburg, Lev; Berleant, Daniel J.; Ferson, Scott; Hajagos, Janos; Nelsen, Roger B.
2004-10-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
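One standard technique for simulating correlated variates under a given dependence model, of the kind reviewed in reports like this, is a Gaussian copula: draw correlated normals, then map them to Uniform(0,1) with the normal CDF, after which any marginal inverse CDFs can be applied. A minimal sketch (the Gaussian copula is illustrative here, not the report's only method):

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_pairs(rho, n, seed=0):
    """Draw n pairs of Uniform(0,1) variates coupled by a Gaussian copula
    with normal-scale correlation rho."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((norm_cdf(z1), norm_cdf(z2)))
    return pairs
```

The resulting uniforms have the right margins but a strong positive association when rho is large, which is exactly the separation of margins from dependence that copula-based risk analyses rely on.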
Modelling secondary microseismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, L.; Stutzmann, E.; Capdeville, Y.; Ardhuin, F.; Schimmel, M.; Mangeney, A.; Morelli, A.
2013-06-01
Secondary microseisms recorded by seismic stations are generated in the ocean by the interaction of ocean gravity waves. We present here the theory for modelling secondary microseismic noise by normal mode summation. We show that the noise sources can be modelled by vertical forces and how to derive them from a realistic ocean wave model. We then show how to compute the bathymetry excitation effect in a realistic earth model by using normal modes, and compare the result with the Longuet-Higgins approach. The strongest excitation areas in the oceans depend on the bathymetry and period and are different for each seismic mode. Seismic noise is then modelled by normal mode summation considering varying bathymetry. We derive an attenuation model that fits the vertical-component spectra well regardless of station location. We show that the fundamental mode of Rayleigh waves is the dominant signal in seismic noise. A discrepancy between real and synthetic spectra on the horizontal components enables us to estimate the amount of Love waves, for which a different source mechanism is needed. Finally, we investigate noise generated in all the oceans around Africa and show that most of the noise recorded in Algeria (TAM station) is generated in the Northern Atlantic, and that the contribution of each ocean and sea varies seasonally.
NASA Astrophysics Data System (ADS)
Wei, Robert P.; Harlow, D. Gary
2005-01-01
Life prediction and reliability assessment are essential components for the life-cycle engineering and management (LCEM) of modern engineered systems. These systems can range from microelectronic and bio-medical devices to large machinery and structures. To be effective, the underlying approach to LCEM must be transformed to embody mechanistically based probability modelling, vis-à-vis the more traditional experientially based statistical modelling, for predicting damage evolution and distribution. In this paper, the probability and statistical approaches are compared and differentiated. The process of model development on the basis of mechanistic understanding derived from critical experiments is illustrated through selected examples. The efficacy of this approach is illustrated through an example of the evolution and distribution of corrosion and corrosion fatigue damage in aluminium alloys in relation to aircraft that had been in long-term service.
Impact of stray charge on interconnect wire via probability model of double-dot system
NASA Astrophysics Data System (ADS)
Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang
2016-02-01
The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. Compared with previous stray-charge analyses utilizing ICHA or full-basis calculations, the probability model greatly eases the computational burden. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire where a stray charge will cause the target cell to switch unsuccessfully. The failure is exhibited by two new states dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray-charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).
Universality of the crossing probability for the Potts model for q=1, 2, 3, 4.
Vasilyev, Oleg A
2003-08-01
The universality of the crossing probability π_hs of a system to percolate only in the horizontal direction was investigated numerically by a cluster Monte Carlo algorithm for the q-state Potts model for q = 2, 3, 4 and for percolation (q = 1). We check the percolation through Fortuin-Kasteleyn clusters near the critical point on the square lattice by using the representation of the Potts model as the correlated site-bond percolation model. It was shown that the probability of a system to percolate only in the horizontal direction π_hs has the universal form π_hs = A(q)Q(z) for q = 1, 2, 3, 4 as a function of the scaling variable z = b(q)L^(1/ν(q))[p − p_c(q, L)].
A Probability Model of Decompression Sickness at 4.3 Psia after Exercise Prebreathe
NASA Technical Reports Server (NTRS)
Conkin, Johnny; Gernhardt, Michael L.; Powell, Michael R.; Pollock, Neal
2004-01-01
Exercise PB can reduce the risk of decompression sickness on ascent to 4.3 psia when performed at the proper intensity and duration. Data are from seven tests. PB times ranged from 90 to 150 min. High-intensity, short-duration dual-cycle ergometry was done during the PB, either alone or combined with intermittent low-intensity exercise or periods of rest for the remaining PB. Nonambulating men and women performed light exercise from a semi-recumbent position at 4.3 psia for four hrs. The Research Model with age tested the hypothesis that the probability of DCS increases with advancing age. The NASA Model with gender hypothesized that the probability of DCS increases if gender is female. Accounting for exercise and rest during PB with a variable half-time compartment for computed tissue N2 pressure advances our probability modeling of hypobaric DCS. Both models show that a small increase in exercise intensity during PB reduces the risk of DCS, and a larger increase in exercise intensity dramatically reduces risk. These models support the hypothesis that aerobic fitness is an important consideration for the risk of hypobaric DCS when exercise is performed during the PB.
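The variable half-time compartment mentioned above builds on the classic single-compartment exponential gas-uptake formula; a minimal sketch follows, with illustrative pressures in psia and a hypothetical 360-min half-time (exercise would be represented by shortening the half-time over an interval, which this sketch does not model).

```python
def tissue_n2_pressure(p0, p_ambient_n2, half_time_min, t_min):
    """Haldane-style exponential gas exchange for a single compartment:
    P(t) = P0 + (Pa - P0) * (1 - 2**(-t / half_time)).
    Here p0 is the initial tissue N2 pressure, p_ambient_n2 the ambient
    N2 pressure, and half_time_min the compartment half-time in minutes."""
    return p0 + (p_ambient_n2 - p0) * (1.0 - 2.0 ** (-t_min / half_time_min))
```

After exactly one half-time the compartment has moved halfway from its starting pressure to the ambient pressure, and for very long times it converges to the ambient value.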
Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng
2013-01-01
New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of, and current shortage of, multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and a bivariate definition of dust storms, the joint probability distribution of severe dust storms was established using observed data on maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis.
Growth mixture modeling with non-normal distributions.
Muthén, Bengt; Asparouhov, Tihomir
2015-03-15
A limiting feature of previous work on growth mixture modeling is the assumption of normally distributed variables within each latent class. With strongly non-normal outcomes, this means that several latent classes are required to capture the observed variable distributions. Being able to relax the assumption of within-class normality has the advantage that a non-normal observed distribution does not necessitate using more than one class to fit the distribution. It is valuable to add parameters representing the skewness and the thickness of the tails. A new growth mixture model of this kind is proposed drawing on recent work in a series of papers using the skew-t distribution. The new method is illustrated using the longitudinal development of body mass index in two data sets. The first data set is from the National Longitudinal Survey of Youth covering ages 12-23 years. Here, the development is related to an antecedent measuring socioeconomic background. The second data set is from the Framingham Heart Study covering ages 25-65 years. Here, the development is related to the concurrent event of treatment for hypertension using a joint growth mixture-survival model.
ERIC Educational Resources Information Center
Gugel, John F.
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
Logit-normal mixed model for Indian Monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-03-01
Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, and daily minimum and maximum temperatures, with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
A predictive model to estimate the pretest probability of metastasis in patients with osteosarcoma.
Wang, Sisheng; Zheng, Shaoluan; Hu, Kongzu; Sun, Heyan; Zhang, Jinling; Rong, Genxiang; Gao, Jie; Ding, Nan; Gui, Binjie
2017-01-01
Osteosarcomas (OSs) present a major challenge for improving overall survival, especially in metastatic patients. Increasing evidence indicates that both tumor-associated and host-associated elements, particularly the systemic inflammatory response, have a remarkable effect on the prognosis of cancer patients. By analyzing a series of prognostic factors, including age, gender, primary tumor size, tumor location, tumor grade, histological classification, monocyte ratio, and NLR ratio, a clinical predictive model involving circulating leukocytes was established using stepwise logistic regression to compute the estimated probabilities of metastases for OS patients. The clinical predictive model is described by the following equations: probability of developing metastases = e^x/(1 + e^x), x = -2.150 + (1.680 × monocyte ratio) + (1.533 × NLR ratio), where e is the base of the natural logarithm and the assignment to each of the 2 variables is 1 if the ratio >1 (otherwise 0). The calculated AUC of the receiver-operating characteristic curve of 0.793 indicated good accuracy of this model (95% CI, 0.740-0.845). The predicted probabilities generated with the cross-validation procedure had a similar AUC (0.743; 95% CI, 0.684-0.803). The present model, which accounts for the influence of circulating leukocytes, could be used to estimate the pretest probability of developing metastases in patients with OS and thereby improve outcomes.
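The reported logistic equation can be evaluated directly; the sketch below encodes the published coefficients, with the indicator coding (1 if a ratio exceeds 1, else 0) taken from the abstract.

```python
import math

def metastasis_pretest_probability(monocyte_ratio, nlr_ratio):
    """Pretest probability of metastasis from the reported logistic model:
    x = -2.150 + 1.680 * I(monocyte ratio > 1) + 1.533 * I(NLR ratio > 1),
    probability = e^x / (1 + e^x)."""
    x = (-2.150
         + 1.680 * (1 if monocyte_ratio > 1 else 0)
         + 1.533 * (1 if nlr_ratio > 1 else 0))
    return math.exp(x) / (1.0 + math.exp(x))
```

With both ratios at or below 1 the model gives roughly a 10% pretest probability; with both elevated it rises to roughly 74%.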
Salmon, Octavio R; Crokidakis, Nuno; Nobre, Fernando D
2009-02-04
A random-field Ising model that is capable of exhibiting a rich variety of multicritical phenomena, as well as a smearing of such behavior, is investigated. The model consists of an infinite-range-interaction Ising ferromagnet in the presence of a triple Gaussian random magnetic field, which is defined as a superposition of three Gaussian distributions with the same width σ, centered at H = 0 and H = ±H0, with probabilities p and (1-p)/2, respectively. Such a distribution is very general and recovers, as limiting cases, the trimodal, bimodal and Gaussian probability distributions. In particular, the special case of the random-field Ising model in the presence of a trimodal probability distribution (the limit σ → 0) is able to present a rather nontrivial multicritical behavior. It is argued that the triple Gaussian probability distribution is appropriate for a physical description of some diluted antiferromagnets in the presence of a uniform external field, for which the corresponding physical realization consists of an Ising ferromagnet under random fields whose distribution appears to be well represented in terms of a superposition of two parts, namely a trimodal and a continuous contribution. The model is investigated by means of the replica method, and phase diagrams are obtained within the replica-symmetric solution, which is known to be stable for the present system. A rich variety of phase diagrams is presented, with one or two distinct ferromagnetic phases, continuous and first-order transition lines, tricritical, fourth-order, critical end points and many other interesting multicritical phenomena. Additionally, the present model carries the possibility of destroying such multicritical phenomena due to an increase in the randomness, i.e. increasing σ, which represents a very common feature in real systems.
Modeling the presence probability of invasive plant species with nonlocal dispersal.
Strickland, Christopher; Dangelmayr, Gerhard; Shipman, Patrick D
2014-08-01
Mathematical models for the spread of invading plant organisms typically utilize population growth and dispersal dynamics to predict the time-evolution of a population distribution. In this paper, we revisit a particular class of deterministic contact models obtained from a stochastic birth process for invasive organisms. These models were introduced by Mollison (J R Stat Soc 39(3):283, 1977). We derive the deterministic integro-differential equation of a more general contact model and show that the quantity of interest may be interpreted not as population size, but rather as the probability of species occurrence. We proceed to show how landscape heterogeneity can be included in the model by utilizing the concept of statistical habitat suitability models which condense diverse ecological data into a single statistic. As ecologists often deal with species presence data rather than population size, we argue that a model for probability of occurrence allows for a realistic determination of initial conditions from data. Finally, we present numerical results of our deterministic model and compare them to simulations of the underlying stochastic process.
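A discretized version of such a deterministic contact model can be sketched with an explicit Euler step on a 1-D grid, where a dispersal kernel spreads colonization pressure and a habitat-suitability field modulates growth. The kernel shape, parameters, and exact update form below are illustrative assumptions, not the equation derived in the paper.

```python
import math

def gaussian_kernel(width, scale, dx):
    """Discrete, normalized Gaussian dispersal kernel on [-width, width] cells."""
    ker = [math.exp(-0.5 * (i * dx / scale) ** 2) for i in range(-width, width + 1)]
    s = sum(ker)
    return [k / s for k in ker]

def step_presence(p, kernel, growth, suitability, dt):
    """One Euler step of a contact-birth model for occurrence probability:
    dp/dt = growth * suitability * (1 - p) * (kernel convolved with p).
    Probabilities are capped at 1."""
    half = len(kernel) // 2
    n = len(p)
    new = []
    for x in range(n):
        pressure = sum(kernel[half + j] * p[x + j]
                       for j in range(-half, half + 1) if 0 <= x + j < n)
        new.append(min(1.0, p[x] + dt * growth * suitability[x] * (1 - p[x]) * pressure))
    return new
```

Starting from a single occupied cell, presence probability spreads outward while remaining a valid probability everywhere, which is the interpretation the paper argues for over raw population size.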
Neale, Michael C.; Clark, Shaunna L.; Dolan, Conor V.; Hunter, Michael D.
2015-01-01
A linear latent growth curve mixture model with regime switching is extended in 2 ways. Previously, the matrix of first-order Markov switching probabilities was specified to be time-invariant, regardless of the pair of occasions being considered. The first extension, time-varying transitions, specifies different Markov transition matrices between each pair of occasions. The second extension is second-order time-invariant Markov transition probabilities, such that the probability of switching depends on the states at the 2 previous occasions. The models are implemented using the R package OpenMx, which facilitates data handling, parallel computation, and further model development. It also enables the extraction and display of relative likelihoods for every individual in the sample. The models are illustrated with previously published data on alcohol use observed on 4 occasions as part of the National Longitudinal Survey of Youth, and demonstrate improved fit to the data. PMID:26924921
A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.
Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen
2014-01-01
Risk classification and survival probability prediction are two major goals in survival data analysis, since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate the finite sample performance of the proposed method under various settings. Applications to glioma tumor data and breast cancer gene expression survival data illustrate the new methodology in real data analysis.
NASA Astrophysics Data System (ADS)
Zhong, H.; van Overloop, P.-J.; van Gelder, P. H. A. J. M.
2013-07-01
The Lower Rhine Delta, a transitional area between the River Rhine and Meuse and the North Sea, is at risk of flooding induced by infrequent events of a storm surge or upstream flooding, or by more infrequent events of a combination of both. A joint probability analysis of the astronomical tide, the wind induced storm surge, the Rhine flow and the Meuse flow at the boundaries is established in order to produce the joint probability distribution of potential flood events. Three individual joint probability distributions are established corresponding to three potential flooding causes: storm surges and normal Rhine discharges, normal sea levels and high Rhine discharges, and storm surges and high Rhine discharges. For each category, its corresponding joint probability distribution is applied, in order to stochastically simulate a large number of scenarios. These scenarios can be used as inputs to a deterministic 1-D hydrodynamic model in order to estimate the high water level frequency curves at the transitional locations. The results present the exceedance probability of the present design water level for the economically important cities of Rotterdam and Dordrecht. The calculated exceedance probability is evaluated and compared to the governmental norm. Moreover, the impact of climate change on the high water level frequency curves is quantified for the year 2050 in order to assist in decisions regarding the adaptation of the operational water management system and the flood defense system.
Probability of ventricular fibrillation: allometric model based on the ST deviation
2011-01-01
Background: Allometry, in general biology, measures the relative growth of a part in relation to the whole living organism. Using reported clinical data, we apply this concept to evaluate the probability of ventricular fibrillation based on electrocardiographic ST-segment deviation values. Methods: Data collected in previous reports were used to fit an allometric model in order to estimate ventricular fibrillation probability. Patients presenting either with death, myocardial infarction or unstable angina were included to calculate this probability as VFp = δ + β(ST) for three different ST deviations. The coefficients δ and β were obtained as the best fit to the clinical data extended over observational periods of 1, 6, 12 and 48 months from occurrence of the first reported chest pain accompanied by ST deviation. Results: By application of the above equation in log-log representation, the fitting procedure produced the following overall coefficients: average β = 0.46 (maximum 0.62, minimum 0.42); average δ = 1.28 (maximum 1.79, minimum 0.92). For a 2 mm ST deviation, the full range of predicted ventricular fibrillation probability extended from about 13% at 1 month up to 86% at 4 years after the original cardiac event. Conclusions: These results, at least preliminarily, appear acceptable but still call for full clinical testing. The model seems promising, especially if other parameters were taken into account, such as blood cardiac enzyme concentrations, ischemic or infarcted epicardial areas, or ejection fraction. Considering these results and the few references found in the literature, it is concluded that the allometric model shows good practical predictive value for aiding medical decisions. PMID:21226961
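Reading the log-log fit as the power law VFp = δ·ST^β (an interpretation of the abstract's "log-log representation", not its verbatim statement), the coefficients can be recovered by ordinary least squares on the logarithms:

```python
import math

def fit_power_law(st, vfp):
    """Least-squares fit of log(VFp) = log(delta) + beta * log(ST),
    i.e. the allometric form VFp = delta * ST**beta."""
    xs = [math.log(s) for s in st]
    ys = [math.log(v) for v in vfp]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    delta = math.exp(my - beta * mx)
    return delta, beta
```

Fitting data generated from the abstract's average coefficients (δ = 1.28, β = 0.46) recovers them exactly, which is a quick check that the estimator is implemented correctly.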
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results.
NASA Astrophysics Data System (ADS)
Karakostas, Vassilis; Papadimitriou, Eleftheria; Gospodinov, Dragomir
2014-04-01
The 2013 January 8 Mw 5.8 North Aegean earthquake sequence took place on one of the ENE-WSW trending parallel dextral strike slip fault branches in this area, in the continuation of the 1968 large (M = 7.5) rupture. The source mechanism of the main event indicates predominantly strike slip faulting in agreement with what is expected from regional seismotectonics. It was the largest event to have occurred in the area since the establishment of the Hellenic Unified Seismological Network (HUSN), with an adequate number of stations at close distances and full azimuthal coverage, thus providing the chance of an exhaustive analysis of its aftershock sequence. The main shock was followed by a handful of aftershocks with M ≥ 4.0 and tens with M ≥ 3.0. Relocation was performed by using the recordings from HUSN and a proper crustal model for the area, along with time corrections in each station relative to the model used. Investigation of the spatial and temporal behaviour of seismicity revealed possible triggering of adjacent fault segments. Theoretical static stress changes from the main shock give a preliminary explanation for the aftershock distribution aside from the main rupture. The off-fault seismicity is perfectly explained if μ > 0.5 and B = 0.0, evidencing high fault friction. In an attempt to forecast occurrence probabilities of the strong events (Mw ≥ 5.0), estimations were performed following the Restricted Epidemic Type Aftershock Sequence (RETAS) model. The identified best-fitting MOF model was used to execute 1-day forecasts for such aftershocks and follow the probability evolution in time during the sequence. Forecasting was also implemented on the base of a temporal model of aftershock occurrence, different from the modified Omori formula (the ETAS model), which resulted in probability gain (though small) in strong aftershock forecasting for the beginning of the sequence.
Syntactic error modeling and scoring normalization in speech recognition
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex
1991-01-01
The objective was to develop the speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Research was performed in the following areas: (1) syntactic error modeling; (2) score normalization; and (3) phoneme error modeling. The study into the types of errors that a reader makes will provide the basis for creating tests which will approximate the use of the system in the real world. NASA-Johnson will develop this technology into a 'Literacy Tutor' in order to bring innovative concepts to the task of teaching adults to read.
A spatial model of bird abundance as adjusted for detection probability
Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.
2009-01-01
Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
A cellular automata model of traffic flow with variable probability of randomization
NASA Astrophysics Data System (ADS)
Zheng, Wei-Fan; Zhang, Ji-Ye
2015-05-01
Research on the stochastic behavior of traffic flow is important for understanding the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with a variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of the vehicles ahead, and his decision-making process is related to that potential. Compared with the traditional cellular automata model, the proposed model better represents the driver's random decision-making process, which in actual traffic is based on the vehicles and traffic situation in front of him. From the improved model, the fundamental diagram (flow-density relationship) is obtained, and the detailed high-density traffic phenomenon is reproduced through numerical simulation. Project supported by the National Natural Science Foundation of China (Grant Nos. 11172247, 61273021, 61373009, and 61100118).
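As a concrete (much simplified) sketch of the idea, the update rule below is a Nagel-Schreckenberg lane in which the randomization probability decays with the gap to the vehicle ahead, standing in for the interactional potential. The functional form p0·exp(-alpha·gap) and the parameter values are assumptions for illustration, not the potential used in the paper.

```python
import math
import random

def nasch_step(positions, speeds, road_length, v_max=5, p0=0.25, alpha=0.1):
    """One parallel update of a circular Nagel-Schreckenberg lane where the
    randomization probability decreases with the gap to the car ahead."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for k, i in enumerate(order):
        j = order[(k + 1) % n]                      # car ahead on the ring
        gap = (positions[j] - positions[i] - 1) % road_length
        v = min(new_speeds[i] + 1, v_max, gap)      # accelerate, then brake to the gap
        p = p0 * math.exp(-alpha * gap)             # gap-dependent randomization
        if v > 0 and random.random() < p:
            v -= 1                                  # random deceleration
        new_speeds[i] = v
    new_positions = [(positions[i] + new_speeds[i]) % road_length for i in range(n)]
    return new_positions, new_speeds
```

Iterating this step over a range of densities and averaging flow = density × mean speed yields the fundamental diagram the abstract refers to.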
Heglund, P.J.; Nichols, J.D.; Hines, J.E.; Sauer, J.; Fallon, J.; Fallon, F.; Field, Rebecca; Warren, Robert J.; Okarma, Henryk; Sievert, Paul R.
2001-01-01
Point counts are a controversial sampling method for bird populations because the counts are not censuses, and the proportion of birds missed during counting generally is not estimated. We applied a double-observer approach to estimate detection rates of birds from point counts in Maryland, USA, and test whether detection rates differed between point counts conducted in field habitats as opposed to wooded habitats. We conducted 2 analyses. The first analysis was based on 4 clusters of counts (routes) surveyed by a single pair of observers. A series of models was developed with differing assumptions about sources of variation in detection probabilities and fit using program SURVIV. The most appropriate model was selected using Akaike's Information Criterion. The second analysis was based on 13 routes (7 woods and 6 field routes) surveyed by various observers in which average detection rates were estimated by route and compared using a t-test. In both analyses, little evidence existed for variation in detection probabilities in relation to habitat. Double-observer methods provide a reasonable means of estimating detection probabilities and testing critical assumptions needed for analysis of point counts.
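The study fits a dependent double-observer model in program SURVIV; as a simpler, related illustration of how two observers allow detection probability to be estimated at all, here is the classic two-independent-observer (Lincoln-Petersen) estimator. The counts are invented for the example.

```python
def lincoln_petersen(n1, n2, m):
    """Detection probabilities and abundance from two *independent* observers
    (a simpler cousin of the dependent double-observer model fit in SURVIV).
    n1, n2 = birds detected by each observer; m = birds detected by both."""
    p1 = m / n2          # observer 1 saw fraction p1 of what observer 2 found
    p2 = m / n1          # and vice versa
    n_hat = n1 * n2 / m  # Lincoln-Petersen abundance estimate
    return p1, p2, n_hat
```

With, say, 80 birds for observer 1, 60 for observer 2, and 48 seen by both, the estimator gives p1 = 0.8, p2 = 0.6, and an estimated 100 birds present.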
NASA Astrophysics Data System (ADS)
Li, Zhanling; Li, Zhanjie; Li, Chengcheng
2014-05-01
Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most such studies in China concern the humid and semi-humid basins of the south and east, whereas for the inland river basins, which occupy about 35% of the country's area, few studies exist, partly due to limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peaks-over-threshold (POT) method and the Generalized Pareto Distribution (GPD), in which the selection of the threshold and the inherent assumptions for the POT series are elaborated in detail. For comparison, other widely used probability distributions including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, stability features of model parameters, the return level plot and the inherent independence assumption of the POT series, an optimum threshold of 340 m³/s is finally determined for high flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become more uncertain. The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to 2008.
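A minimal sketch of the POT workflow, assuming the threshold has already been chosen (e.g. a value like the study's 340 m³/s): fit a GPD to the threshold excesses and convert the fit to return levels. Method-of-moments fitting is used here to keep the sketch self-contained; the study used maximum likelihood.

```python
import math
from statistics import mean, pvariance

def fit_gpd_moments(excesses):
    """Method-of-moments GPD fit to threshold excesses: with E = sigma/(1-xi)
    and V = sigma^2/((1-xi)^2 (1-2 xi)), solve E^2/V = 1 - 2 xi for xi."""
    m, v = mean(excesses), pvariance(excesses)
    shape = 0.5 * (1.0 - m * m / v)   # xi
    scale = m * (1.0 - shape)         # sigma
    return shape, scale

def return_level(threshold, shape, scale, exc_per_year, T):
    """Flow exceeded on average once every T years under the fitted GPD,
    where exc_per_year is the mean number of threshold exceedances per year."""
    if abs(shape) < 1e-9:             # exponential tail limit
        return threshold + scale * math.log(exc_per_year * T)
    return threshold + (scale / shape) * ((exc_per_year * T) ** shape - 1.0)
```

A declustering step (to enforce the independence assumption discussed in the abstract) would precede the fit in a real analysis.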
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) plays an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses these errors to estimate a probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. This performance shows that Qprob is good at assessing the quality of models of hard targets, and demonstrates that this probability density distribution based method is effective for protein single-model quality assessment and useful for protein structure prediction. Qprob is freely available as a web server at http://calla.rnet.missouri.edu/qprob/.
Greis, Tillman; Helmholz, Kathrin; Schöniger, Hans Matthias; Haarstrick, Andreas
2012-06-01
In this study, a 3D urban groundwater model is presented which serves for the calculation of multispecies contaminant transport in the subsurface on the regional scale. The total model consists of two submodels, a groundwater flow model and a reactive transport model, and is validated against field data. The model equations are solved with the finite element method. A sensitivity analysis is carried out to perform parameter identification for the flow, transport and reaction processes. Building on the latter, stochastic variation of flow, transport and reaction input parameters and Monte Carlo simulation are used to calculate probabilities of pollutant occurrence in the domain. These probabilities can help identify future contamination hot spots and estimate the extent of damage. Application and validation are shown for a contaminated site in Braunschweig (Germany), where a vast plume of chlorinated ethenes pollutes the groundwater. With respect to field application, the modelling methods prove to be feasible and helpful tools for assessing monitored natural attenuation (MNA) and the risk that might be reduced by remediation actions.
Halliwell, J. J.
2009-12-15
In the quantization of simple cosmological models (minisuperspace models) described by the Wheeler-DeWitt equation, an important step is the construction, from the wave function, of a probability distribution answering various questions of physical interest, such as the probability of the system entering a given region of configuration space at any stage in its entire history. A standard but heuristic procedure is to use the flux of (components of) the wave function in a WKB approximation. This gives sensible semiclassical results but lacks an underlying operator formalism. In this paper, we address the issue of constructing probability distributions linked to the Wheeler-DeWitt equation using the decoherent histories approach to quantum theory. The key step is the construction of class operators characterizing questions of physical interest. Taking advantage of a recent decoherent histories analysis of the arrival time problem in nonrelativistic quantum mechanics, we show that the appropriate class operators in quantum cosmology are readily constructed using a complex potential. The class operator for not entering a region of configuration space is given by the S matrix for scattering off a complex potential localized in that region. We thus derive the class operators for entering one or more regions in configuration space. The class operators commute with the Hamiltonian, have a sensible classical limit, and are closely related to an intersection number operator. The definitions of class operators given here handle the key case in which the underlying classical system has multiple crossings of the boundaries of the regions of interest. We show that oscillatory WKB solutions to the Wheeler-DeWitt equation give approximate decoherence of histories, as do superpositions of WKB solutions, as long as the regions of configuration space are sufficiently large. The corresponding probabilities coincide, in a semiclassical approximation, with standard heuristic procedures.
Simulation of reactive nanolaminates using reduced models: II. Normal propagation
Salloum, Maher; Knio, Omar M.
2010-03-15
Transient normal flame propagation in reactive Ni/Al multilayers is analyzed computationally. Two approaches are implemented, based on generalization of earlier methodology developed for axial propagation, and on extension of the model reduction formalism introduced in Part I. In both cases, the formulation accommodates non-uniform layering as well as the presence of inert layers. The equations of motion for the reactive system are integrated using a specially-tailored integration scheme, that combines extended-stability, Runge-Kutta-Chebychev (RKC) integration of diffusion terms with exact treatment of the chemical source term. The detailed and reduced models are first applied to the analysis of self-propagating fronts in uniformly-layered materials. Results indicate that both the front velocities and the ignition threshold are comparable for normal and axial propagation. Attention is then focused on analyzing the effect of a gap composed of inert material on reaction propagation. In particular, the impacts of gap width and thermal conductivity are briefly addressed. Finally, an example is considered illustrating reaction propagation in reactive composites combining regions corresponding to two bilayer widths. This setup is used to analyze the effect of the layering frequency on the velocity of the corresponding reaction fronts. In all cases considered, good agreement is observed between the predictions of the detailed model and the reduced model, which provides further support for adoption of the latter. (author)
Lavé, Thierry; Caruso, Antonello; Parrott, Neil; Walz, Antje
In this review we present ways in which translational PK/PD modeling can address opportunities to enhance probability of success in drug discovery and early development. This is achieved by impacting efficacy and safety-driven attrition rates, through increased focus on the quantitative understanding and modeling of translational PK/PD. Application of the proposed principles early in the discovery and development phases is anticipated to bolster confidence of successfully evaluating proof of mechanism in humans and ultimately improve Phase II success. The present review is centered on the application of predictive modeling and simulation approaches during drug discovery and early development, and more specifically of mechanism-based PK/PD modeling. Case studies are presented, focused on the relevance of M&S contributions to real-world questions and the impact on decision making.
Modeling the probability of arsenic in groundwater in New England as a tool for exposure assessment
Ayotte, J.D.; Nolan, B.T.; Nuckols, J.R.; Cantor, K.P.; Robinson, G.R.; Baris, D.; Hayes, L.; Karagas, M.; Bress, W.; Silverman, D.T.; Lubin, J.H.
2006-01-01
We developed a process-based model to predict the probability of arsenic exceeding 5 μg/L in drinking water wells in New England bedrock aquifers. The model is being used for exposure assessment in an epidemiologic study of bladder cancer. One important study hypothesis that may explain increased bladder cancer risk is elevated concentrations of inorganic arsenic in drinking water. In eastern New England, 20-30% of private wells exceed the arsenic drinking water standard of 10 micrograms per liter. Our predictive model significantly improves the understanding of factors associated with arsenic contamination in New England. Specific rock types, high arsenic concentrations in stream sediments, geochemical factors related to areas of Pleistocene marine inundation and proximity to intrusive granitic plutons, and hydrologic and landscape variables relating to groundwater residence time increase the probability of arsenic occurrence in groundwater. Previous studies suggest that arsenic in bedrock groundwater may be partly from past arsenical pesticide use. Variables representing historic agricultural inputs do not improve the model, indicating that this source does not significantly contribute to current arsenic concentrations. Due to the complexity of the fractured bedrock aquifers in the region, well depth and related variables also are not significant predictors. © 2006 American Chemical Society.
Modeling and estimation of stage-specific daily survival probabilities of nests
Stanley, T.R.
2000-01-01
In studies of avian nesting success, it is often of interest to estimate stage-specific daily survival probabilities of nests. When data can be partitioned by nesting stage (e.g., incubation stage, nestling stage), piecewise application of the Mayfield method or Johnson's method is appropriate. However, when the data contain nests where the transition from one stage to the next occurred during the interval between visits, piecewise approaches are inappropriate. In this paper, I present a model that allows joint estimation of stage-specific daily survival probabilities even when the time of transition between stages is unknown. The model allows interval lengths between visits to nests to vary, and the exact time of failure of nests does not need to be known. The performance of the model at various sample sizes and interval lengths between visits was investigated using Monte Carlo simulations, and it was found that the model performed quite well: bias was small and confidence-interval coverage was at the nominal 95% rate. A SAS program for obtaining maximum likelihood estimates of parameters, and their standard errors, is provided in the Appendix.
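For contrast with the joint model, the piecewise Mayfield baseline the abstract mentions can be written in a few lines; it applies only when every observation interval can be assigned to a single stage. The exposure-day and failure counts used in the test are invented.

```python
def mayfield_dsr(exposure_days, failures):
    """Mayfield estimator of daily survival rate (DSR) for one nesting stage:
    1 minus failures per nest-day of exposure."""
    return 1.0 - failures / exposure_days

def nest_success(stage_dsrs, stage_lengths):
    """Overall nest success: the product of each stage's DSR raised to the
    number of days a nest spends in that stage."""
    out = 1.0
    for dsr, days in zip(stage_dsrs, stage_lengths):
        out *= dsr ** days
    return out
```

With, say, a 14-day incubation stage at DSR 0.98 and a 10-day nestling stage at DSR 0.96, overall success is 0.98¹⁴ × 0.96¹⁰.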
A formalism to generate probability distributions for performance-assessment modeling
Kaplan, P.G.
1990-12-31
A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs.
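As an illustration of turning constraints into a beta distribution, moment matching from a mean and variance on [0, 1] is sketched below. The paper's entropy-maximization formalism is more general (it accommodates four constraints), and the numbers in the test are not from the Yucca Mountain analysis.

```python
def beta_from_mean_var(m, v):
    """Beta(alpha, beta) on [0, 1] matching a given mean m and variance v
    (moment matching; a simple special case of constraint-based fitting)."""
    if not (0.0 < v < m * (1.0 - m)):
        raise ValueError("variance infeasible for a beta distribution on [0, 1]")
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common
```

For a mean of 0.3 and variance of 0.01, this returns alpha = 6 and beta = 14, and one can check that Beta(6, 14) reproduces the requested moments.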
Empirical probability model of cold plasma environment in the Jovian magnetosphere
NASA Astrophysics Data System (ADS)
Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete
2015-04-01
We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although there exist many sophisticated radiation models treating energetic plasma (e.g. JOSE, GIRE, or Salammbo), only a limited number of simple models have been utilized for the cold plasma environment. By extending the existing cold plasma models toward the probability domain, we can predict the extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced by the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) or Bagenal and Delamere 2011 (BD11). These models were scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model toward the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the specific percentile (e.g. 0.5 for the mean model). All the parameters in the model are described in mathematical forms; therefore the needed computational resources are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission, namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions toward the latitudinal direction are also included in the model to support the high-latitude orbit of the JUICE spacecraft.
NASA Astrophysics Data System (ADS)
Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan
2016-05-01
Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, which include associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating approach, a novel way to obtain coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which provides lower and upper bounds for quantifying and transmitting the uncertainty of structural parameters. The structural health monitoring data of Guanhe Bridge, a long-span composite cable-stayed bridge, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy: the overlap ratio index of each modal frequency is over 89%, with small average absolute relative errors, and the CDF of the normal distribution coincides well with the measured frequencies of Guanhe Bridge. The validated multi-scale FE model may be further used in structural damage prognosis and safety prognosis.
Probability Distributions for U.S. Climate Change Using Multi-Model Ensembles
NASA Astrophysics Data System (ADS)
Preston, B. L.
2004-12-01
Projections of future climate change vary considerably among different atmosphere-ocean general circulation models (AOGCMs) and climate forcing scenarios, and thus understanding of future climate change and its consequences is highly dependent upon the range of models and scenarios taken into consideration. To compensate for this limitation, a number of authors have proposed using multi-model ensembles to develop mean or probabilistic projections of future climate conditions. Here, a simple climate model (MAGICC/SCENGEN) was used to project future seasonal and annual changes in coterminous U.S. temperature and precipitation in 2025, 2050, and 2100 using seven AOGCMs (CSIRO, CSM, ECHM4, GFDL, HADCM2, HADCM3, PCM) and the Intergovernmental Panel on Climate Change's six SRES marker scenarios. Model results were used to calculate cumulative probability distributions for temperature and precipitation changes. Different weighting schemes were applied to the AOGCM results reflecting different assumptions about the relative likelihood of different models and forcing scenarios. EQUAL results were unweighted, while SENS and REA results were weighted by climate sensitivity and model performance, respectively. For each of these assumptions, additional results were also generated using weighted forcing scenarios (SCENARIO), for a total of six probability distributions for each season and time period. Average median temperature and precipitation changes in 2100 among the probability distributions were +3.4° C (1.6-6.6° C) and +2.4% (-1.3-10%), respectively. Greater warming was projected for June, July, and August (JJA) relative to other seasons, and modest decreases in precipitation were projected for JJA while modest increases were projected for other seasons. The EQUAL and REA distributions were quite similar, while the SENS distributions were significantly constrained in comparison. Weighting of forcing scenarios reduced the upper 95% confidence limit for temperature and precipitation.
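All of the weighting schemes above reduce to one operation: an empirical cumulative distribution over the ensemble with per-member weights (EQUAL is the uniform-weight case). A minimal sketch with made-up projection values:

```python
def weighted_cdf(values, weights):
    """Cumulative probability distribution from an ensemble of model
    projections with per-model weights, as (value, cumulative prob) pairs."""
    total = sum(weights)
    pairs = sorted(zip(values, weights))
    cum, out = 0.0, []
    for v, w in pairs:
        cum += w / total
        out.append((v, cum))
    return out

def weighted_median(values, weights):
    """Smallest ensemble value whose cumulative weight reaches 0.5."""
    for v, c in weighted_cdf(values, weights):
        if c >= 0.5:
            return v
```

Reweighting by model performance or climate sensitivity simply shifts where the cumulative curve crosses a given percentile, which is how the different schemes above yield different medians and confidence limits.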
Modelling the probability of ionospheric irregularity occurrence over African low latitude region
NASA Astrophysics Data System (ADS)
Mungufeni, Patrick; Jurua, Edward; Bosco Habarulema, John; Anguma Katrini, Simon
2015-06-01
This study presents models of the geomagnetically quiet time probability of occurrence of ionospheric irregularities over the African low latitude region. GNSS-derived ionospheric total electron content data from Mbarara, Uganda (0.60°S, 30.74°E, geographic, 10.22°S, magnetic) and Libreville, Gabon (0.35°N, 9.68°E, geographic, 8.05°S, magnetic) during the period 2001-2012 were used. First, we established the rate of change of total electron content index (ROTI) value associated with background ionospheric irregularity over the region. This was done by analysing GNSS carrier-phases at L-band frequencies L1 and L2 with the aim of identifying cycle slip events associated with ionospheric irregularities. We identified at the two stations a total of 699 cycle slip events. The corresponding median ROTI value at the epochs of the cycle slip events was 0.54 TECU/min. The probability of occurrence of ionospheric irregularities associated with ROTI ≥ 0.5 TECU/min was then modelled by fitting cubic B-splines to the data. The aspects captured by the model included the diurnal, seasonal, and solar flux dependence patterns of the probability of occurrence of ionospheric irregularities. The model developed over Mbarara was validated with data over Mt. Baker, Uganda (0.35°N, 29.90°E, geographic, 9.25°S, magnetic), Kigali, Rwanda (1.94°S, 30.09°E, geographic, 11.62°S, magnetic), and Kampala, Uganda (0.34°N, 32.60°E, geographic, 9.29°S, magnetic). For the periods validated at Mt. Baker (approximately 137.64 km northwest), Kigali (approximately 162.42 km southwest), and Kampala (approximately 237.61 km northeast), the percentages of errors (differences between the observed and the modelled probability of ionospheric irregularity occurrence) smaller than 0.05 were 97.3%, 89.4%, and 81.3%, respectively.
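Computing ROT and ROTI from a TEC series can be sketched as below, assuming 30-s sampling (dt_min = 0.5) and a 5-min window, with the study's 0.5 TECU/min cutoff in mind; exact sampling and window conventions vary between studies.

```python
def rot(tec, dt_min=0.5):
    """Rate of change of TEC (TECU/min) from a TEC time series sampled
    every dt_min minutes."""
    return [(tec[i + 1] - tec[i]) / dt_min for i in range(len(tec) - 1)]

def roti(tec, window=10, dt_min=0.5):
    """ROTI: standard deviation of ROT over a sliding window (10 samples of
    30-s data = 5 min); values >= 0.5 TECU/min would flag an irregularity
    under the threshold adopted in this study."""
    r = rot(tec, dt_min)
    out = []
    for i in range(len(r) - window + 1):
        seg = r[i:i + window]
        m = sum(seg) / window
        out.append((sum((x - m) ** 2 for x in seg) / window) ** 0.5)
    return out
```

A smoothly drifting TEC series yields ROTI near zero, while rapid fluctuations of the kind produced by irregularities drive ROTI up.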
Arnold, Nina R; Bayen, Ute J; Kuhlmann, Beatrice G; Vaterrodt, Bianca
2013-04-01
According to the probability-matching account of source guessing (Spaniol & Bayen, Journal of Experimental Psychology: Learning, Memory, and Cognition 28:631-651, 2002), when people do not remember the source of an item in a source-monitoring task, they match the source-guessing probabilities to the perceived contingencies between sources and item types. In a source-monitoring experiment, half of the items presented by each of two sources were consistent with schematic expectations about this source, whereas the other half of the items were consistent with schematic expectations about the other source. Participants' source schemas were activated either at the time of encoding or just before the source-monitoring test. After test, the participants judged the contingency of the item type and source. Individual parameter estimates of source guessing were obtained via beta-multinomial processing tree modeling (beta-MPT; Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010). We found a significant correlation between the perceived contingency and source guessing, as well as a correlation between the deviation of the guessing bias from the true contingency and source memory when participants did not receive the schema information until retrieval. These findings support the probability-matching account.
Potish, R.A.; Boen, J.; Jones, T.K. Jr.; Levitt, S.H.
1981-07-01
In order to predict radiation-related enteric damage, 92 women were studied who had received identical radiation doses for cancer of the ovary from 1970 through 1977. A logistic model was used to predict the probability of complication as a function of the number of laparotomies, hypertension, and thin physique. The utility and limitations of such probability models are presented.
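The form of such a logistic model is shown below with the study's three predictors; the coefficient and intercept values are illustrative placeholders, not the fitted values from the paper.

```python
import math

def complication_prob(n_laparotomies, hypertension, thin,
                      coefs=(0.5, 0.8, 0.7), intercept=-2.0):
    """P(enteric complication) from a logistic model in the three predictors
    used by the study (hypertension and thin physique as 0/1 indicators).
    Coefficient values are made up for illustration."""
    z = intercept + coefs[0] * n_laparotomies + coefs[1] * hypertension + coefs[2] * thin
    return 1.0 / (1.0 + math.exp(-z))
```

Each additional laparotomy multiplies the odds of complication by exp(coefficient), which is the usual way such fitted models are read clinically.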
Huang, Yangxin; Chen, Jiaqing; Yin, Ping
2017-02-01
It is common practice to analyze longitudinal data, which arise frequently in medical studies, using various mixed-effects models. However, the following issues stand out in longitudinal data analysis: (i) in clinical practice, the profile of each subject's response from a longitudinal study may follow a "broken stick"-like trajectory, indicating multiple phases of increase, decline and/or stability in response; such multiple phases (with changepoints) may be an important indicator to help quantify treatment effects and improve the management of patient care, but estimating changepoints with the various mixed-effects models is a challenge due to the complicated structure of the model formulations; (ii) the assumption of a homogeneous population may be unrealistic, obscuring important features of between-subject and within-subject variation; (iii) the normality assumption for model errors may not always give robust and reliable results, in particular if the data exhibit non-normality; and (iv) the response may be missing and the missingness may be non-ignorable. In the literature, there has been considerable interest in accommodating heterogeneity, non-normality or missingness in such models, but relatively little work concerning all of these features simultaneously. There is a need to fill this gap, as longitudinal data often do have these characteristics. In this article, our objective is to study the simultaneous impact of these data features by developing a Bayesian mixture modeling approach based on Finite Mixture of Changepoint (piecewise) Mixed-Effects (FMCME) models with skew distributions, allowing estimates of both model parameters and class membership probabilities at the population and individual levels. Simulation studies are conducted to assess the performance of the proposed method, and an AIDS clinical data example is analyzed to demonstrate the proposed methodologies and to compare the modeling results of potential mixture models.
Neurophysiological model of the normal and abnormal human pupil
NASA Technical Reports Server (NTRS)
Krenz, W.; Robin, M.; Barez, S.; Stark, L.
1985-01-01
Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select pupil condition (normal or an abnormality), specific site along the neurological pathway (retina, hypothalamus, etc.), drug class input (barbiturate, narcotic, etc.), stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.
Physical models for the normal YORP and diurnal Yarkovsky effects
NASA Astrophysics Data System (ADS)
Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.
2016-06-01
We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.
Modeling secondary microseismic noise by normal mode summation
NASA Astrophysics Data System (ADS)
Gualtieri, Lucia; Stutzmann, Eleonore; Capdeville, Yann; Ardhuin, Fabrice; Schimmel, Martin; Mangenay, Anne; Morelli, Andrea
2013-04-01
Seismic noise is the continuous oscillation of the ground recorded by seismic stations in the period band 5-20 s. In particular, secondary microseisms occur in the period band 5-12 s and are generated in the ocean by the interaction of ocean gravity waves. We present the theory for modeling secondary microseismic noise by normal mode summation. We show that the noise sources can be modeled by vertical forces and how to derive them from a realistic ocean wave model. During the computation we take into account the bathymetry. We show how to compute the bathymetry excitation effect in a realistic Earth model using normal modes, and present a comparison with the Longuet-Higgins (1950) approach. The strongest excitation areas in the oceans depend on the bathymetry and period and are different for each seismic mode. We derive an attenuation model that enables us to fit the vertical component spectra well whatever the station location. We show that the fundamental mode of the Rayleigh wave is the dominant signal in seismic noise and is sufficient to reproduce the main features of the noise spectra amplitude. We also model the horizontal components. There is a discrepancy between real and synthetic spectra on the horizontal components that enables us to estimate the amount of Love waves, for which a different source mechanism is needed. Finally, we investigate noise generated in all the oceans around Africa and show that most of the noise recorded in Algeria (TAM station) is generated in the Northern Atlantic, and that there is a seasonal variability in the contribution of each ocean and sea. Moreover, we also show that the Mediterranean Sea contributes significantly to the short-period noise in winter.
Normality index of ventricular contraction based on a statistical model from FADS.
Jiménez-Ángeles, Luis; Valdés-Cristerna, Raquel; Vallejo, Enrique; Bialostozky, David; Medina-Bañuelos, Verónica
2013-01-01
Radionuclide-based imaging is an alternative to evaluate ventricular function and synchrony and may be used as a tool for the identification of patients that could benefit from cardiac resynchronization therapy (CRT). In a previous work, we used Factor Analysis of Dynamic Structures (FADS) to analyze the contribution and spatial distribution of the 3 most significant factors (3-MSF) present in a dynamic series of equilibrium radionuclide angiography images. In this work, a probability density function model of the 3-MSF extracted from FADS for a control group is presented; also an index, based on the likelihood between the control group's contraction model and a sample of normal subjects is proposed. This normality index was compared with those computed for two cardiopathic populations, satisfying the clinical criteria to be considered as candidates for a CRT. The proposed normality index provides a measure, consistent with the phase analysis currently used in clinical environment, sensitive enough to show contraction differences between normal and abnormal groups, which suggests that it can be related to the degree of severity in the ventricular contraction dyssynchrony, and therefore shows promise as a follow-up procedure for patients under CRT.
Duffy, Stephen
2013-09-09
This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software algorithm currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.
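Stochastic failure models for brittle materials such as nuclear graphite are commonly built on Weibull statistics. The following is a minimal sketch of a two-parameter Weibull failure probability, not the project's actual model; the characteristic strength sigma_0 and Weibull modulus m are hypothetical placeholders:

```python
import math

def weibull_failure_probability(stress, sigma_0, m):
    """Two-parameter Weibull probability of failure at a given tensile stress.

    sigma_0: characteristic strength (the stress giving 63.2% failure probability);
    m: Weibull modulus (controls scatter of strengths). Both are hypothetical here.
    """
    if stress <= 0.0:
        return 0.0
    return 1.0 - math.exp(-((stress / sigma_0) ** m))
```

At stress equal to sigma_0 the failure probability is 1 - 1/e, about 0.632, regardless of m; larger m concentrates failures near sigma_0.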
NASA Astrophysics Data System (ADS)
Li, Qi-Lang; Wong, S. C.; Min, Jie; Tian, Shuo; Wang, Bing-Hong
2016-08-01
This study examines the cellular automata traffic flow model, which considers the heterogeneity of vehicle acceleration and the delay probability of vehicles. Computer simulations are used to identify three typical phases in the model: free-flow, synchronized flow, and wide moving traffic jam. In the synchronized flow region of the fundamental diagram, the low and high velocity vehicles compete with each other and play an important role in the evolution of the system. The analysis shows that there are two types of bistable phases. However, in the original Nagel and Schreckenberg cellular automata traffic model, there are only two kinds of traffic conditions, namely, free-flow and traffic jams. The synchronized flow phase and bistable phase have not been found.
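The original Nagel-Schreckenberg update rule mentioned in the abstract can be sketched in a few lines. This is the classic homogeneous model with a single maximum velocity and slowdown probability, not the heterogeneous-acceleration variant studied in the paper; road length and parameters are illustrative:

```python
import random

def nasch_step(xs, vs, road_len, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg cellular automaton.

    xs: sorted car positions on a ring of length road_len; vs: car velocities.
    Rules per car: accelerate, brake to the gap ahead, random slowdown, move.
    """
    n = len(xs)
    new_vs = []
    for i in range(n):
        gap = (xs[(i + 1) % n] - xs[i] - 1) % road_len  # empty cells ahead
        v = min(vs[i] + 1, v_max)                       # acceleration
        v = min(v, gap)                                 # braking (no collision)
        if v > 0 and rng.random() < p_slow:             # random slowdown
            v -= 1
        new_vs.append(v)
    new_xs = [(x + v) % road_len for x, v in zip(xs, new_vs)]
    order = sorted(range(n), key=lambda i: new_xs[i])   # keep positions sorted
    return [new_xs[i] for i in order], [new_vs[i] for i in order]
```

Because braking limits each car to the gap ahead, cars never overtake or collide, which is why this minimal model shows only free flow and jams, as the abstract notes.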
NASA Astrophysics Data System (ADS)
Kondoh, Hiroshi; Matsushita, Mitsugu
1986-10-01
A diffusion-limited aggregation (DLA) model with anisotropic sticking probability Ps is computer-simulated on a two-dimensional square lattice. The cluster grows from a seed particle at the origin in the positive y area with an absorption-type boundary along the x-axis. The cluster is found to grow anisotropically as R_∥ ~ N^(ν_∥) and R_⊥ ~ N^(ν_⊥), where R_⊥ and R_∥ are the radii of gyration of the cluster along the x- and y-axes, respectively, and N is the number of particles constituting the cluster. The two exponents are shown to become asymptotically ν_∥ = 2/3 and ν_⊥ = 1/3 whenever the sticking anisotropy exists. It is also found that the present model is fairly consistent with Hack's law of river networks, suggesting that it is a good candidate for a prototype model of the evolution of river networks.
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is higher for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
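The negative-binomial extension mentioned above can be illustrated through the void probability, i.e. the probability that a cell contains no galaxies. This is a minimal sketch assuming the standard negative-binomial count-in-cells form with mean count nbar and a clustering parameter g (g → 0 recovers Poisson); it is not the authors' full phenomenological model:

```python
import math

def void_prob_poisson(nbar):
    """Poisson void probability for a cell with mean count nbar."""
    return math.exp(-nbar)

def void_prob_nbd(nbar, g):
    """Negative-binomial void probability; g > 0 measures clustering.

    As g -> 0 this recovers the Poisson value exp(-nbar).
    """
    return (1.0 + nbar * g) ** (-1.0 / g)
```

Clustering raises the void probability at fixed mean count, which is the qualitative effect such models are built to capture.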
Transition probability estimates for non-Markov multi-state models.
Titman, Andrew C
2015-12-01
Non-parametric estimation of the transition probabilities in multi-state models is considered for non-Markov processes. Firstly, a generalization of the estimator of Pepe et al., (1991) (Statistics in Medicine) is given for a class of progressive multi-state models based on the difference between Kaplan-Meier estimators. Secondly, a general estimator for progressive or non-progressive models is proposed based upon constructed univariate survival or competing risks processes which retain the Markov property. The properties of the estimators and their associated standard errors are investigated through simulation. The estimators are demonstrated on datasets relating to survival and recurrence in patients with colon cancer and prothrombin levels in liver cirrhosis patients.
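The Pepe et al. estimator discussed above builds transition probabilities from differences of Kaplan-Meier curves (e.g. the probability of occupying an intermediate state is the entry-time survival curve minus the exit-time survival curve). Below is a minimal self-contained Kaplan-Meier estimator for right-censored data, a building block only, not Titman's generalization:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates at each distinct event time.

    times: observation times; events: 1 for an observed event, 0 for censoring.
    Returns a list of (time, survival probability) pairs.
    """
    event_times = sorted({t for t, e in zip(times, events) if e})
    s, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for u in times if u >= t)
        deaths = sum(1 for u, e in zip(times, events) if u == t and e)
        s *= 1.0 - deaths / at_risk  # product-limit update
        curve.append((t, s))
    return curve
```

For three subjects with events at times 1, 2, 3 and no censoring, the curve steps through 2/3, 1/3, 0.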
A new probability distribution model of turbulent irradiance based on Born perturbation theory
NASA Astrophysics Data System (ADS)
Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo
2010-10-01
The subject of the PDF (probability density function) of irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distributions) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the viewpoint that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. The new model is thus considered to reflect Born perturbation theory exactly. Simulated results prove the accuracy of this new model.
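One standard numerical check on the negative-exponential model mentioned above: for negative-exponential irradiance the scintillation index, defined as ⟨I²⟩/⟨I⟩² − 1, equals exactly 1. A minimal sketch (generic estimator, not tied to the paper's simulations):

```python
import random

def scintillation_index(samples):
    """Scintillation index <I^2>/<I>^2 - 1 of a list of irradiance samples."""
    n = len(samples)
    mean = sum(samples) / n
    mean_sq = sum(x * x for x in samples) / n
    return mean_sq / (mean * mean) - 1.0

# For negative-exponential (i.e. exponentially distributed) irradiance
# the index should come out close to 1:
random.seed(7)
si = scintillation_index([random.expovariate(1.0) for _ in range(200000)])
```

Indices well below 1 indicate weak-fluctuation regimes better served by Rice-Nakagami-type models.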
Modeling normal shock velocity curvature relations for heterogeneous explosives
NASA Astrophysics Data System (ADS)
Yoo, Sunhee; Crochet, Michael; Pemberton, Steven
2017-01-01
The theory of Detonation Shock Dynamics (DSD) is, in part, an asymptotic method to model a functional form of the relation between the shock normal velocity Dn, its time rate of change and the shock curvature κ. In addition, shock polar analysis provides a relation between the shock angle θ and the detonation velocity Dn that is dependent on the equations of state (EOS) of two adjacent materials. For the axial detonation of an explosive material confined by a cylinder, the shock angle is defined as the angle between the shock normal and the normal to the cylinder liner, located at the intersection of the shock front and the cylinder inner wall. Therefore, given an ideal explosive such as PBX-9501 with the two functional models determined, a unique, smooth detonation front shape ψ can be constructed that approximates the steady-state detonation shock front of the explosive. However, experimental measurements of the Dn(κ) relation for heterogeneous explosives such as PBXN-111 [D. K. Kennedy, 2000] are challenging due to the non-smoothness and asymmetry usually observed in the experimental streak records of explosion fronts. Out of many possibilities, the asymmetric character may be attributed to the heterogeneity of the explosives; here, material heterogeneity refers to compositions with multiple components and having a grain morphology that can be modeled statistically. Therefore, in extending the formulation of DSD to modern novel explosives, we pose two questions: (1) is there any simple hydrodynamic model that can simulate such an asymmetric shock evolution, and (2) what statistics can be derived for the asymmetry using simulations with defined structural heterogeneity in the unreacted explosive? Saenz, Taylor and Stewart [1] studied constitutive models for derivation of the Dn(κ) relation for porous homogeneous explosives and carried out simulations in a spherical coordinate frame. In this paper we extend their model to account for heterogeneity and present shock evolutions in heterogeneous
NASA Astrophysics Data System (ADS)
James, P.
2011-12-01
With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment, and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well-known methods rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance
The coupon collector urn model with unequal probabilities in ecology and evolution.
Zoroa, N; Lesigne, E; Fernández-Sáez, M J; Zoroa, P; Casas, J
2017-02-01
The sequential sampling of populations with unequal probabilities and with replacement in a closed population is a recurrent problem in ecology and evolution. Examples range from biodiversity sampling, epidemiology to the estimation of signal repertoire in animal communication. Many of these questions can be reformulated as urn problems, often as special cases of the coupon collector problem, most simply expressed as the number of coupons that must be collected to have a complete set. We aimed to apply the coupon collector model in a comprehensive manner to one example-hosts (balls) being searched (draws) and parasitized (ball colour change) by parasitic wasps-to evaluate the influence of differences in sampling probabilities between items on collection speed. Based on the model of a complete multinomial process over time, we define the distribution, distribution function, expectation and variance of the number of hosts parasitized after a given time, as well as the inverse problem, estimating the sampling effort. We develop the relationship between the risk distribution on the set of hosts and the speed of parasitization and propose a more elegant proof of the weak stochastic dominance among speeds of parasitization, using the concept of Schur convexity and the 'Robin Hood transfer' numerical operation. Numerical examples are provided and a conjecture about strong dominance-an ordering characteristic of random variables-is proposed. The speed at which new items are discovered is a function of the entire shape of the sampling probability distribution. The sole comparison of values of variances is not sufficient to compare speeds associated with different distributions, as generally assumed in ecological studies.
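The expected number of draws needed to "collect" every item at least once, for unequal sampling probabilities, has a classical closed form by inclusion-exclusion over subsets. A minimal sketch (the sum is exponential in the number of items, so it is practical only for small sets; it is not the authors' full multinomial-process machinery):

```python
from itertools import combinations

def expected_draws(p):
    """Expected draws until every item is sampled at least once.

    p: per-draw sampling probabilities (must sum to 1).
    Inclusion-exclusion: E[T] = sum over non-empty subsets S of (-1)^(|S|+1) / P(S).
    """
    n = len(p)
    total = 0.0
    for r in range(1, n + 1):
        sign = -1.0 if r % 2 == 0 else 1.0
        for subset in combinations(p, r):
            total += sign / sum(subset)
    return total
```

For three equally likely items the formula reproduces the textbook value 3·(1 + 1/2 + 1/3) = 5.5, and making probabilities unequal slows collection, consistent with the paper's stochastic-dominance result.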
Predicting Mortality in Low-Income Country ICUs: The Rwanda Mortality Probability Model (R-MPM)
Kiviri, Willy; Fowler, Robert A.; Mueller, Ariel; Novack, Victor; Banner-Goodspeed, Valerie M.; Weinkauf, Julia L.; Talmor, Daniel S.; Twagirumugabe, Theogene
2016-01-01
Introduction: Intensive Care Unit (ICU) risk prediction models are used to compare outcomes for quality improvement initiatives, benchmarking, and research. While such models provide robust tools in high-income countries, an ICU risk prediction model has not been validated in a low-income country where ICU population characteristics are different from those in high-income countries, and where laboratory-based patient data are often unavailable. We sought to validate the Mortality Probability Admission Model, version III (MPM0-III) in two public ICUs in Rwanda and to develop a new Rwanda Mortality Probability Model (R-MPM) for use in low-income countries. Methods: We prospectively collected data on all adult patients admitted to Rwanda’s two public ICUs between August 19, 2013 and October 6, 2014. We described demographic and presenting characteristics and outcomes. We assessed the discrimination and calibration of the MPM0-III model. Using stepwise selection, we developed a new logistic model for risk prediction, the R-MPM, and used bootstrapping techniques to test for optimism in the model. Results: Among 427 consecutive adults, the median age was 34 (IQR 25–47) years and mortality was 48.7%. Mechanical ventilation was initiated for 85.3%, and 41.9% received vasopressors. The MPM0-III predicted mortality with area under the receiver operating characteristic curve of 0.72 and Hosmer-Lemeshow chi-square statistic p = 0.024. We developed a new model using five variables: age, suspected or confirmed infection within 24 hours of ICU admission, hypotension or shock as a reason for ICU admission, Glasgow Coma Scale score at ICU admission, and heart rate at ICU admission. Using these five variables, the R-MPM predicted outcomes with area under the ROC curve of 0.81 with 95% confidence interval of (0.77, 0.86), and Hosmer-Lemeshow chi-square statistic p = 0.154. Conclusions: The MPM0-III has modest ability to predict mortality in a population of Rwandan ICU patients. The R
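The R-MPM is a logistic model in the five admission variables listed above. The sketch below shows only the functional form of such a model; the coefficients are illustrative placeholders, NOT the published R-MPM estimates:

```python
import math

def rmpm_style_risk(age, infection, shock, gcs, heart_rate,
                    coefs=(-1.0, 0.02, 0.8, 0.9, -0.15, 0.01)):
    """Logistic mortality risk from five admission variables.

    coefs = (intercept, b_age, b_infection, b_shock, b_gcs, b_heart_rate)
    are placeholder values, not the published R-MPM coefficients.
    infection and shock are 0/1 indicators; gcs is the Glasgow Coma Scale score.
    """
    b0, b_age, b_inf, b_shock, b_gcs, b_hr = coefs
    logit = (b0 + b_age * age + b_inf * infection + b_shock * shock
             + b_gcs * gcs + b_hr * heart_rate)
    return 1.0 / (1.0 + math.exp(-logit))
```

With these placeholder coefficients a higher Glasgow Coma Scale score (less impaired consciousness) lowers the predicted risk, matching the direction one would expect clinically.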
SAR amplitude probability density function estimation based on a generalized Gaussian model.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B
2006-06-01
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena.
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities
Pharmacokinetic modeling of ascorbate diffusion through normal and tumor tissue.
Kuiper, Caroline; Vissers, Margreet C M; Hicks, Kevin O
2014-12-01
Ascorbate is delivered to cells via the vasculature, but its ability to penetrate into tissues remote from blood vessels is unknown. This is particularly relevant to solid tumors, which often contain regions with dysfunctional vasculature, with impaired oxygen and nutrient delivery, resulting in upregulation of the hypoxic response and also the likely depletion of essential plasma-derived biomolecules, such as ascorbate. In this study, we have utilized a well-established multicell-layered, three-dimensional pharmacokinetic model to measure ascorbate diffusion and transport parameters through dense tissue in vitro. Ascorbate was found to penetrate the tissue at a slightly lower rate than mannitol and to travel via the paracellular route. Uptake parameters into the cells were also determined. These data were fitted to the diffusion model, and simulations of ascorbate pharmacokinetics in normal tissue and in hypoxic tumor tissue were performed with varying input concentrations, ranging from normal dietary plasma levels (10-100 μM) to pharmacological levels (>1 mM) as seen with intravenous infusion. The data and simulations demonstrate heterogeneous distribution of ascorbate in tumor tissue at physiological blood levels and provide insight into the range of plasma ascorbate concentrations and exposure times needed to saturate all regions of a tumor. The predictions suggest that supraphysiological plasma ascorbate concentrations (>100 μM) are required to achieve effective delivery of ascorbate to poorly vascularized tumor tissue.
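Penetration of a solute from a vessel into avascular tissue, as measured above, can be mimicked with a one-dimensional explicit finite-difference diffusion sketch. Grid, diffusivity and boundary setup here are illustrative assumptions, not the paper's fitted multicell-layer parameters, and cellular uptake is omitted:

```python
def diffuse_1d(c_source, D, dx, dt, n_cells, steps):
    """Explicit 1D diffusion: fixed concentration at x=0 (the "vessel"),
    zero-flux condition at the far end of the tissue.

    Stability of the explicit scheme requires D*dt/dx**2 <= 0.5.
    """
    r = D * dt / dx ** 2
    assert r <= 0.5, "unstable time step"
    c = [0.0] * n_cells
    for _ in range(steps):
        new_c = c[:]
        for i in range(n_cells):
            left = c_source if i == 0 else c[i - 1]           # vessel boundary
            right = c[i] if i == n_cells - 1 else c[i + 1]    # zero-flux mirror
            new_c[i] = c[i] + r * (left - 2.0 * c[i] + right)
        c = new_c
    return c
```

The profile decreases monotonically with distance from the vessel and, given enough time, relaxes toward the source concentration, the qualitative behavior the paper's simulations explore for ascorbate.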
Rooney, Katherine E; Wallace, Lane J
2015-11-01
Dopamine in the striatum signals the saliency of current environmental input and is involved in learned formation of appropriate responses. The regular baseline-firing rate of dopaminergic neurons suggests that baseline dopamine is essential for proper brain function. The first goal of the study was to estimate the likelihood of full exocytotic dopamine release associated with each firing event under baseline conditions. A computer model of extracellular space associated with a single varicosity was developed using the program MCell to estimate kinetics of extracellular dopamine. Because the literature provides multiple kinetic values for dopamine uptake depending on the system tested, simulations were run using different kinetic parameters. With all sets of kinetic parameters evaluated, at most, 25% of a single vesicle per varicosity would need to be released per firing event to maintain a 5-10 nM extracellular dopamine concentration, the level reported by multiple microdialysis experiments. The second goal was to estimate the fraction of total amount of stored dopamine released during a highly stimulated condition. This was done using the same model system to simulate published measurements of extracellular dopamine following electrical stimulation of striatal slices in vitro. The results suggest the amount of dopamine release induced by a single electrical stimulation may be as large as the contents of two vesicles per varicosity. We conclude that dopamine release probability at any particular varicosity is low. This suggests that factors capable of increasing release probability could have a powerful effect on sculpting dopamine signals.
Fakir, Hatim; Hlatky, Lynn; Li, Huamin; Sachs, Rainer
2013-12-15
Purpose: Optimal treatment planning for fractionated external beam radiation therapy requires inputs from radiobiology based on recent thinking about the “five Rs” (repopulation, radiosensitivity, reoxygenation, redistribution, and repair). The need is especially acute for the newer, often individualized, protocols made feasible by progress in image guided radiation therapy and dose conformity. Current stochastic tumor control probability (TCP) models incorporating tumor repopulation effects consider “stem-like cancer cells” (SLCC) to be independent, but the authors here propose that SLCC-SLCC interactions may be significant. The authors present a new stochastic TCP model for repopulating SLCC interacting within microenvironmental niches. Our approach is meant mainly for comparing similar protocols. It aims at practical generalizations of previous mathematical models. Methods: The authors consider protocols with complete sublethal damage repair between fractions. The authors use customized open-source software and recent mathematical approaches from stochastic process theory for calculating the time-dependent SLCC number and thereby estimating SLCC eradication probabilities. As specific numerical examples, the authors consider predicted TCP results for a 2 Gy per fraction, 60 Gy protocol compared to 64 Gy protocols involving early or late boosts in a limited volume to some fractions. Results: In sample calculations with linear quadratic parameters α = 0.3 per Gy, α/β = 10 Gy, boosting is predicted to raise TCP from a dismal 14.5% observed in some older protocols for advanced NSCLC to above 70%. This prediction is robust as regards: (a) the assumed values of parameters other than α and (b) the choice of models for intraniche SLCC-SLCC interactions. However, α = 0.03 per Gy leads to a prediction of almost no improvement when boosting. Conclusions: The predicted efficacy of moderate boosts depends sensitively on α. Presumably, the larger values of α are
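The Poissonian TCP backbone underlying such models, without the SLCC-interaction machinery the paper adds, can be written in a few lines. This sketch assumes complete sublethal damage repair between fractions, no repopulation, and a hypothetical clonogen number; α = 0.3 per Gy and α/β = 10 Gy follow the abstract's example:

```python
import math

def tcp_poisson(n_frac, dose_per_frac, alpha, beta, n_clonogens):
    """Poisson TCP with linear-quadratic cell kill and full inter-fraction repair.

    TCP = P(zero surviving clonogens) = exp(-N0 * SF1**n), where SF1 is the
    per-fraction linear-quadratic surviving fraction.
    """
    sf_per_fraction = math.exp(-(alpha * dose_per_frac + beta * dose_per_frac ** 2))
    expected_survivors = n_clonogens * sf_per_fraction ** n_frac
    return math.exp(-expected_survivors)
```

The strong sensitivity to α noted in the conclusions is visible directly: dropping α by an order of magnitude at fixed fractionation collapses the predicted TCP.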
Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
NASA Astrophysics Data System (ADS)
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate normal inverse Gaussian parameters for JPY/CHF (by maximum likelihood, owing to a computational problem), yet CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible the other way around. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models
Stein, Richard R.; Marks, Debora S.; Sander, Chris
2015-01-01
Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene–gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866
NASA Astrophysics Data System (ADS)
Bakosi, J.; Franzese, P.; Boybeyi, Z.
2007-11-01
Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
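In its simplest homogeneous form, a Langevin velocity model of the kind used above reduces to an Ornstein-Uhlenbeck process for each velocity component. A minimal Euler-Maruyama sketch with illustrative constants, not the Haworth-Pope generalized-Langevin coefficients or the elliptic-relaxation wall treatment:

```python
import math
import random

def ou_path(steps, dt, tau, sigma, rng):
    """Euler-Maruyama integration of du = -(u/tau) dt + sqrt(2 sigma^2 / tau) dW.

    tau is the velocity decorrelation time scale; the stationary
    distribution of u is N(0, sigma^2).
    """
    u, path = 0.0, []
    noise_amp = math.sqrt(2.0 * sigma * sigma / tau)
    for _ in range(steps):
        u += -(u / tau) * dt + noise_amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(u)
    return path
```

A long integration should reproduce the stationary variance sigma^2, which is the basic consistency check before layering on wall effects and micromixing.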
NASA Technical Reports Server (NTRS)
Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.
1996-01-01
Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean sub-model in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing sub-models were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
Time series modeling of pathogen-specific disease probabilities with subsampled data.
Fisher, Leigh; Wakefield, Jon; Bauer, Cici; Self, Steve
2017-03-01
Many diseases arise due to exposure to one of multiple possible pathogens. We consider the situation in which disease counts are available over time from a study region, along with a measure of clinical disease severity, for example, mild or severe. In addition, we suppose a subset of the cases are lab tested in order to determine the pathogen responsible for disease. In such a context, we focus interest on modeling the probabilities of disease incidence given pathogen type. The time course of these probabilities is of great interest, as is the association with time-varying covariates such as meteorological variables. In this setup, a natural Bayesian approach would be based on imputation of the unsampled pathogen information using Markov chain Monte Carlo, but this is computationally challenging. We describe a practical approach to inference that is easy to implement. We use an empirical Bayes procedure in a first step to estimate summary statistics. We then treat these summary statistics as the observed data and develop a Bayesian generalized additive model. We analyze data on hand, foot, and mouth disease (HFMD) in China, in which there are two pathogens of primary interest, enterovirus 71 (EV71) and Coxsackie A16 (CA16). We find that both EV71 and CA16 are associated with temperature, relative humidity, and wind speed, with reasonably similar functional forms for both pathogens. The important issue of confounding by time is modeled using a penalized B-spline model with a random effects representation. The level of smoothing is addressed by a careful choice of the prior on the tuning variance.
Estimating the probability for a protein to have a new fold: A statistical computational model
Portugaly, Elon; Linial, Michal
2000-01-01
Structural genomics aims to solve a large number of protein structures that represent the protein space. Currently an exhaustive solution for all structures seems prohibitively expensive, so the challenge is to define a relatively small set of proteins with new, currently unknown folds. This paper presents a method that assigns each protein with a probability of having an unsolved fold. The method makes extensive use of protomap, a sequence-based classification, and scop, a structure-based classification. According to protomap, the protein space encodes the relationship among proteins as a graph whose vertices correspond to 13,354 clusters of proteins. A representative fold for a cluster with at least one solved protein is determined after superposition of all scop (release 1.37) folds onto protomap clusters. Distances within the protomap graph are computed from each representative fold to the neighboring folds. The distribution of these distances is used to create a statistical model for distances among those folds that are already known and those that have yet to be discovered. The distribution of distances for solved/unsolved proteins is significantly different. This difference makes it possible to use Bayes' rule to derive a statistical estimate that any protein has a yet undetermined fold. Proteins that score the highest probability to represent a new fold constitute the target list for structural determination. Our predicted probabilities for unsolved proteins correlate very well with the proportion of new folds among recently solved structures (new scop 1.39 records) that are disjoint from our original training set. PMID:10792051
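The Bayes' rule step described above can be illustrated with made-up numbers: given the class-conditional probabilities of observing a large graph distance for new versus solved folds, and a prior fraction of new folds, the posterior probability that a cluster represents a new fold follows directly. All three inputs below are hypothetical, for illustration only.

```python
# Hedged illustration of the Bayes-rule estimate of P(new fold | distance data),
# with assumed (not published) probabilities.
p_new = 0.3              # assumed prior fraction of clusters with new folds
p_d_given_new = 0.8      # assumed P(observed distances | new fold)
p_d_given_solved = 0.2   # assumed P(observed distances | known fold)

posterior = (p_d_given_new * p_new) / (
    p_d_given_new * p_new + p_d_given_solved * (1 - p_new)
)
print(round(posterior, 3))  # → 0.632
```

Clusters are then ranked by this posterior to build the target list for structure determination.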
Schindler, Dirk; Grebhan, Karin; Albrecht, Axel; Schönborn, Jochen
2009-11-01
The wind damage probability (P(DAM)) in the forests in the federal state of Baden-Wuerttemberg (Southwestern Germany) was calculated using weights of evidence (WofE) methodology and a logistic regression model (LRM) after the winter storm 'Lothar' in December 1999. A geographic information system (GIS) was used for the area-wide spatial prediction and mapping of P(DAM). The combination of the six evidential themes forest type, soil type, geology, soil moisture, soil acidification, and the 'Lothar' maximum gust field predicted wind damage best and was used to map P(DAM) in a 50 × 50 m resolution grid. GIS software was utilised to produce probability maps, which allowed the identification of areas of low, moderate, and high P(DAM) across the study area. The highest P(DAM) values were calculated for coniferous forest growing on acidic, fresh to moist soils on bunter sandstone formations, provided that the 'Lothar' maximum gust speed exceeded 35 m s⁻¹ in the areas in question. One of the most significant benefits associated with the results of this study is that, for the first time, there is a GIS-based area-wide quantification of P(DAM) in the forests in Southwestern Germany. In combination with the experience and expert knowledge of local foresters, the probability maps produced can be used as an important tool for decision support with respect to future silvicultural activities aimed at reducing wind damage. One limitation of the P(DAM) predictions is that they are based on only one major storm event. At the moment it is not possible to relate storm event intensity to the amount of wind damage in forests due to the lack of comprehensive long-term tree and stand damage data across the study area.
Analytical expression for the exit probability of the q-voter model in one dimension
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Galam, Serge
2015-07-01
We present in this paper an approximation that is able to give an analytical expression for the exit probability of the q-voter model in one dimension. This expression gives a better fit for the more recent data about simulations in large networks [A. M. Timpanaro and C. P. C. do Prado, Phys. Rev. E 89, 052808 (2014), 10.1103/PhysRevE.89.052808] and as such departs from the expression ρ^q/(ρ^q + (1-ρ)^q) found in papers that investigated small networks only [R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007; P. Przybyła et al., Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117; F. Slanina et al., Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006]. The approximation consists in assuming a large separation between the time scales at which active groups of agents convince inactive ones and the time taken in the competition between active groups. Some interesting findings are that for q = 2 we still have ρ^2/(ρ^2 + (1-ρ)^2) as the exit probability, and for q > 2 we can obtain a lower-order approximation of the form ρ^s/(ρ^s + (1-ρ)^s) with s varying from q for low values of q to q - 1/2 for large values of q. As such, this work can also be seen as a deduction for why the exit probability ρ^q/(ρ^q + (1-ρ)^q) gives a good fit, without relying on mean-field arguments or on the assumption that only the first step is nondeterministic, as q and q - 1/2 will give very similar results when q → ∞.
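The exit probability E(ρ) = ρ^q/(ρ^q + (1-ρ)^q) discussed in this abstract is easy to evaluate numerically; a quick check confirms its symmetry properties (E(1/2) = 1/2 and E(ρ) + E(1-ρ) = 1).

```python
# Exit probability of the q-voter model in the functional form quoted above:
# E(rho) = rho^q / (rho^q + (1 - rho)^q).
def exit_probability(rho: float, q: float) -> float:
    return rho**q / (rho**q + (1 - rho) ** q)

print(exit_probability(0.5, 2))                              # symmetric point
print(exit_probability(0.7, 2) + exit_probability(0.3, 2))   # sums to 1
```

Increasing q sharpens the curve toward a step function at ρ = 1/2, which is why q and q - 1/2 become indistinguishable as q → ∞.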
The international normalized ratio and uncertainty. Validation of a probabilistic model.
Critchfield, G C; Bennett, S T
1994-07-01
The motivation behind the creation of the International Normalized Ratio (INR) was to improve interlaboratory comparison for patients on anticoagulation therapy. In principle, a laboratory that reports the prothrombin time (PT) as an INR can standardize its PT measurements to an international reference thromboplastin. Using probability theory, the authors derived the equation for the probability distribution of the INR based on the PT, the International Sensitivity Index (ISI), and the geometric mean PT of the reference population. With Monte Carlo and numeric integration techniques, the model is validated on data from three different laboratories. The model allows computation of confidence intervals for the INR as a function of PT, ISI, and reference mean. The probabilistic model illustrates that confidence in INR measurements degrades for higher INR values. This occurs primarily as a result of amplification of between-run measurement errors in the PT, which is inherent in the mathematical transformation from the PT to the INR. The probabilistic model can be used by any laboratory to study the reliability of its own INR for any measured PT. This framework provides better insight into the problems of monitoring oral anticoagulation.
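The transformation at the heart of this analysis is INR = (PT / PT_mean)^ISI, with PT_mean the geometric mean prothrombin time of the reference population; Monte Carlo propagation of between-run PT error then gives an interval for the INR. The ISI, reference mean, measured PT, and PT standard deviation below are assumed example values, not data from the paper.

```python
import numpy as np

# Monte Carlo sketch of INR uncertainty: propagate assumed between-run PT
# error through INR = (PT / PT_mean)^ISI and read off a 95% interval.
rng = np.random.default_rng(2)
isi, pt_mean = 1.2, 12.0     # assumed ISI and reference geometric mean PT (s)
pt, pt_sd = 30.0, 0.8        # measured PT and assumed between-run SD (s)

samples = (rng.normal(pt, pt_sd, 100_000) / pt_mean) ** isi
lo, hi = np.percentile(samples, [2.5, 97.5])
print((pt / pt_mean) ** isi, lo, hi)   # point INR and its 95% interval
```

Because the PT error is raised to the power ISI, the interval widens at higher INR values, matching the paper's observation that confidence in the INR degrades as it increases.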
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since lithium-ion batteries are complex electrochemical devices that are assembled in large numbers into packs, their monitoring and safety are key issues for the applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an n-order RC equivalent circuit model is employed, combined with an electrochemical model, to obtain more accurate voltage prediction results. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on the commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
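A first-order RC equivalent-circuit cell model of the kind used for SOC estimation can be sketched in a few lines: terminal voltage = OCV(SOC) - i·R0 - V_rc, with coulomb counting for SOC and the RC branch relaxing through R1·C1. All parameter values and the linear OCV curve below are illustrative assumptions, not fitted values from the paper.

```python
# First-order RC equivalent-circuit sketch (assumed illustrative parameters).
R0, R1, C1, dt = 0.05, 0.03, 1000.0, 1.0   # ohm, ohm, farad, second
capacity_As = 2.0 * 3600.0                  # assumed 2 Ah cell, in ampere-seconds

def ocv(soc: float) -> float:
    return 3.0 + 1.2 * soc                  # crude linear open-circuit voltage

soc, v_rc = 1.0, 0.0
i = 2.0                                     # constant discharge current (A)
for _ in range(600):                        # 10 minutes at 1 C
    soc -= i * dt / capacity_As             # coulomb counting
    v_rc += dt * (i / C1 - v_rc / (R1 * C1))  # RC branch, explicit Euler
v_term = ocv(soc) - i * R0 - v_rc
print(round(soc, 3), round(v_term, 3))
```

Higher-order models add more RC branches for better voltage prediction; an adaptive estimator corrects this open-loop state using measured terminal voltage.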
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km3 that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition-probability Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.
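The transition-probability Markov chain idea behind the facies model can be sketched in one dimension: a transition matrix governs the facies sequence along a borehole, and its stationary distribution gives the expected facies proportions. The 2×2 matrix below is an assumed toy example (the study uses four facies in three dimensions).

```python
import numpy as np

# Toy 1-D transition-probability Markov chain for fracture facies along a
# borehole: state 0 = sparsely fractured, state 1 = highly fractured.
rng = np.random.default_rng(6)
P = np.array([[0.9, 0.1],    # sparse -> {sparse, highly}
              [0.3, 0.7]])   # highly -> {sparse, highly}

state, chain = 0, [0]
for _ in range(5000):
    state = rng.choice(2, p=P[state])
    chain.append(state)

# Long-run fraction of "highly fractured" cells approaches the stationary
# distribution of P, which is (0.75, 0.25) for this matrix.
print(sum(chain) / len(chain))
```

Conditioning such chains on borehole observations is what yields the stochastic facies realizations used for the flow simulations.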
An investigation of a quantum probability model for the constructive effect of affective evaluation.
White, Lee C; Barqué-Duran, Albert; Pothos, Emmanuel M
2016-01-13
The idea that choices can have a constructive effect has received a great deal of empirical support. The act of choosing appears to influence subsequent preferences for the options available. Recent research has proposed a cognitive model based on quantum probability (QP), which suggests that whether or not a participant provides an affective evaluation for a positively or negatively valenced stimulus can also be constructive and so, for example, influence the affective evaluation of a second oppositely valenced stimulus. However, there are some outstanding methodological questions in relation to this previous research. This paper reports the results of three experiments designed to resolve these questions. Experiment 1, using a binary response format, provides partial support for the interaction predicted by the QP model; and Experiment 2, which controls for the length of time participants have to respond, fully supports the QP model. Finally, Experiment 3 sought to determine whether the key effect can generalize beyond affective judgements about visual stimuli. Using judgements about the trustworthiness of well-known people, the predictions of the QP model were confirmed. Together, these three experiments provide further support for the QP model of the constructive effect of simple evaluations.
PHOTOMETRIC REDSHIFTS AND QUASAR PROBABILITIES FROM A SINGLE, DATA-DRIVEN GENERATIVE MODEL
Bovy, Jo; Hogg, David W.; Weaver, Benjamin A.; Myers, Adam D.; Hennawi, Joseph F.; McMahon, Richard G.; Schiminovich, David; Sheldon, Erin S.; Brinkmann, Jon; Schneider, Donald P.
2012-04-10
We describe a technique for simultaneously classifying and estimating the redshift of quasars. It can separate quasars from stars in arbitrary redshift ranges, estimate full posterior distribution functions for the redshift, and naturally incorporate flux uncertainties, missing data, and multi-wavelength photometry. We build models of quasars in flux-redshift space by applying the extreme deconvolution technique to estimate the underlying density. By integrating this density over redshift, one can obtain quasar flux densities in different redshift ranges. This approach allows for efficient, consistent, and fast classification and photometric redshift estimation. This is achieved by combining the speed obtained by choosing simple analytical forms as the basis of our density model with the flexibility of non-parametric models through the use of many simple components with many parameters. We show that this technique is competitive with the best photometric quasar classification techniques (which are limited to fixed, broad redshift ranges and high signal-to-noise ratio data) and with the best photometric redshift techniques when applied to broadband optical data. We demonstrate that the inclusion of UV and NIR data significantly improves photometric quasar-star separation and essentially resolves all of the redshift degeneracies for quasars inherent to the ugriz filter system, even when included data have a low signal-to-noise ratio. For quasars spectroscopically confirmed by the SDSS, 84% and 97% of the objects with Galaxy Evolution Explorer UV and UKIDSS NIR data have photometric redshifts within 0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about a factor of three improvement over ugriz-only photometric redshifts. Our code to calculate quasar probabilities and redshift probability distributions is publicly available.
McClure, Meredith L; Burdett, Christopher L; Farnsworth, Matthew L; Lutman, Mark W; Theobald, David M; Riggs, Philip D; Grear, Daniel A; Miller, Ryan S
2015-01-01
Wild pigs (Sus scrofa), also known as wild swine, feral pigs, or feral hogs, are one of the most widespread and successful invasive species around the world. Wild pigs have been linked to extensive and costly agricultural damage and present a serious threat to plant and animal communities due to their rooting behavior and omnivorous diet. We modeled the current distribution of wild pigs in the United States to better understand the physiological and ecological factors that may determine their invasive potential and to guide future study and eradication efforts. Using national-scale wild pig occurrence data reported between 1982 and 2012 by wildlife management professionals, we estimated the probability of wild pig occurrence across the United States using a logistic discrimination function and environmental covariates hypothesized to influence the distribution of the species. Our results suggest the distribution of wild pigs in the U.S. was most strongly limited by cold temperatures and availability of water, and that they were most likely to occur where potential home ranges had higher habitat heterogeneity, providing access to multiple key resources including water, forage, and cover. High probability of occurrence was also associated with frequent high temperatures, up to a high threshold. However, this pattern is driven by pigs' historic distribution in warm climates of the southern U.S. Further study of pigs' ability to persist in cold northern climates is needed to better understand whether low temperatures actually limit their distribution. Our model highlights areas at risk of invasion as those with habitat conditions similar to those found in pigs' current range that are also near current populations. This study provides a macro-scale approach to generalist species distribution modeling that is applicable to other generalist and invasive species.
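The core of the occurrence model above is a logistic function of environmental covariates. The sketch below uses hypothetical coefficients (not the paper's fitted values) for two illustrative covariates, minimum winter temperature and a water-availability index.

```python
import math

# Logistic occurrence-probability sketch with assumed coefficients.
b0, b_temp, b_water = -1.0, 0.8, 1.2   # hypothetical intercept and slopes

def occurrence_probability(temp_c: float, water_index: float) -> float:
    z = b0 + b_temp * temp_c + b_water * water_index
    return 1.0 / (1.0 + math.exp(-z))

print(occurrence_probability(5.0, 0.9))    # warm, wet: high probability
print(occurrence_probability(-10.0, 0.2))  # cold, dry: low probability
```

A fitted version of this function, evaluated over gridded covariates, produces the invasion-risk maps the study describes.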
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Lin, Ranhao; O'Shea, Heather
2007-11-01
If contamination is observed in an aquifer, a backward probability model can be used to obtain information about the former position of the observed contamination. A backward location probability density function (PDF) describes the possible former positions of the observed contaminant particle at a specified time in the past. If the source release time is known or can be estimated, the backward location PDF can be used to identify possible source locations. For sorbing solutes, the location PDF depends on the phase (aqueous or sorbed) of the observed contamination and on the phase of the contamination at the source. These PDFs are related to adjoint states of aqueous and sorbed phase concentrations. The adjoint states, however, do not take into account the measured concentrations. Neupauer and Lin (2006) presented an approach for conditioning backward location PDFs on measured concentrations of non-reactive solutes. In this paper, we present a related conditioning method to identify the location of an instantaneous point source of a solute that exhibits first-order decay and linear equilibrium or non-equilibrium sorption. We derive the conditioning equations and present an illustrative example to demonstrate important features of the technique. Finally, we illustrate the use of the conditioned location PDF to identify possible sources of contamination by using data from a trichloroethylene plume at the Massachusetts Military Reservation.
Fixation probability and the crossing time in the Wright-Fisher multiple alleles model
NASA Astrophysics Data System (ADS)
Gill, Wonpyong
2009-08-01
The fixation probability and crossing time in the Wright-Fisher multiple alleles model, which describes a finite haploid population, were calculated by switching on an asymmetric sharply-peaked landscape with a positive asymmetric parameter, r, such that the reversal allele of the optimal allele has higher fitness than the optimal allele. The fixation probability, which was evaluated as the ratio of the first arrival time at the reversal allele to the origination time, was double the selective advantage of the reversal allele compared with the optimal allele in the strong selection region, where the fitness parameter, k, is much larger than the critical fitness parameter, kc. The crossing time in a finite population for r>0 and k
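Fixation probabilities of this kind can be estimated empirically with a small Wright-Fisher simulation. The sketch below is a generic two-allele haploid model, not the multiple-alleles sharply peaked landscape of the paper; population size, selection coefficient, and trial count are assumed illustrative values. For weak selection the classic expectation is P_fix ≈ 2s for a single beneficial mutant.

```python
import numpy as np

# Two-allele haploid Wright-Fisher simulation: each generation, allele counts
# are binomially resampled with selection-weighted frequencies.
rng = np.random.default_rng(3)
N, s, trials = 200, 0.05, 2000   # assumed population size, advantage, trials

fixed = 0
for _ in range(trials):
    n = 1  # one initial copy of the beneficial allele
    while 0 < n < N:
        p = n * (1 + s) / (n * (1 + s) + (N - n))  # selection-weighted frequency
        n = rng.binomial(N, p)
    fixed += n == N
print(fixed / trials)  # empirical fixation probability, near 2s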
LaBudde, Robert A; Harnly, James M
2012-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. This report describes the development and validation studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single-collaborator and multicollaborative study examples are given.
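The POI statistic itself is just a binomial proportion over replicates. The sketch below computes it with a normal-approximation confidence interval; the counts are made up, and real validation studies may use exact binomial methods instead.

```python
import math

# Probability of identification (POI): proportion of replicates returning
# "Identified", with a normal-approximation 95% confidence interval.
identified, replicates = 27, 30   # assumed example counts
poi = identified / replicates
se = math.sqrt(poi * (1 - poi) / replicates)
lo, hi = poi - 1.96 * se, poi + 1.96 * se
print(round(poi, 3), round(lo, 3), round(hi, 3))
```

Note that near POI = 1 the normal-approximation interval can exceed [0, 1], one reason exact or score intervals are often preferred for qualitative method validation.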
Modeling Normal Shock Velocity Curvature Relation for Heterogeneous Explosives
NASA Astrophysics Data System (ADS)
Yoo, Sunhee; Crochet, Michael; Pemberton, Steve
2015-06-01
The normal shock velocity and curvature, Dn(κ), relation on a detonation shock surface has been an important functional quantity to measure to understand the shock strength exerted against the material interface between a main explosive charge and the case of an explosive munition. The Dn(κ) relation is considered an intrinsic property of an explosive, and can be experimentally deduced by rate stick tests at various charge diameters. However, experimental measurements of the Dn(κ) relation for heterogeneous explosives such as PBXN-111 are challenging due to the non-smoothness and asymmetry usually observed in the experimental streak records of explosion fronts. Out of the many possibilities, the asymmetric character may be attributed to the heterogeneity of the explosives, a hypothesis which begs two questions: (1) is there any simple hydrodynamic model that can explain such an asymmetric shock evolution, and (2) what statistics can be derived for the asymmetry using simulations with defined structural heterogeneity in the unreacted explosive? Saenz, Taylor, and Stewart studied constitutive models for derivation of the Dn(κ) relation for porous 'homogeneous' explosives and carried out simulations in a spherical coordinate frame. In this paper, we extend their model to account for 'heterogeneity' and present shock evolutions in heterogeneous explosives using 2-D hydrodynamic simulations with some statistical examination. (96TW-2015-0004)
A radiation damage repair model for normal tissues
NASA Astrophysics Data System (ADS)
Partridge, Mike
2008-07-01
A cellular Monte Carlo model describing radiation damage and repair in normal epithelial tissues is presented. The deliberately simplified model includes cell cycling, cell motility and radiation damage response (cell cycle arrest and cell death) only. Results demonstrate that the model produces a stable equilibrium system for mean cell cycle times in the range 24-96 h. Simulated irradiation of these stable equilibrium systems produced a range of responses that are shown to be consistent with experimental and clinical observation, including (i) re-epithelialization of radiation-induced lesions by a mixture of cell migration into the wound and repopulation at the periphery; (ii) observed radiosensitivity that is quantitatively consistent with both rate of induction of irreparable DNA lesions and, independently, with the observed acute oral and pharyngeal mucosal reactions to radiotherapy; (iii) an observed time between irradiation and maximum toxicity that is consistent with experimental data for skin; (iv) quantitatively accurate predictions of low-dose hyper-radiosensitivity; (v) Gompertzian repopulation for very small lesions (~2000 cells) and (vi) a linear rate of re-epithelialization of 5-10 µm h⁻¹ for large lesions (>15 000 cells).
Kukla, G.; Gavin, J.
1994-05-01
This report was prepared at the Lamont-Doherty Geological Observatory of Columbia University at Palisades, New York, under subcontract to Pacific Northwest Laboratory (PNL). It is part of a larger project of global climate studies which supports site characterization work required for the selection of a potential high-level nuclear waste repository, and forms part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work under the PASS Program is currently focusing on the proposed site at Yucca Mountain, Nevada, and is under the overall direction of the Yucca Mountain Project Office, US Department of Energy, Las Vegas, Nevada. The final results of the PNL project will provide input to global atmospheric models designed to test specific climate scenarios, which will be used in the site-specific modeling work of others. The primary purpose of the data bases compiled and of the astronomic predictive models is to aid in the estimation of the probabilities of future climate states. The results will be used by two other teams working on the global climate study under contract to PNL; they are located at the University of Maine in Orono, Maine, and the Applied Research Corporation in College Station, Texas. This report presents the results of the third year's work on the global climate change models and the data bases describing past climates.
Fisicaro, E; Braibanti, A; Sambasiva Rao, R; Compari, C; Ghiozzi, A; Nageswara Rao, G
1998-04-01
An algorithm is proposed for the estimation of binding parameters for the interaction of biologically important macromolecules with smaller ones from electrometric titration data. The mathematical model is based on the representation of equilibria in terms of probability concepts of statistical molecular thermodynamics. The refinement of equilibrium concentrations of the components and estimation of binding parameters (log site constant and cooperativity factor) is performed using singular value decomposition, a chemometric technique which overcomes the general obstacles due to near singularity. The present software is validated with a number of biochemical systems of varying number of sites and cooperativity factors. The effect of random errors of realistic magnitude in experimental data is studied using the simulated primary data for some typical systems. The safe area within which approximate binding parameters ensure convergence has been reported for the non-self-starting optimization algorithms.
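The role of singular value decomposition in such a refinement can be sketched with a synthetic near-singular least-squares step: truncating the tiny singular values yields a stable solution where direct normal-equation inversion would amplify noise. The design matrix and parameters below are synthetic, for illustration only.

```python
import numpy as np

# SVD-based solution of a near-singular least-squares problem (synthetic data):
# two nearly collinear columns mimic the near-singularity the abstract mentions.
rng = np.random.default_rng(4)
A = rng.normal(size=(50, 3))
A[:, 2] = A[:, 0] + 1e-8 * rng.normal(size=50)    # nearly collinear columns
b = A @ np.array([1.0, 2.0, 0.0]) + 0.01 * rng.normal(size=50)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = s.max() * 1e-6
s_inv = np.where(s > tol, 1.0 / s, 0.0)           # truncate tiny singular values
x = Vt.T @ (s_inv * (U.T @ b))                    # stable pseudoinverse solution
print(x)
```

Truncation returns the minimum-norm solution, so the coefficient shared by the two collinear columns is split between them while the well-determined coefficient is recovered accurately.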
A generative probability model of joint label fusion for multi-atlas based brain segmentation.
Wu, Guorong; Wang, Qian; Zhang, Daoqiang; Nie, Feiping; Huang, Heng; Shen, Dinggang
2014-08-01
Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate possible misalignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability to prevent ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on simple patch similarity, and thus do not necessarily provide an optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, with the goal of labeling each point in the target image by the most representative atlas patches that also show the greatest unanimity in labeling the underlying point correctly. Specifically, a sparsity constraint is imposed upon the label fusion weights in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risk of including misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies are further recursively updated based on the latest labeling results to correct possible labeling errors, which falls into the Expectation-Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved in comparison to conventional patch-based labeling methods.
NASA Astrophysics Data System (ADS)
Zhao, Tongtiegang; Schepen, Andrew; Wang, Q. J.
2016-10-01
The Bayesian joint probability (BJP) modelling approach is used operationally to produce seasonal (three-month-total) ensemble streamflow forecasts in Australia. However, water resource managers are calling for more informative sub-seasonal forecasts. Taking advantage of BJP's capability of handling multiple predictands, ensemble forecasting of sub-seasonal to seasonal streamflows is investigated for 23 catchments around Australia. Using antecedent streamflow and climate indices as predictors, monthly forecasts are developed for the three-month period ahead. Forecast reliability and skill are evaluated for the period 1982-2011 using a rigorous leave-five-years-out cross validation strategy. BJP ensemble forecasts of monthly streamflow volumes are generally reliable in ensemble spread. Forecast skill, relative to climatology, is positive in 74% of cases in the first month, decreasing to 57% and 46% respectively for streamflow forecasts for the final two months of the season. As forecast skill diminishes with increasing lead time, the monthly forecasts approach climatology. Seasonal forecasts accumulated from monthly forecasts are found to be similarly skilful to forecasts from BJP models based on seasonal totals directly. The BJP modelling approach is demonstrated to be a viable option for producing ensemble time-series sub-seasonal to seasonal streamflow forecasts.
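The leave-five-years-out cross-validation strategy described above can be sketched as follows. This is a minimal illustration: the 1982-2011 evaluation period is from the abstract, but the exact block boundaries used in the study are an assumption here:

```python
def leave_five_years_out_folds(first_year=1982, last_year=2011, block=5):
    """Partition consecutive years into 5-year blocks; each fold holds one
    block out for validation and trains on all remaining years."""
    years = list(range(first_year, last_year + 1))
    folds = []
    for start in range(0, len(years), block):
        test_years = years[start:start + block]
        train_years = [y for y in years if y not in test_years]
        folds.append((train_years, test_years))
    return folds

folds = leave_five_years_out_folds()
```

Holding out five consecutive years at a time limits optimistic skill estimates that would arise from the strong year-to-year persistence of streamflow.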
Towards smart prosthetic hand: Adaptive probability based skeletal muscle fatigue model.
Kumar, Parmod; Sebastian, Anish; Potluri, Chandrasekhar; Urfer, Alex; Naidu, D; Schoen, Marco P
2010-01-01
Skeletal muscle force can be estimated using surface electromyographic (sEMG) signals. Usually, the surface location for the sensors is near the respective muscle motor unit points. Skeletal muscles generate a spatial EMG signal, which causes cross talk between different sEMG signal sensors. In this study, an array of three sEMG sensors is used to capture the information of muscle dynamics in terms of sEMG signals. The recorded sEMG signals are filtered using nonlinear Half-Gaussian Bayesian filters with optimized parameters, and the muscle force signal is filtered using a Chebyshev type-II filter. The filter optimization is accomplished using Genetic Algorithms. Three discrete-time state-space muscle fatigue models are obtained using system identification and modal transformation for the three sets of sensors for a single motor unit. The outputs of these three muscle fatigue models are fused using a probabilistic Kullback Information Criterion (KIC) for model selection. The final fused output is estimated with an adaptive probability of KIC, which provides improved force estimates.
Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2
MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.
1999-11-01
This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model — exponential or Weibull — is fit.
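As a hedged sketch of the tail-fitting step (not the FITS routine itself), a shifted exponential model above a user-supplied threshold χ_low can be fit by maximum likelihood in a few lines; the shifted-Weibull case adds one shape parameter:

```python
import random

def fit_shifted_exponential(data, x_low):
    """MLE for an exponential shifted to x_low, fit only to exceedances:
    f(x) = (1/scale) * exp(-(x - x_low)/scale) for x >= x_low.
    The MLE of the scale is simply the mean exceedance."""
    tail = [x for x in data if x >= x_low]
    scale = sum(x - x_low for x in tail) / len(tail)
    return scale, len(tail)

# Synthetic "response" data: exponential with true scale 2.0.
random.seed(7)
sample = [random.expovariate(0.5) for _ in range(20000)]
scale_hat, n_tail = fit_shifted_exponential(sample, x_low=1.0)
```

By the memoryless property, the exceedances over the threshold are again exponential with the same scale, so the fit recovers the true value.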
Probability-changing cluster algorithm for two-dimensional XY and clock models
NASA Astrophysics Data System (ADS)
Tomita, Yusuke; Okabe, Yutaka
2002-05-01
We extend the newly proposed probability-changing cluster (PCC) Monte Carlo algorithm to the study of systems with a vector order parameter. Wolff's idea of the embedded cluster formalism is used for assigning clusters. The Kosterlitz-Thouless (KT) transitions for the two-dimensional (2D) XY and q-state clock models are studied by using the PCC algorithm. Combined with the finite-size scaling analysis based on the KT form of the correlation length, ξ ~ exp(c/√(T/T_KT − 1)), we determine the KT transition temperature and the decay exponent η as T_KT = 0.8933(6) and η = 0.243(4) for the 2D XY model. We investigate two transitions of the KT type for the 2D q-state clock models with q = 6, 8, 12 and confirm the prediction of η = 4/q² at T_1, the low-temperature critical point between the ordered and XY-like phases, systematically.
de Mul, Frits F M; Blaauw, Judith; Aarnoudse, Jan G; Smit, Andries J; Rakhorst, Gerhard
2007-01-01
We present a physical model to describe iontophoresis time recordings. The model is a combination of monodimensional material diffusion and decay, probably due to transport by blood flow. It has four adjustable parameters, the diffusion coefficient, the decay constant, the height of the response, and the shot saturation constant, a parameter representing the relative importance of subsequent shots (in case of saturation). We test the model with measurements of blood perfusion in the capillary bed of the fingers of women who recently had preeclampsia and in women with a history of normal pregnancy. From the fits to the measurements, we conclude that the model provides a useful physical description of the iontophoresis process.
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-11-01
The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with linear dimensions L = 4, 6, 8, 10. The order parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting the probability function obtained numerically at the finite-size critical point.
Stacey, W.M.
1992-12-01
A new computational model for neutral particle transport in the outer regions of a diverted tokamak plasma chamber is presented. The model is based on the calculation of transmission and escape probabilities using first-flight integral transport theory and the balancing of fluxes across the surfaces bounding the various regions. The geometrical complexity of the problem is included in precomputed probabilities which depend only on the mean free path of the region.
Kausar, A S M Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan
2014-01-01
Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving the goal of optimum coverage, which is a key part of designing wireless networks. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included to make the ray tracing technique more accurate. Here, the rough surface scattering is represented by microfacets, for which it becomes possible to compute the scattering field in all possible directions. New optimization techniques, like dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and for making the ray tracing technique more efficient. In conjunction with the ray tracing technique, a probability-based coverage optimization algorithm is combined with it to make a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects using the DQS technique and by decreasing the ray-object intersection time using the COF technique. The coverage optimization algorithm, in turn, is based on probability theory and finds the minimum number of transmitters and their corresponding positions in order to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve upon those of existing algorithms. For the verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by the experimental results.
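The coverage optimization goal — a minimum number of transmitters and positions achieving full indoor coverage — has the flavor of a set-cover problem. The following greedy sketch is only an illustration of that flavor, not the paper's probability-based algorithm; the candidate positions and coverage sets are toy assumptions:

```python
def greedy_transmitter_placement(coverage):
    """coverage: dict mapping candidate transmitter position -> set of grid
    points it covers. Greedily pick the position that covers the most
    still-uncovered points until everything is covered."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda p: len(coverage[p] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining points cannot be covered by any candidate
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Toy floor plan: candidate positions A-D and the grid points each covers.
candidates = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {5, 6},
    "D": {1, 6},
}
plan = greedy_transmitter_placement(candidates)
```

On this toy input the greedy rule picks A (four new points) and then C, covering the whole grid with two transmitters.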
Carruthers, Robert L; Chitnis, Tanuja; Healy, Brian C
2014-05-01
JCV serologic status is used to determine PML risk in natalizumab-treated patients. Given two cases of natalizumab-associated PML in JCV sero-negative patients and two publications that question the false negative rate of the JCV serologic test, clinicians may question whether our understanding of PML risk is adequate. Given that there is no gold standard for diagnosing previous JCV exposure, the test characteristics of the JCV serologic test are unknowable. We propose a model of PML risk in JCV sero-negative natalizumab patients. Using the numbers of JCV sero-positive and -negative patients from a study of PML risk by JCV serologic status (sero-positive: 13,950 and sero-negative: 11,414), we apply a range of sensitivities and specificities in order to calculate the number of JCV-exposed but JCV sero-negative patients (false negatives). We then apply a range of rates of developing PML in sero-negative patients to calculate the expected number of PML cases. By using the binomial function, we calculate the probability of a given number of JCV sero-negative PML cases. With this model, one has a means to establish a threshold number of JCV sero-negative natalizumab-associated PML cases at which it is improbable that our understanding of PML risk in JCV sero-negative patients is adequate.
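The binomial calculation can be sketched directly. The cohort size below is taken from the abstract; the per-patient PML rate is an assumed illustrative value, not a figure from the study:

```python
from math import comb

def prob_at_least_k_cases(n_patients, rate, k):
    """P(X >= k) for X ~ Binomial(n_patients, rate): the probability of
    observing at least k PML cases in the sero-negative cohort."""
    p_less = sum(comb(n_patients, i) * rate ** i * (1 - rate) ** (n_patients - i)
                 for i in range(k))
    return 1.0 - p_less

n_seroneg = 11414            # sero-negative cohort size from the abstract
assumed_rate = 1.0 / 100000  # hypothetical per-patient PML risk (assumption)
p_two_or_more = prob_at_least_k_cases(n_seroneg, assumed_rate, 2)
```

If the observed number of sero-negative PML cases has a very small probability under the assumed rate, the assumed risk model is called into question — the thresholding logic the abstract describes.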
A Mechanistic Beta-Binomial Probability Model for mRNA Sequencing Data.
Smith, Gregory R; Birtwistle, Marc R
2016-01-01
A main application for mRNA sequencing (mRNAseq) is determining lists of differentially-expressed genes (DEGs) between two or more conditions. Several software packages exist to produce DEGs from mRNAseq data, but they typically yield different DEGs, sometimes markedly so. The underlying probability model used to describe mRNAseq data is central to deriving DEGs, and not surprisingly most packages use different models and assumptions to analyze mRNAseq data. Here, we propose a mechanistic justification to model mRNAseq as a binomial process, with data from technical replicates given by a binomial distribution, and data from biological replicates well-described by a beta-binomial distribution. We demonstrate good agreement of this model with two large datasets. We show that an emergent feature of the beta-binomial distribution, given parameter regimes typical for mRNAseq experiments, is the well-known quadratic polynomial scaling of variance with the mean. The so-called dispersion parameter controls this scaling, and our analysis suggests that the dispersion parameter is a continually decreasing function of the mean, as opposed to current approaches that impose an asymptotic value on the dispersion parameter at moderate mean read counts. We show how this leads current approaches to overestimate variance for moderately to highly expressed genes, which inflates false negative rates. Describing mRNAseq data with a beta-binomial distribution thus may be preferred, since its parameters are relatable to the mechanistic underpinnings of the technique and may improve the consistency of DEG analysis across software packages, particularly for moderately to highly expressed genes.
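The quadratic mean-variance scaling discussed above follows from the beta-binomial moments and can be checked numerically. The shape parameters below are illustrative, not values fitted in the paper:

```python
def beta_binomial_moments(n, a, b):
    """Mean and variance of a beta-binomial distribution: n trials with a
    success probability drawn from Beta(a, b).
    var = n*p*(1-p) * (a+b+n)/(a+b+1), where p = a/(a+b)."""
    mean = n * a / (a + b)
    var = mean * (b / (a + b)) * (a + b + n) / (a + b + 1)
    return mean, var

# With fixed shape parameters and growing read depth n, the variance grows
# roughly quadratically in the mean (var ~ mean + phi * mean**2, phi > 0),
# i.e. increasing n 10x increases the variance nearly 100x.
a, b = 2.0, 1000.0  # illustrative shape parameters
m1, v1 = beta_binomial_moments(10_000, a, b)
m2, v2 = beta_binomial_moments(100_000, a, b)
```

The overdispersion term (a+b+n)/(a+b+1) is what separates the beta-binomial from the plain binomial: for biological replicates it dominates at high counts, producing the quadratic scaling.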
Model assisted probability of detection for a guided waves based SHM technique
NASA Astrophysics Data System (ADS)
Memmolo, V.; Ricci, F.; Maio, L.; Boffa, N. D.; Monaco, E.
2016-04-01
Guided wave (GW) Structural Health Monitoring (SHM) makes it possible to assess the health of aerostructures thanks to its great sensitivity to the appearance of delaminations and/or debondings. Due to the several complexities affecting wave propagation in composites, an efficient GW SHM system requires effective quantification associated with a rigorous statistical evaluation procedure. The Probability of Detection (POD) approach is a commonly accepted measurement method to quantify NDI results, and it can be effectively extended to an SHM context. However, it requires a very complex setup and many coupons. When a rigorous correlation with measurements is adopted, Model Assisted POD (MAPOD) is an efficient alternative to classic methods. This paper is concerned with the identification of small emerging delaminations in composite structural components. An ultrasonic GW tomography technique focused on impact damage detection in composite plate-like structures, recently developed by the authors, is investigated, providing the basis for a more complex MAPOD analysis. Experimental tests carried out on a typical wing composite structure demonstrated the effectiveness of the modeling approach in detecting damage with the tomographic algorithm. Environmental disturbances, which affect signal waveforms and consequently damage detection, are considered by simulating mathematical noise in the modeling stage. A statistical method is used for an effective decision-making procedure. A Damage Index approach is implemented as the metric to interpret the signals collected from a distributed sensor network, and a subsequent graphic interpolation is carried out to reconstruct the damage appearance. Model validation and first reliability assessment results are provided, in view of system performance quantification and optimization.
Modeling Longitudinal Data Containing Non-Normal Within Subject Errors
NASA Technical Reports Server (NTRS)
Feiveson, Alan; Glenn, Nancy L.
2013-01-01
The mission of the National Aeronautics and Space Administration's (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and result from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes and possibly treatments are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) of Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, then presents statistical evidence that verifies which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including the sciences, technology, engineering, and mathematics fields.
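As a minimal illustration of why quantile-based estimators are attractive when within-subject errors are non-normal (a sketch only; the lqmm and mixed-model analyses in the study are far richer), compare the mean and the median as location estimates under right-skewed contamination:

```python
import random
from statistics import mean, median

random.seed(42)
true_level = 10.0
# Small longitudinal-style sample (15 subjects, in the 10-20 range above)
# with non-normal errors: Gaussian noise plus a heavy right tail.
errors = [random.gauss(0, 1) for _ in range(12)] + [15.0, 20.0, 25.0]
sample = [true_level + e for e in errors]

mean_est = mean(sample)      # pulled upward by the skewed tail
median_est = median(sample)  # quantile-based estimate, robust to the tail
```

With so few subjects, a handful of extreme observations can badly bias a mean-based fit, while the median (the 0.5 quantile) stays near the true level — the motivation for quantile regression and lqmm in this setting.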
A generic probability based model to derive regional patterns of crops in time and space
NASA Astrophysics Data System (ADS)
Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan
2015-04-01
Croplands are not only key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influence soil erosion, and substantially contribute to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to the site conditions, economic boundary conditions, and the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and a given year depends on a) the suitability of the land for the cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g. a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated on the basis of the given statistics reported by the joint EU/CAPRI database. The assessment is given at NUTS2 level using per cent bias as a measure, with a threshold of 15% as minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error-prone as crops that occupy only minor parts of the unit. However, still roughly 40% show an absolute per cent bias above the 15% threshold. This
Multivariate Models for Normal and Binary Responses in Intervention Studies
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen
2016-01-01
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
Influencing the Probability for Graduation at Four-Year Institutions: A Multi-Model Analysis
ERIC Educational Resources Information Center
Cragg, Kristina M.
2009-01-01
The purpose of this study is to identify student and institutional characteristics that influence the probability for graduation. The study delves further into the probability for graduation by examining how far the student deviates from the institutional mean with respect to academics and affordability; this concept is referred to as the "match."…
An EEG-Based Fuzzy Probability Model for Early Diagnosis of Alzheimer's Disease.
Chiang, Hsiu-Sen; Pao, Shun-Chi
2016-05-01
Alzheimer's disease is a degenerative brain disease that results in cardinal memory deterioration and significant cognitive impairments. The early treatment of Alzheimer's disease can significantly reduce deterioration. Early diagnosis is difficult, and early symptoms are frequently overlooked. While much of the literature focuses on disease detection, the use of electroencephalography (EEG) in Alzheimer's diagnosis has received relatively little attention. This study combines the fuzzy and associative Petri net methodologies to develop a model for the effective and objective detection of Alzheimer's disease. Differences in EEG patterns between normal subjects and Alzheimer patients are used to establish prediction criteria for Alzheimer's disease, potentially providing physicians with a reference for early diagnosis, allowing for early action to delay the disease progression.
NASA Astrophysics Data System (ADS)
Peng, Guanghan; Liu, Changqing; Tuo, Manxian
2015-10-01
In this paper, a new lattice model with a traffic interruption probability term is proposed for a two-lane traffic system. The linear stability condition and the mKdV equation are derived from linear stability analysis and nonlinear analysis, respectively, by introducing the traffic interruption probability of the optimal current for a two-lane traffic freeway. Numerical simulation shows that a traffic interruption probability with a high reaction coefficient can efficiently improve the stability of two-lane traffic flow when traffic interruption occurs with lane changing.
A Tool for Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.; Guzzetti, Fausto
2014-05-01
Triggers such as earthquakes or heavy rainfall can result in hundreds to thousands of landslides occurring across a region within a short space of time. These landslides can in turn result in blockages across the road network, impacting how people move about a region. Here, we show the development and application of a semi-stochastic model to simulate how landslides intersect with road networks during a triggered landslide event. This was performed by creating 'synthetic' triggered landslide inventory maps and overlaying these with a road network map to identify where road blockages occur. Our landslide-road model has been applied to two regions: (i) the Collazzone basin (79 km2) in Central Italy where 422 landslides were triggered by rapid snowmelt in January 1997, (ii) the Oat Mountain quadrangle (155 km2) in California, USA, where 1,350 landslides were triggered by the Northridge Earthquake (M = 6.7) in January 1994. For both regions, detailed landslide inventory maps for the triggered events were available, in addition to maps of landslide susceptibility and road networks of primary, secondary and tertiary roads. To create 'synthetic' landslide inventory maps, landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL. The number of landslide areas selected was based on the observed density of landslides (number of landslides km-2) in the triggered event inventories. Landslide shapes were approximated as ellipses, where the ratio of the major and minor axes varies with AL. Landslides were then dropped over the region semi-stochastically, conditioned by a landslide susceptibility map, resulting in a synthetic landslide inventory map. The originally available landslide susceptibility maps did not take into account susceptibility changes in the immediate vicinity of roads, therefore
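The area-sampling step above can be sketched with a simplified two-parameter inverse-gamma draw. The study uses a three-parameter form; the shape value below follows from the quoted power-law decay of about -2.4, while the scale is an arbitrary illustrative value:

```python
import random

def sample_landslide_area(shape=1.4, scale=1.0e-3, rng=random):
    """Draw one landslide area (km^2) from an inverse-gamma distribution:
    if G ~ Gamma(shape, 1), then scale / G is inverse-gamma distributed.
    The density decays as area**-(shape + 1) for large areas (power-law
    exponent about -2.4 for shape = 1.4) and rolls over exponentially
    for small areas."""
    return scale / rng.gammavariate(shape, 1.0)

random.seed(0)
areas = [sample_landslide_area() for _ in range(5000)]
```

In the full model each sampled area is then given an elliptical shape and dropped over the region conditioned on the susceptibility map.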
Gomberg, J.; Felzer, K.
2008-01-01
We have used observations from Felzer and Brodsky (2006) of the variation of linear aftershock densities (i.e., aftershocks per unit length) with the magnitude of and distance from the main shock fault to derive constraints on how the probability of a main shock triggering a single aftershock at a point, P(r, D), varies as a function of distance, r, and main shock rupture dimension, D. We find that P(r, D) becomes independent of D as the triggering fault is approached. When r ≫ D, P(r, D) scales as D^m with m ≈ 2 and decays with distance approximately as r^−n with n = 2, with a possible change to r^−(n−1) at r > h, where h is the closest distance between the fault and the boundaries of the seismogenic zone. These constraints may be used to test hypotheses about the types of deformations and mechanisms that trigger aftershocks. We illustrate this using dynamic deformations (i.e., radiated seismic waves) and a posited proportionality with P(r, D). Deformation characteristics examined include peak displacements, peak accelerations and velocities (proportional to strain rates and strains, respectively), and two measures that account for cumulative deformations. Our model indicates that either peak strains alone or strain rates averaged over the duration of rupture may be responsible for aftershock triggering.
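One simple functional form consistent with these constraints — an illustration of the constraints only, not a form proposed by the authors — is P(r, D) ∝ 1/(1 + (r/D)²), which is independent of D near the fault and scales as D²/r² far from it:

```python
def triggering_probability(r, d, c=1.0):
    """Illustrative point-triggering probability: approximately constant
    near the fault (r << d), falling off as c * d**2 / r**2 for r >> d."""
    return c / (1.0 + (r / d) ** 2)

near_small = triggering_probability(0.01, 1.0)    # near a small rupture
near_large = triggering_probability(0.01, 10.0)   # near a large rupture
far = triggering_probability(100.0, 1.0)          # far from a small rupture
```

Near the fault the two rupture sizes give almost identical probabilities (independence of D), while far away the probability picks up the D² scaling and the r⁻² decay.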
Volkov, M. V.; Ostrovsky, V. N.
2007-02-15
Multistate generalizations of the Landau-Zener model are studied by summing entire perturbation-theory series. A technique for analysis of the series is developed. Analytical expressions for the probabilities of survival on the diabatic potential curves with extreme slope are proved. Degenerate situations are considered in which there are several potential curves with extreme slope. Expressions for some state-to-state transition probabilities are derived in the degenerate cases.
TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis
Krafft, S; Briere, T; Court, L; Martel, M
2015-06-15
Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade ≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full set of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP modeling are warranted.
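The LASSO logistic regression step can be sketched with a small self-contained implementation: proximal gradient descent (ISTA) with soft-thresholding on toy data. The feature roles, penalty, and data below are illustrative assumptions, not the study's 3,888-feature analysis:

```python
import math
import random

def lasso_logistic(xs, ys, lam=0.1, lr=0.1, iters=2000):
    """L1-penalized logistic regression fit by proximal gradient descent:
    a gradient step on the mean log-loss, then soft-thresholding, which
    can shrink uninformative coefficients exactly to zero."""
    n, p = len(xs), len(xs[0])
    w = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for x, y in zip(xs, ys):
            z = sum(wj * xj for wj, xj in zip(w, x))
            err = 1.0 / (1.0 + math.exp(-z)) - y   # sigmoid(z) - y
            for j in range(p):
                grad[j] += err * x[j] / n
        for j in range(p):
            wj = w[j] - lr * grad[j]
            w[j] = math.copysign(max(abs(wj) - lr * lam, 0.0), wj)
    return w

# Toy data: the outcome depends on feature 0 (a stand-in for a predictive
# dose feature); feature 1 is pure noise and should be shrunk toward zero.
random.seed(3)
data, labels = [], []
for _ in range(200):
    x0, x1 = random.gauss(0, 1), random.gauss(0, 1)
    data.append([x0, x1])
    labels.append(1 if x0 + random.gauss(0, 0.5) > 0 else 0)
w = lasso_logistic(data, labels)
```

The L1 penalty is what lets this framework screen thousands of candidate features: coefficients of features that do not reduce the loss by more than the penalty are driven to (or near) zero.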
John, Mathew; Gopinath, Deepa
2013-06-01
Gestational diabetes mellitus diagnosed by the classical oral glucose tolerance test can result in fetal complications like macrosomia and polyhydramnios. Guidelines exist on the management of patients diagnosed by an abnormal oral glucose tolerance test, with diet modification followed by insulin. Even patients with an abnormal oral glucose tolerance test maintaining apparently normal blood sugars with diet are advised insulin if there is accelerated fetal growth. But patients with a normal oral glucose tolerance test can present with macrosomia and polyhydramnios. These patients are labelled as not having gestational diabetes mellitus and are followed up with a repeat oral glucose tolerance test. We hypothesise that these patients may have an altered placental threshold to glucose or an abnormal sensitivity of fetal tissues to glucose. Meal-related glucose monitoring in these patients can identify minor abnormalities in glucose disturbance, which should be treated to targets similar to physiological levels of glucose in non-pregnant adults.
A picosecond accuracy relativistic VLBI model via Fermi normal coordinates
NASA Technical Reports Server (NTRS)
Shahid-Saless, Bahman; Hellings, Ronald W.; Ashby, Neil
1991-01-01
Fermi normal coordinates are used to construct transformations relating solar system barycentric coordinates to local inertial geocentric coordinates. Relativistic corrections to terrestrial VLBI measurements are calculated, and this formalism is developed to include corrections needed for picosecond accuracy. A calculation of photon time delay which includes effects arising from the motion of gravitational sources is given.
Benndorf, Klaus; Kusch, Jana; Schulz, Eckhard
2012-01-01
Hyperpolarization-activated cyclic nucleotide-modulated (HCN) channels are voltage-gated tetrameric cation channels that generate electrical rhythmicity in neurons and cardiomyocytes. Activation can be enhanced by the binding of adenosine-3′,5′-cyclic monophosphate (cAMP) to an intracellular cyclic nucleotide binding domain. Based on previously determined rate constants for a complex Markovian model describing the gating of homotetrameric HCN2 channels, we analyzed probability fluxes within this model, including unidirectional probability fluxes and the probability flux along transition paths. The time-dependent probability fluxes quantify the contributions of all 13 transitions of the model to channel activation. The binding of the first, third and fourth ligand evoked robust channel opening whereas the binding of the second ligand obstructed channel opening similar to the empty channel. Analysis of the net probability fluxes in terms of the transition path theory revealed pronounced hysteresis for channel activation and deactivation. These results provide quantitative insight into the complex interaction of the four structurally equal subunits, leading to non-equality in their function. PMID:23093920
ERIC Educational Resources Information Center
Edwards, William F.; Shiflett, Ray C.; Shultz, Harris
2008-01-01
The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…
Bivariate Normal Wind Statistics model: User’s Manual.
1980-09-01
[OCR-garbled FORTRAN listing from the USAFETAC/DND wind statistics program. Recoverable content: COMMON blocks hold the five basic parameters (mean X, standard deviation of X, mean Y, standard deviation of Y, and the correlation coefficient), which the user is prompted to input; a routine rotates the X-Y axes through a given angle; and subroutine RSPGDR gives the conditional probability of a specified range of wind speeds when the wind direction is given.]
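A quantity like the one computed by subroutine RSPGDR can be estimated by Monte Carlo. The sketch below draws bivariate-normal wind components and estimates the conditional probability that the speed falls in a range given that the direction falls in a sector; the function name, argument layout, and sample size are assumptions for illustration, not the manual's algorithm.

```python
import math
import random

def speed_range_given_direction(mx, sx, my, sy, rho,
                                smin, smax, dmin, dmax, n=200_000):
    """Monte Carlo estimate of P(smin <= speed <= smax | dmin <= dir <= dmax)
    for bivariate-normal wind components (u, v) with means mx, my, standard
    deviations sx, sy, and correlation rho. Directions are in radians.
    A generic sketch, not the USAFETAC/DND routine."""
    random.seed(1)
    hits = in_sector = 0
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        u = mx + sx * z1
        v = my + sy * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
        d = math.atan2(v, u) % (2 * math.pi)
        if dmin <= d <= dmax:
            in_sector += 1
            if smin <= math.hypot(u, v) <= smax:
                hits += 1
    return hits / in_sector if in_sector else 0.0

# standard circular case: speed is Rayleigh, so almost surely below 10
p = speed_range_given_direction(0, 1, 0, 1, 0.0, 0.0, 10.0, 0.0, 3.14)
```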
A Collision Probability Model of Portal Vein Tumor Thrombus Formation in Hepatocellular Carcinoma
Xiong, Fei
2015-01-01
Hepatocellular carcinoma is one of the most common malignancies worldwide, with a high risk of portal vein tumor thrombus (PVTT). Some promising results have been achieved for venous metastases of hepatocellular carcinoma; however, the etiology of PVTT is largely unknown, and it is unclear why the incidence of PVTT is not proportional to its distance from the carcinoma. We attempted to address this issue using physical concepts and mathematical tools. Finally, we discuss the relationship between the probability of a collision event and the microenvironment of the PVTT. Our formulae suggest that the collision probability can alter the tumor microenvironment by increasing the number of tumor cells. PMID:26131562
Random forest models for the probable biological condition of streams and rivers in the USA
The National Rivers and Streams Assessment (NRSA) is a probability based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...
ERIC Educational Resources Information Center
Rasanen, Okko
2011-01-01
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this…
Spatial prediction models for the probable biological condition of streams and rivers in the USA
The National Rivers and Streams Assessment (NRSA) is a probability-based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...
Normalization and Implementation of Three Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.
2016-01-01
Unlike Newton's uniform-density spherical shell approximation, the gravitational fields encountered in real spaceflight are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
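One widely used convention for the normalization parameters is the "full" (4-pi) normalization of the ALFs. The sketch below computes that factor; it is an illustrative assumption of one common convention, not the specific parameters the paper derives for the Lear or Gottlieb formulations.

```python
import math

def alf_normalization(n, m):
    """Full (4-pi) normalization factor for the associated Legendre
    function P_{n,m}, as commonly paired with normalized spherical
    harmonic gravity coefficients:
        N_{n,m} = sqrt((2 - delta_{0m}) (2n + 1) (n - m)! / (n + m)!)
    One common convention; other normalizations differ by constants."""
    delta = 1 if m == 0 else 0
    return math.sqrt((2 - delta) * (2 * n + 1)
                     * math.factorial(n - m) / math.factorial(n + m))

# The factor grows with degree for m = 0 and shrinks rapidly for large m,
# which is why normalization is essential for high-degree gravity fields.
n00 = alf_normalization(0, 0)
n10 = alf_normalization(1, 0)
```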
On application of optimal control to SEIR normalized models: Pros and cons.
de Pinho, Maria do Rosario; Nogueira, Filipa Nunes
2017-02-01
In this work we normalize a SEIR model that incorporates exponential natural birth and death, as well as disease-caused death. We use optimal control of vaccination to limit the spread of a generic infectious disease described by a normalized model with L1 cost. We discuss the pros and cons of SEIR normalized models compared with classical models when optimal control with L1 costs is considered. Our discussion highlights the role of the cost. Additionally, we partially validate our numerical solutions for our optimal control problem with normalized models using the Maximum Principle.
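The controlled dynamics can be sketched numerically. The explicit-Euler step below integrates a normalized SEIR model with a constant vaccination rate u; for brevity it omits the birth, natural-death, and disease-caused-death terms that the paper's model includes, and all parameter values are illustrative assumptions.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, u, dt):
    """One explicit-Euler step of a normalized SEIR model with vaccination
    rate u moving susceptibles directly to the removed class. Birth and
    death terms from the paper's model are omitted in this sketch, so the
    state fractions sum to 1 at all times."""
    ds = -beta * s * i - u * s
    de = beta * s * i - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i + u * s
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

# simulate 100 days with and without vaccination (illustrative parameters)
state_vax = state_novax = (0.99, 0.0, 0.01, 0.0)
for _ in range(1000):
    state_vax = seir_step(*state_vax, beta=0.5, sigma=0.2,
                          gamma=0.1, u=0.05, dt=0.1)
    state_novax = seir_step(*state_novax, beta=0.5, sigma=0.2,
                            gamma=0.1, u=0.0, dt=0.1)
```

Because the right-hand sides sum to zero, the normalization (fractions summing to one) is preserved by the integration, which is the main numerical convenience of normalized models.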
Ding, Tian; Wang, Jun; Park, Myoung-Su; Hwang, Cheng-An; Oh, Deog-Hwan
2013-02-01
Bacillus cereus is frequently isolated from a variety of foods, including vegetables, dairy products, meats, and other raw and processed foods. The bacterium is capable of producing an enterotoxin and emetic toxin that can cause severe nausea, vomiting, and diarrhea. The objectives of this study were to assess and model the probability of enterotoxin production of B. cereus in a broth model as affected by the broth pH and storage temperature. A three-strain mixture of B. cereus was inoculated in tryptic soy broth adjusted to pH 5.0, 6.0, 7.2, 8.0, and 8.5, and the samples were stored at 15, 20, 25, 30, and 35°C for 24 h. A total of 25 combinations of pH and temperature, each with 10 samples, were tested. The presence of enterotoxin in broth was assayed using a commercial test kit. The probabilities of positive enterotoxin production in 25 treatments were fitted with a logistic regression to develop a probability model to describe the probability of toxin production as a function of pH and temperature. The resulting model showed that the probabilities of enterotoxin production of B. cereus in broth increased as the temperature increased and/or as the broth pH approached 7.0. The model described the experimental data satisfactorily and identified the boundary of pH and temperature for the production of enterotoxin. The model could provide information for assessing the food poisoning risk associated with enterotoxins of B. cereus and for the selection of product pH and storage temperature for foods to reduce the hazards associated with B. cereus.
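A logistic probability model of the kind described can be sketched as follows. The coefficients below are hypothetical placeholders, not the study's fitted values; the quadratic pH term centered at 7.0 mirrors the reported finding that toxin probability peaks near neutral pH and rises with temperature.

```python
import math

def toxin_probability(pH, temp_C, b0=-6.0, b_temp=0.25, b_ph=-0.8):
    """Hypothetical logistic model of the probability of enterotoxin
    production as a function of broth pH and storage temperature (deg C).
    Coefficients are illustrative, not the paper's fitted values."""
    logit = b0 + b_temp * temp_C + b_ph * (pH - 7.0) ** 2
    return 1.0 / (1.0 + math.exp(-logit))

# probability is high at warm temperature near neutral pH,
# and low at cool temperature and acidic pH
p_high = toxin_probability(7.2, 35)
p_low = toxin_probability(5.0, 15)
```

Setting the predicted probability to a fixed threshold (e.g. 0.5) traces out a boundary in the (pH, temperature) plane, which is how such a model identifies safe storage conditions.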
NASA Technical Reports Server (NTRS)
1979-01-01
The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.
Technology Transfer Automated Retrieval System (TEKTRAN)
Staphylococcus aureus is a foodborne pathogen widespread in the environment and found in various food products. This pathogen can produce enterotoxins that cause illnesses in humans. The objectives of this study were to develop a probability model of S. aureus enterotoxin production as affected by w...
We show that a conditional probability analysis that utilizes a stressor-response model based on a logistic regression provides a useful approach for developing candidate water quality criterai from empirical data. The critical step in this approach is transforming the response ...
Three Dimensional Deformation of Mining Area Detection by InSAR and Probability Integral Model
NASA Astrophysics Data System (ADS)
Fan, H. D.; Gao, X. X.; Cheng, D.; Zhao, W. Y.; Zhao, C. L.
2015-06-01
A new solution algorithm combining D-InSAR and the probability integral method is proposed to generate the three dimensional deformation in a mining area. The details are as follows: according to the geological and mining data, a control point set is established, containing correctly phase-unwrapped points on the subsidence basin edge generated by D-InSAR together with several GPS points; the modulus method is used to calculate the optimum parameters of the probability integral prediction; finally, the three dimensional deformation of the mining work face is generated from these parameters. Using this method, land subsidence with large deformation gradients in the mining area was correctly recovered from example TerraSAR-X images. The results of the example show that this method can generate the correct mining subsidence basin from only a few surface observations, and that it is much better than the results of D-InSAR alone.
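At the core of the probability integral method is a Gaussian-type influence function. The sketch below implements the standard Knothe-type form, in which the subsidence caused by extracting a unit element decays with horizontal distance and integrates to one; this is the textbook form of the method, not the authors' code, and the radius value is illustrative.

```python
import math

def influence(x, r):
    """Influence function of the probability integral method: subsidence at
    horizontal distance x due to extraction of a unit element, with major
    influence radius r. Integrates to 1 over the real line, so extraction
    of an infinite panel produces the maximum possible subsidence."""
    return (1.0 / r) * math.exp(-math.pi * x ** 2 / r ** 2)

# numerical check that the influence function integrates to ~1 (r = 1)
total = sum(influence(i * 0.01, 1.0) for i in range(-500, 501)) * 0.01
```

Superposing this function over the extracted area, scaled by the model parameters (subsidence factor, tangent of major influence angle, etc.), yields the predicted subsidence basin.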
NASA Astrophysics Data System (ADS)
Rowicka, Małgorzata; Otwinowski, Zbyszek
2004-04-01
Using the Maximum Entropy principle, we find the probability distribution of torsion angles in proteins. We estimate the parameters of this distribution numerically by implementing the conjugate gradient method in the Polak-Ribière variant. We investigate practical approximations of the theoretical distribution. We discuss the information content of these approximations and compare them with the standard histogram method. Our data are pairs of main-chain torsion angles for a selected subset of high-resolution, non-homologous protein structures from the Protein Data Bank.
A probability model for evaluating building contamination from an environmental event.
Spicer, R C; Gangloff, H J
2000-09-01
Asbestos dust and bioaerosol sampling data from suspected contaminated zones in buildings allowed development of an environmental data evaluation protocol based on the differences in frequency of detection of a target contaminant between zones of comparison. Under the assumption that the two test zones of comparison are similar, application of population proportion probability calculates the significance of observed differences in contaminant levels. This was used to determine whether levels of asbestos dust contamination detected after a fire were likely the result of smoke-borne contamination, or were caused by pre-existing/background conditions. Bioaerosol sampling from several sites was also used to develop the population proportion probability protocol. In this case, significant differences in indoor air contamination relative to the ambient conditions were identified that were consistent with the visual observations of contamination. Implicit in this type of probability analysis is a definition of "contamination" based on significant differences in contaminant levels relative to a control zone. Detection of a suspect contaminant can be assessed as to possible sources(s) as well as the contribution made by pre-existing (i.e., background) conditions, provided the test and control zones are subjected to the same sampling and analytical methods.
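The population proportion comparison described above is, in essence, a two-proportion test on detection frequencies. The sketch below computes the pooled two-proportion z-statistic for a suspect zone versus a control zone; it is a generic illustration of the approach, not the authors' exact protocol, and the sample counts are made up.

```python
import math

def two_proportion_z(detect_a, n_a, detect_b, n_b):
    """Pooled two-proportion z-statistic comparing the frequency of
    detection of a target contaminant in a test zone (a) versus a
    control zone (b). Large |z| indicates the observed difference in
    detection frequency is unlikely under equal true proportions."""
    p_a, p_b = detect_a / n_a, detect_b / n_b
    p_pool = (detect_a + detect_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1.0 - p_pool) * (1.0 / n_a + 1.0 / n_b))
    return (p_a - p_b) / se

# e.g. contaminant detected in 18/20 test-zone samples vs 4/20 control samples
z = two_proportion_z(18, 20, 4, 20)
```

Comparing z against the normal critical value (1.96 at the two-sided 5% level) gives the significance of the observed difference, which is the operational definition of "contamination" relative to the control zone.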
NASA Astrophysics Data System (ADS)
Rosa, A. N. F.; Wiatr, P.; Cavdar, C.; Carvalho, S. V.; Costa, J. C. W. A.; Wosinska, L.
2015-11-01
In Elastic Optical Networks (EON), spectrum fragmentation refers to the existence of non-aligned, small-sized blocks of free subcarrier slots in the optical spectrum. Several metrics have been proposed in order to quantify the level of spectrum fragmentation. Approximation methods may be used for estimating average blocking probability and some fragmentation measures, but they are so far unable to accurately evaluate the influence of different sizes of connection requests and do not allow in-depth investigation of blocking events and their relation to fragmentation. The analytical study of the effect of fragmentation on request blocking probability is still under-explored. In this work, we introduce new definitions for blocking that differentiate between the reasons for the blocking events. We develop a framework based on Markov modeling to calculate steady-state probabilities for the different blocking events and to analyze fragmentation-related problems in elastic optical links under dynamic traffic conditions. This framework can also be used to evaluate different definitions of fragmentation in terms of their relation to the blocking probability. We investigate how different allocation request sizes contribute to fragmentation and blocking probability. Moreover, we show to what extent blocking events due to an insufficient amount of available resources become inevitable and, by comparison with the number of blocking events due to fragmented spectrum, we draw conclusions on the possible gains achievable by system defragmentation. We also show how efficient spectrum allocation policies really are in reducing the part of fragmentation that leads to actual blocking events. Simulation experiments are carried out showing a good match with our analytical results for blocking probability in a small-scale scenario. Simulated blocking probabilities for the different blocking events are provided for a larger-scale elastic optical link.
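The classical single-class baseline for link blocking, which a multi-class fragmentation-aware Markov analysis like the one above refines, is the Erlang-B formula. The sketch below evaluates it via the standard numerically stable recursion; the parameter values are illustrative.

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability for `servers` circuits and offered
    load in Erlangs, via the recursion
        B(0) = 1,  B(m) = a*B(m-1) / (m + a*B(m-1)).
    A classical baseline that ignores request sizes and fragmentation,
    unlike the Markov framework described above."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# blocking drops as capacity grows for a fixed offered load of 6 Erlangs
p10 = erlang_b(10, 6.0)
p5 = erlang_b(5, 6.0)
```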
2012-01-01
Background Osteoporotic hip fractures represent a major cause of disability, loss of quality of life and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture from BMD measurements. The combination of biomechanical models with clinical studies could better estimate bone strength and support the specialists in their decisions. Methods A model to assess the probability of fracture, based on Damage and Fracture Mechanics, has been developed, evaluating the mechanical magnitudes involved in the fracture process from clinical BMD measurements. The model is intended to simulate the degenerative process in the skeleton, with the consequent loss of bone mass and hence the decrease of its mechanical resistance, which enables fracture under different traumatisms. Clinical studies were chosen, both under non-treatment conditions and under drug therapy, and fitted to specific patients according to their actual BMD measurements. The predictive model is applied in a FE simulation of the proximal femur. The fracture zone is determined according to the loading scenario (sideways fall, impact, accidental loads, etc.), using the mechanical properties of bone obtained from the evolutionary model corresponding to the considered time. Results BMD evolution in untreated patients and in those under different treatments was analyzed. Evolutionary curves of fracture probability were obtained from the evolution of mechanical damage. The evolutionary curve of the untreated group of patients presented a marked increase of the fracture probability, while the curves of patients under drug treatment showed variably decreased risks, depending on the therapy type. Conclusion The FE model allowed obtaining detailed maps of damage and fracture probability, identifying high-risk local zones at the femoral neck and the intertrochanteric and subtrochanteric areas, which are the typical locations of osteoporotic hip fractures. The
Haber, M.; An, Q.; Foppa, I. M.; Shay, D. K.; Ferdinands, J. M.; Orenstein, W. A.
2014-01-01
Summary As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARI) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly-used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care against ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs. PMID:25147970
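The test-negative design estimator discussed above reduces to one minus an odds ratio computed from a 2x2 table of vaccination status by influenza test result. The sketch below implements that standard estimator; the counts in the example are invented for illustration.

```python
def vaccine_effectiveness_tnd(flu_vax, flu_unvax, neg_vax, neg_unvax):
    """Test-negative design VE estimate: 1 minus the odds ratio of
    vaccination among influenza-positive versus influenza-negative ARI
    patients. Standard TND estimator; counts are illustrative."""
    odds_ratio = (flu_vax * neg_unvax) / (flu_unvax * neg_vax)
    return 1.0 - odds_ratio

# 40 of 140 flu cases vaccinated; 100 of 200 test-negative controls vaccinated
ve = vaccine_effectiveness_tnd(40, 100, 100, 100)
```

Under the conditions stated in the abstract (vaccination does not affect non-influenza ARI risk, and care-seeking ratios are equal across etiologies), this estimator is unbiased for VE against symptomatic influenza.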
Lexicographic Probability, Conditional Probability, and Nonstandard Probability
2009-11-11
the following conditions: CP1. µ(U | U) = 1 if U ∈ F′. CP2. µ(V1 ∪ V2 | U) = µ(V1 | U) + µ(V2 | U) if V1 ∩ V2 = ∅, U ∈ F′, and V1, V2 ∈ F. CP3. µ(V | U) = µ(V | X) × µ(X | U) if V ⊆ X ⊆ U, U, X ∈ F′, V ∈ F. Note that it follows from CP1 and CP2 that µ(· | U) is a probability measure on (W, F) (and, in… CP2 hold. This is easily seen to determine µ. Moreover, µ vacuously satisfies CP3, since there do not exist distinct sets U and X in F′ such that U
NASA Astrophysics Data System (ADS)
Tan, Elcin
A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the
Wall, Clifton; Boersma, Bendiks Jan; Moin, Parviz
2000-10-01
The assumed beta distribution model for the subgrid-scale probability density function (PDF) of the mixture fraction in large eddy simulation of nonpremixed, turbulent combustion is tested, a priori, for a reacting jet having significant heat release (density ratio of 5). The assumed beta distribution is tested as a model for both the subgrid-scale PDF and the subgrid-scale Favre PDF of the mixture fraction. The beta model is successful in approximating both types of PDF but is slightly more accurate in approximating the normal (non-Favre) PDF. To estimate the subgrid-scale variance of mixture fraction, which is required by the beta model, both a scale similarity model and a dynamic model are used. Predictions using the dynamic model are found to be more accurate. The beta model is used to predict the filtered value of a function chosen to resemble the reaction rate. When no model is used, errors in the predicted value are of the same order as the actual value. The beta model is found to reduce this error by about a factor of two, providing a significant improvement. (c) 2000 American Institute of Physics.
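The assumed beta PDF is fixed by matching its first two moments to the resolved mean and modeled subgrid variance of the mixture fraction. The method-of-moments conversion below is the standard way to do this; the guard condition and sample values are ours, not from the paper.

```python
def beta_parameters(mean, variance):
    """Shape parameters (alpha, beta) of the assumed beta PDF of mixture
    fraction, obtained by method of moments from the resolved mean and
    modeled subgrid variance. Requires 0 < mean < 1 and
    0 < variance < mean * (1 - mean)."""
    if not (0.0 < mean < 1.0 and 0.0 < variance < mean * (1.0 - mean)):
        raise ValueError("moments outside the realizable region")
    factor = mean * (1.0 - mean) / variance - 1.0
    alpha = mean * factor
    beta = (1.0 - mean) * factor
    return alpha, beta

# e.g. mean mixture fraction 0.3 with subgrid variance 0.02
a, b = beta_parameters(0.3, 0.02)
```

Filtered reaction-rate-like quantities are then obtained by integrating the chemistry against this beta PDF, with the variance supplied by a scale-similarity or dynamic model as described above.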
High-Latitude Filtering in a Global Grid-Point Model Using Model Normal Modes
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Navon, I. M.; Kalnay, E.
1985-01-01
The aim of high-latitude filtering in the vicinity of the poles is to avoid the excessively short time steps imposed on an explicit time-differencing scheme by the linear stability constraint due to fast-moving inertia-gravity waves near the poles. Model normal mode expansion was applied to the problem of high-latitude filtering in a global shallow-water model, following the same philosophy that Daley used for the problem of large time steps in primitive-equation models with explicit time-integration schemes.
Adamovich, Igor V.
2014-04-15
A three-dimensional, nonperturbative, semiclassical analytic model of vibrational energy transfer in collisions between a rotating diatomic molecule and an atom, and between two rotating diatomic molecules (Forced Harmonic Oscillator–Free Rotation model) has been extended to incorporate rotational relaxation and coupling between vibrational, translational, and rotational energy transfer. The model is based on analysis of semiclassical trajectories of rotating molecules interacting by a repulsive exponential atom-to-atom potential. The model predictions are compared with the results of three-dimensional close-coupled semiclassical trajectory calculations using the same potential energy surface. The comparison demonstrates good agreement between analytic and numerical probabilities of rotational and vibrational energy transfer processes, over a wide range of total collision energies, rotational energies, and impact parameter. The model predicts probabilities of single-quantum and multi-quantum vibrational-rotational transitions and is applicable up to very high collision energies and quantum numbers. Closed-form analytic expressions for these transition probabilities lend themselves to straightforward incorporation into DSMC nonequilibrium flow codes.
A normal tissue dose response model of dynamic repair processes
NASA Astrophysics Data System (ADS)
Alber, Markus; Belka, Claus
2006-01-01
A model is presented for serial, critical element complication mechanisms for irradiated volumes from length scales of a few millimetres up to the entire organ. The central element of the model is the description of radiation complication as the failure of a dynamic repair process. The nature of the repair process is seen as reestablishing the structural organization of the tissue, rather than mere replenishment of lost cells. The interactions between the cells, such as migration, involved in the repair process are assumed to have finite ranges, which limits the repair capacity and is the defining property of a finite-sized reconstruction unit. Since the details of the repair processes are largely unknown, the development aims to make the most general assumptions about them. The model employs analogies and methods from thermodynamics and statistical physics. An explicit analytical form of the dose response of the reconstruction unit for total, partial and inhomogeneous irradiation is derived. The use of the model is demonstrated with data from animal spinal cord experiments and clinical data about heart, lung and rectum. The three-parameter model lends a new perspective to the equivalent uniform dose formalism and the established serial and parallel complication models. Its implications for dose optimization are discussed.
A two-stage approach in solving the state probabilities of the multi-queue M/G/1 model
NASA Astrophysics Data System (ADS)
Chen, Mu-Song; Yen, Hao-Wei
2016-04-01
The M/G/1 model is the fundamental basis of the queueing system in many network systems. Usually, the study of the M/G/1 is limited by the assumption of single queue and infinite capacity. In practice, however, these postulations may not be valid, particularly when dealing with many real-world problems. In this paper, a two-stage state-space approach is devoted to solving the state probabilities for the multi-queue finite-capacity M/G/1 model, i.e. q-M/G/1/Ki with Ki buffers in the ith queue. The state probabilities at departure instants are determined by solving a set of state transition equations. Afterward, an embedded Markov chain analysis is applied to derive the state probabilities with another set of state balance equations at arbitrary time instants. The closed forms of the state probabilities are also presented with theorems for reference. Applications of Little's theorem further present the corresponding results for queue lengths and average waiting times. Simulation experiments have demonstrated the correctness of the proposed approaches.
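The single-queue, infinite-capacity baseline that the paper generalizes has a closed-form mean waiting time: the Pollaczek-Khinchine formula. The sketch below evaluates it; this is the classical result, not the paper's multi-queue finite-capacity analysis, and the example parameters are illustrative.

```python
def mg1_mean_wait(lam, mean_service, var_service):
    """Mean waiting time in queue for a single-queue, infinite-capacity
    M/G/1 system via the Pollaczek-Khinchine formula:
        Wq = lam * E[S^2] / (2 * (1 - rho)),  rho = lam * E[S].
    The classical baseline generalized by the multi-queue analysis above."""
    rho = lam * mean_service
    if rho >= 1.0:
        raise ValueError("unstable queue: need utilization rho < 1")
    es2 = var_service + mean_service ** 2   # second moment of service time
    return lam * es2 / (2.0 * (1.0 - rho))

# M/M/1 special case: exponential service (mean 1, variance 1), arrivals 0.5
w_mm1 = mg1_mean_wait(0.5, 1.0, 1.0)
# deterministic service halves the waiting time at the same load
w_md1 = mg1_mean_wait(0.5, 1.0, 0.0)
```

Little's theorem then converts these waiting times into mean queue lengths (L = lam * W), the same step the abstract applies to its state probabilities.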
NASA Astrophysics Data System (ADS)
Smith, L.
2004-12-01
The aim of constructing a forecast from the best model(s) simulations should be distinguished from the aim of improving the model(s) whenever possible. The common confusion of these distinct aims in earth system science sometimes results both in the misinterpretation of results and in a less than ideal experimental design. The motivation, resource distribution, and scientific goals of these two aims almost always differ in the earth sciences. The goal of this talk is to illustrate these differences in the contexts of operational weather forecasting and that of climate modelling. We adopt the mathematical framework of indistinguishable states (Judd and Smith, Physica D, 2001 & 2004), which allows us to clarify fundamental limitations on any attempt to extract accountable (physically relevant) probability forecasts from imperfect models of any physical system, even relatively simple ones. Operational weather forecasts from ECMWF and NCEP are considered in the light of THORPEX societal goals. Monte Carlo experiments in general, and ensemble systems in particular, generate distributions of simulations, but the interpretation of the output depends on the design of the ensemble, and this in turn is rather different if the aim is to better understand the model rather than to better predict electricity demand. Also, we show that there are alternatives to interpreting the ensemble as a probability forecast, alternatives that are sometime more relevant to industrial applications. Extracting seasonal forecasts from multi-model, multi-initial condition ensembles of simulations is also discussed. Finally, two different approaches to interpreting ensembles of climate model simulations are discussed. Our main conclusions reflect the need to distinguish the ways and means of using geophysical ensembles for model improvement from their applications to socio-economic risk management and policy, and to verify the physical relevance of requested deliverables like probability forecasts.
Mesh-Based Entry Vehicle and Explosive Debris Re-Contact Probability Modeling
NASA Technical Reports Server (NTRS)
McPherson, Mark A.; Mendeck, Gavin F.
2011-01-01
Quantifying the risk to a crewed vehicle arising from potential re-contact with fragments from an explosive breakup of any jettisoned spacecraft segments during entry has long been a goal. However, great difficulty lies in efficiently capturing the potential locations of each fragment and their collective threat to the vehicle. The method presented in this paper addresses this problem by using a stochastic approach that discretizes simulated debris pieces into volumetric cells, and then assesses strike probabilities accordingly. Combining spatial debris density and relative velocity between the debris and the entry vehicle, the strike probability can be calculated from the integral of the debris flux inside each cell over time. Using this technique it is possible to assess the risk to an entry vehicle along an entire trajectory as it separates from the jettisoned segment. By decoupling the fragment trajectories from that of the entry vehicle, multiple potential separation maneuvers can then be evaluated rapidly to provide an assessment of the best strategy to mitigate the re-contact risk.
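The flux-integral step can be sketched as a Poisson strike model: in each cell, the strike hazard is debris density times relative speed times projected area times dwell time, and the probability of at least one strike follows from the integrated flux. The function, cell layout, and values below are a generic sketch under that assumption, not the authors' implementation.

```python
import math

def strike_probability(cells, cross_section):
    """Probability of at least one debris strike along a trajectory.

    cells: list of (debris_density, relative_speed, dwell_time) tuples for
    each volumetric cell the entry vehicle traverses; cross_section is the
    vehicle's projected area. Assumes Poisson strikes, so the no-strike
    probability decays exponentially with the integrated flux."""
    integrated_flux = sum(rho * v * cross_section * t for rho, v, t in cells)
    return 1.0 - math.exp(-integrated_flux)

# illustrative values: density in fragments/m^3, speed in m/s, time in s
cells = [(1e-6, 200.0, 0.5), (5e-7, 250.0, 0.5)]
p = strike_probability(cells, cross_section=10.0)
```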
DotKnot: pseudoknot prediction using the probability dot plot under a refined energy model.
Sperschneider, Jana; Datta, Amitava
2010-04-01
RNA pseudoknots are functional structure elements with key roles in viral and cellular processes. Prediction of a pseudoknotted minimum free energy structure is an NP-complete problem. Practical algorithms for RNA structure prediction including restricted classes of pseudoknots suffer from high runtime and poor accuracy for longer sequences. A heuristic approach is to search for promising pseudoknot candidates in a sequence and verify those. Afterwards, the detected pseudoknots can be further analysed using bioinformatics or laboratory techniques. We present a novel pseudoknot detection method called DotKnot that extracts stem regions from the secondary structure probability dot plot and assembles pseudoknot candidates in a constructive fashion. We evaluate pseudoknot free energies using novel parameters, which have recently become available. We show that the conventional probability dot plot makes a wide class of pseudoknots including those with bulged stems manageable in an explicit fashion. The energy parameters now become the limiting factor in pseudoknot prediction. DotKnot is an efficient method for long sequences, which finds pseudoknots with higher accuracy compared to other known prediction algorithms. DotKnot is accessible as a web server at http://dotknot.csse.uwa.edu.au.
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
Nixon, Zachary; Michel, Jacqueline
2015-04-07
To better understand the distribution of remaining lingering subsurface oil residues from the Exxon Valdez oil spill (EVOS) along the shorelines of Prince William Sound (PWS), AK, we revised previous modeling efforts to allow spatially explicit predictions of the distribution of subsurface oil. We used a set of pooled field data and predictor variables stored as Geographic Information Systems (GIS) data to generate calibrated boosted tree models predicting the encounter probability of different categories of subsurface oil. The models demonstrated excellent predictive performance as evaluated by cross-validated performance statistics. While the average encounter probabilities at most shoreline locations are low across western PWS, clusters of shoreline locations with elevated encounter probabilities remain in the northern parts of the PWS, as well as more isolated locations. These results can be applied to estimate the location and amount of remaining oil, evaluate potential ongoing impacts, and guide remediation. This is the first application of quantitative machine-learning based modeling techniques in estimating the likelihood of ongoing, long-term shoreline oil persistence after a major oil spill.
NASA Astrophysics Data System (ADS)
Wenmackers, S.; Vanpoucke, D. E. P.; Douven, I.
2012-01-01
We present a model for studying communities of epistemically interacting agents who update their belief states by averaging (in a specified way) the belief states of other agents in the community. The agents in our model have a rich belief state, involving multiple independent issues which are interrelated in such a way that they form a theory of the world. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating (in the given way). To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. It is shown that, under the assumptions of our model, an agent always has a probability of less than 2% of ending up in an inconsistent belief state. Moreover, this probability can be made arbitrarily small by increasing the number of independent issues the agents have to judge or by increasing the group size. A real-world situation to which this model applies is a group of experts participating in a Delphi-study.
Presenting Thin Media Models Affects Women's Choice of Diet or Normal Snacks
ERIC Educational Resources Information Center
Krahe, Barbara; Krause, Christina
2010-01-01
Our study explored the influence of thin- versus normal-size media models and of self-reported restrained eating behavior on women's observed snacking behavior. Fifty female undergraduates saw a set of advertisements for beauty products showing either thin or computer-altered normal-size female models, allegedly as part of a study on effective…
Confidence Probability versus Detection Probability
Axelrod, M
2005-08-18
In a discovery sampling activity the auditor seeks to vet an inventory by measuring (or inspecting) a random sample of items from the inventory. When the auditor finds every sample item in compliance, he must then make a confidence statement about the whole inventory. For example, the auditor might say: "We believe that this inventory of 100 items contains no more than 5 defectives with 95% confidence." Note this is a retrospective statement in that it asserts something about the inventory after the sample was selected and measured. Contrast this to the prospective statement: "We will detect the existence of more than 5 defective items in this inventory with 95% probability." The former uses confidence probability while the latter uses detection probability. For a given sample size, the two probabilities need not be equal; indeed, they can differ significantly. Both these probabilities critically depend on the auditor's prior belief about the number of defectives in the inventory and how he defines non-compliance. In other words, the answer strongly depends on how the question is framed.
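The prospective (detection) view can be made concrete with the hypergeometric distribution: if the inventory truly contained more than 5 defectives, how likely is the sample to catch at least one? The sample size of 30 below is an illustrative choice, not a figure from the abstract.

```python
from math import comb

def miss_probability(inventory, defectives, sample):
    """Hypergeometric P(a random sample contains none of the defectives)."""
    if defectives > inventory - sample:
        return 0.0  # sample is too large to avoid every defective
    return comb(inventory - defectives, sample) / comb(inventory, sample)

# Prospective statement: probability of detecting the existence of more
# than 5 defectives (i.e., at least 6) in a 100-item inventory, sample of 30.
detection = 1.0 - miss_probability(100, 6, 30)
print(f"detection probability: {detection:.3f}")
```

With these numbers the sample of 30 detects 6 defectives with probability of about 0.89, which need not match the 95% retrospective confidence statement for the same sample size.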
Lee, Soomin; Lee, Heeyoung; Lee, Joo-Yeon; Skandamis, Panagiotis; Park, Beom-Young; Oh, Mi-Hwa; Yoon, Yohan
2013-11-01
In this study, mathematical models were developed to predict the growth probability and kinetic behavior of Listeria monocytogenes on fresh pork skin during storage at different temperatures. A 10-strain mixture of L. monocytogenes was inoculated on fresh pork skin (3 by 5 cm) at 4 log CFU/cm². The inoculated samples were stored aerobically at 4, 7, and 10 °C for 240 h, at 15 and 20 °C for 96 h, and at 25 and 30 °C for 12 h. The Baranyi model was fitted to L. monocytogenes growth data on PALCAM agar to calculate the maximum specific growth rate, lag-phase duration, the lower asymptote, and the upper asymptote. The kinetic parameters were then further analyzed as a function of storage temperature. The model simulated growth of L. monocytogenes under constant and changing temperatures, and the performances of the models were evaluated by the root mean square error and bias factor (Bf). Of the 49 combinations (temperature × sampling time), the combinations with significant growth (P < 0.05) of L. monocytogenes were assigned a value of 1, and the combinations with nonsignificant growth (P > 0.05) were given a value of 0. These data were analyzed by logistic regression to develop a model predicting the probabilities of L. monocytogenes growth. At 4 to 10 °C, obvious L. monocytogenes growth was observable after 24 h of storage; but, at other temperatures, the pathogen had obvious growth after 12 h of storage. Because the root mean square error value (0.184) and Bf (1.01) were close to 0 and 1, respectively, the performance of the developed model was acceptable, and the probabilistic model also showed good performance. These results indicate that the developed model should be useful in predicting kinetic behavior and calculating growth probabilities of L. monocytogenes as a function of temperature and time.
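The Baranyi primary model mentioned above can be sketched compactly in natural-log (ln CFU) units; the parameter values below are hypothetical placeholders, not the study's fitted estimates.

```python
import math

def baranyi(t, y0, ymax, mu_max, lag):
    """Baranyi-Roberts growth curve, ln CFU units.
    t: time (h); y0/ymax: lower/upper asymptotes (ln CFU);
    mu_max: maximum specific growth rate (1/h);
    lag: lag-phase duration (h), encoded via h0 = mu_max * lag."""
    h0 = mu_max * lag
    A = t + (1.0 / mu_max) * math.log(
        math.exp(-mu_max * t) + math.exp(-h0) - math.exp(-mu_max * t - h0))
    return y0 + mu_max * A - math.log(
        1.0 + (math.exp(mu_max * A) - 1.0) / math.exp(ymax - y0))

# hypothetical parameters: 4 -> 9 ln CFU, mu_max = 0.5/h, 3 h lag
for t in (0, 6, 24, 96):
    print(t, round(baranyi(t, 4.0, 9.0, 0.5, 3.0), 2))
```

The curve starts at the lower asymptote, stays near it during the lag phase, grows at rate mu_max, and saturates at the upper asymptote, which is how the four fitted kinetic parameters enter the model.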
2-D Model for Normal and Sickle Cell Blood Microcirculation
NASA Astrophysics Data System (ADS)
Tekleab, Yonatan; Harris, Wesley
2011-11-01
Sickle cell disease (SCD) is a genetic disorder that alters the red blood cell (RBC) structure and function such that hemoglobin (Hb) cannot effectively bind and release oxygen. Previous computational models have been designed to study the microcirculation for insight into blood disorders such as SCD. Our novel 2-D computational model represents a fast, time-efficient method developed to analyze flow dynamics, O2 diffusion, and cell deformation in the microcirculation. The model uses a finite difference, Crank-Nicolson scheme to compute the flow and O2 concentration, and the level set computational method to advect the RBC membrane on a staggered grid. Several sets of initial and boundary conditions were tested. Simulation data indicate a few parameters to be significant in the perturbation of the blood flow and O2 concentration profiles. Specifically, the Hill coefficient, arterial O2 partial pressure, O2 partial pressure at 50% Hb saturation, and cell membrane stiffness are significant factors. Results were found to be consistent with those of Le Floch [2010] and Secomb [2006].
Detectability models and waveform design for multiple access Low-Probability-of-Intercept networks
NASA Astrophysics Data System (ADS)
Mills, Robert F.
1994-04-01
Increased connectivity demands in the tactical battlefield have led to the development of multiple access low probability-of-intercept (LPI) communication networks. Most detectability studies of LPI networks have focused on the individual network links, in which detectability calculations are carried out for a single network emitter. This report, however, presents a different approach to network detectability analysis: it is assumed that the interceptor does not attempt to distinguish one emitter from another, but rather decides only if a network is operating or not. What distinguishes this approach from conventional link intercept analysis is that detection decisions are based on energy received from multiple sources. The following multiple access schemes are considered: frequency division, time division, direct sequence code division, and frequency hop code division. The wideband radiometer and its hybrids, such as the channelized radiometer, are used as potential network intercept receivers.
Optimal estimation for regression models on τ-year survival probability.
Kwak, Minjung; Kim, Jinseog; Jung, Sin-Ho
2015-01-01
A logistic regression method can be applied to regressing the τ-year survival probability to covariates, if there are no censored observations before time τ. But if some observations are incomplete due to censoring before time τ, then the logistic regression cannot be applied. Jung (1996) proposed to modify the score function for logistic regression to accommodate the right-censored observations. His modified score function, motivated by consistent estimation of the regression parameters, becomes a regular logistic score function if no observations are censored before time τ. In this article, we propose a modification of Jung's estimating function for an optimal estimation of the regression parameters in addition to consistency. We prove that the optimal estimator is more efficient than Jung's estimator. This theoretical comparison is illustrated with a real example data analysis and simulations.
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash, and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043
Reliability and structural integrity. [analytical model for calculating crack detection probability
NASA Technical Reports Server (NTRS)
Davidson, J. R.
1973-01-01
An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.
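The Bayes'-theorem step of such a model can be illustrated in miniature: an inspection with a given probability of detection (POD) that finds nothing lowers, but does not eliminate, the probability that a crack is present. This is a hedged sketch of the reasoning, not the report's actual reliability model, and the numbers are hypothetical.

```python
def posterior_crack_probability(prior, pod):
    """P(crack present | inspection found nothing), by Bayes' theorem.
    prior: P(crack present) before the inspection;
    pod: probability the inspection detects a crack that is present."""
    missed = prior * (1.0 - pod)          # crack present AND not detected
    return missed / (missed + (1.0 - prior))

# e.g. a 10% prior crack probability and a 90%-POD inspection
print(round(posterior_crack_probability(0.10, 0.90), 4))
```

Between inspections, crack growth would push this posterior back up, which is why the model tracks undiscovered cracks across repeated inspections.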
NASA Astrophysics Data System (ADS)
Salis, Michele; Arca, Bachisio; Bacciu, Valentina; Spano, Donatella; Duce, Pierpaolo; Santoni, Paul; Ager, Alan; Finney, Mark
2010-05-01
Characterizing the spatial pattern of large fire occurrence and severity is an important component of fire management planning in the Mediterranean region. The spatial characterization of fire probabilities, fire behavior distributions and value changes are key components for quantitative risk assessment and for prioritizing fire suppression resources, fuel treatments and law enforcement. Because of the growing wildfire severity and frequency in recent years (e.g., Portugal, 2003 and 2005; Italy and Greece, 2007 and 2009), there is an increasing demand for models and tools that can aid in wildfire prediction and prevention. Newer wildfire simulation systems offer promise in this regard, and allow for fine scale modeling of wildfire severity and probability. Several new applications have resulted from the development of a minimum travel time (MTT) fire spread algorithm (Finney, 2002), which models fire growth by searching for the minimum time for fire to travel among nodes in a 2D network. The MTT approach makes it computationally feasible to simulate thousands of fires and generate burn probability and fire severity maps over large areas. The MTT algorithm is embedded in a number of research and fire modeling applications. High performance computers are typically used for MTT simulations, although the algorithm is also implemented in the FlamMap program (www.fire.org). In this work, we describe the application of the MTT algorithm to estimate spatial patterns of burn probability and to analyze wildfire severity in three fire-prone areas of the Mediterranean Basin, specifically the Sardinia (Italy), Sicily (Italy) and Corsica (France) islands. We assembled fuels and topographic data for the simulations in 500 x 500 m grids for the study areas. The simulations were run using 100,000 ignitions under weather conditions that replicated severe and moderate weather conditions (97th and 70th percentile, July and August weather, 1995-2007). We used both random ignition locations
Ferrante, L; Bompadre, S; Leone, L; Montanari, M P
2005-06-01
Time-kill curves have frequently been employed to study the antimicrobial effects of antibiotics. The relevance of pharmacodynamic modeling to these investigations has been emphasized in many studies of bactericidal kinetics. Stochastic models are needed that take into account the randomness of the mechanisms of both bacterial growth and bacteria-drug interactions. However, most of the models currently used to describe antibiotic activity against microorganisms are deterministic. In this paper we examine a stochastic differential equation representing a stochastic version of a pharmacodynamic model of bacterial growth undergoing random fluctuations, and derive its solution, mean value and covariance structure. An explicit likelihood function is obtained both when the process is observed continuously over a period of time and when data is sampled at time points, as is the custom in these experimental conditions. Some asymptotic properties of the maximum likelihood estimators for the model parameters are discussed. The model is applied to analyze in vitro time-kill data and to estimate model parameters; the probability of the bacterial population size dropping below some critical threshold is also evaluated. Finally, the relationship between bacterial extinction probability and the pharmacodynamic parameters estimated is discussed.
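A minimal sketch of this kind of stochastic pharmacodynamic model, assuming geometric Brownian dynamics with a constant kill rate and hypothetical parameter values (the authors' actual model and likelihood-based estimators are richer), estimates the extinction probability by Monte Carlo simulation of the stochastic differential equation:

```python
import math
import random

def extinction_probability(n0=1e6, growth=0.3, kill=0.8, sigma=0.2,
                           t_end=24.0, dt=0.01, threshold=1.0,
                           n_paths=500, seed=1):
    """Euler-Maruyama simulation of dN = (growth - kill) N dt + sigma N dW,
    run in log-space for numerical stability; returns the fraction of
    sample paths whose population drops below `threshold` before t_end."""
    rng = random.Random(seed)
    drift = growth - kill - 0.5 * sigma ** 2  # Ito correction in log-space
    step_sd = sigma * math.sqrt(dt)
    log_thr = math.log(threshold)
    steps = int(t_end / dt)
    hits = 0
    for _ in range(n_paths):
        x = math.log(n0)
        for _ in range(steps):
            x += drift * dt + step_sd * rng.gauss(0.0, 1.0)
            if x < log_thr:
                hits += 1
                break
    return hits / n_paths

print(extinction_probability())
```

Raising the kill rate relative to the growth rate drives the estimated probability of the bacterial population falling below the critical threshold toward one, mirroring the extinction-probability analysis in the abstract.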
NASA Technical Reports Server (NTRS)
Chase, Thomas D.; Splawn, Keith; Christiansen, Eric L.
2007-01-01
The NASA Extravehicular Mobility Unit (EMU) micrometeoroid and orbital debris protection ability has recently been assessed against an updated, higher threat space environment model. The new environment was analyzed in conjunction with a revised EMU solid model using a NASA computer code. Results showed that the EMU exceeds the required mathematical Probability of having No Penetrations (PNP) of any suit pressure bladder over the remaining life of the program (2,700 projected hours of 2-person spacewalks). The success probability was calculated to be 0.94, versus a requirement of >0.91, for the current spacesuit's outer protective garment. In parallel to the probability assessment, potential improvements to the current spacesuit's outer protective garment were built and impact tested. A NASA light gas gun was used to launch projectiles at test items, at speeds of approximately 7 km per second. Test results showed that substantial garment improvements could be made, with mild material enhancements and moderate assembly development. The spacesuit's PNP would improve marginally with the tested enhancements, if they were available for immediate incorporation. This paper discusses the results of the model assessment process and test program. These findings add confidence to the continued use of the existing NASA EMU during International Space Station (ISS) assembly and Shuttle Operations. They provide a viable avenue for improved hypervelocity impact protection for the EMU, or for future space suits.
A model for predicting GPS-GDOP and its probability using LiDAR data and ultra rapid product
NASA Astrophysics Data System (ADS)
Lohani, Bharat; Kumar, Raman
2008-12-01
This paper presents a model to predict the GDOP value (Geometric Dilution of Precision) and the probability of its occurrence at a point in space and time using airborne LiDAR (Light Detection and Ranging) data and the ultra-rapid product (URP) available from the International GPS Service. LiDAR data help to classify the terrain around a GPS (Global Positioning System) receiver into categories such as ground, opaque objects, translucent objects and transparent regions as per their response to the transmission of GPS signal. Through field experiments it is established that URP can be used satisfactorily to determine GDOP. Further experiments have shown that the translucent objects (mainly trees here) lower the GDOP quality as they obstruct the GPS signal. LiDAR data density on trees is used as a measure of the translucency of trees to the GPS signal. Through GPS observations taken in field a relationship has been established between LiDAR data density on trees and the probability that a satellite which is behind the tree is visible at the GPS receiver. A model is presented which, for all possible combinations of visible satellites, computes the GDOP value along with the probability of occurrence of this GDOP. A few results are presented to show the performance of the model developed and its possible application in location based queries.
NASA Astrophysics Data System (ADS)
Beyerlein, Irene Jane
Many next generation, structural composites are likely to be engineered from stiff fibers embedded in ceramic, metallic, or polymeric matrices. Ironically, complexity in composite failure response, rendering them superior to traditional materials, also makes them difficult to characterize for high reliability design. Challenges lie in modeling the interacting, randomly evolving micromechanical damage, such as fiber break nucleation and coalescence, and in the fact that strength, lifetime, and failure mode vary substantially between otherwise identical specimens. My thesis research involves developing (i) computational, micromechanical stress transfer models around multiple fiber breaks in fiber composites, (ii) Monte Carlo simulation models to reproduce their failure process, and (iii) interpretative probability models. In Chapter 1, a Monte Carlo model is developed to study the effects of fiber strength statistics on the fracture process and strength distribution of unnotched and notched N elastic composite laminae. The simulation model couples a micromechanical stress analysis, called break influence superposition, and Weibull fiber strengths, wherein fiber strength varies negligibly along fiber length. Examination of various statistical aspects of composite failure reveals mechanisms responsible for flaw intolerance in the short notch regime and for toughness in the long notch regime. Probability models and large N approximations are developed in Chapter 2 to model the effects of variation in fiber strength on statistical composite fracture response. Based on the probabilities of simple sequences of failure events, probability models for crack and distributed cluster growth and fracture resistance are developed. Comparisons with simulation results show that these models and approximations successfully predicted the unnotched and notched composite strength distributions and that fracture toughness grows slowly as (ln N)^{1/γ}, where γ is the fiber Weibull
Davies, Christopher E; Giles, Lynne C; Glonek, Gary Fv
2017-01-01
One purpose of a longitudinal study is to gain insight into how characteristics at earlier points in time can impact on subsequent outcomes. Typically, the outcome variable varies over time and the data for each individual can be used to form a discrete path of measurements, that is a trajectory. Group-based trajectory modelling methods seek to identify subgroups of individuals within a population with trajectories that are more similar to each other than to trajectories in distinct groups. An approach to modelling the influence of covariates measured at earlier time points in the group-based setting is to consider models wherein these covariates affect the group membership probabilities. Models in which prior covariates impact the trajectories directly are also possible but are not considered here. In the present study, we compared six different methods for estimating the effect of covariates on the group membership probabilities, which have different approaches to account for the uncertainty in the group membership assignment. We found that when investigating the effect of one or several covariates on a group-based trajectory model, the full likelihood approach minimized the bias in the estimate of the covariate effect. In this '1-step' approach, the estimation of the effect of covariates and the trajectory model are carried out simultaneously. Of the '3-step' approaches, where the effect of the covariates is assessed subsequent to the estimation of the group-based trajectory model, only Vermunt's improved 3-step approach resulted in estimates with bias similar in size to that of the full likelihood approach. The remaining methods considered resulted in considerably higher bias in the covariate effect estimates and should not be used. In addition to the bias empirically demonstrated for the probability regression approach, we have shown analytically that it is biased in general.
Probability and radical behaviorism
Espinosa, James M.
1992-01-01
The concept of probability appears to be very important in the radical behaviorism of Skinner. Yet, it seems that this probability has not been accurately defined and is still ambiguous. I give a strict, relative frequency interpretation of probability and its applicability to the data from the science of behavior as supplied by cumulative records. Two examples of stochastic processes are given that may model the data from cumulative records that result under conditions of continuous reinforcement and extinction, respectively. PMID:22478114
A First Comparison of Multiple Probability Hazard Outputs from Three Global Flood Models
NASA Astrophysics Data System (ADS)
Trigg, M. A.; Bates, P. D.; Fewtrell, T. J.; Yamazaki, D.; Pappenberger, F.; Winsemius, H.
2014-12-01
With research advances in algorithms, remote sensing data sets and computing power, global flood models are now a practical reality. There are a number of different research models currently available or in development, and as these models mature and output becomes available for use, there is great interest in how these different models compare and how useful they may be at different scales. At the kick-off meeting of the Global Flood Partnership (GFP) in March 2014, the need to compare these new global flood models was identified as a research priority, both for developers of the models and users of the output. The Global Flood Partnership (GFP) is an informal network of scientists and practitioners from public, private and international organisations providing or using global flood monitoring, modelling and forecasting. (http://portal.gdacs.org/Global-Flood-Partnership). On behalf of the GFP, the Willis Research Network is undertaking this comparison research, and the work presented here is the result of the first phase of this comparison for three models: CaMa-Flood, GLOFRIS and ECMWF. The comparison analysis is undertaken for the entire African continent, identified by GFP members as the best location to facilitate data sharing by model teams and where there was the most interest from potential users of the model outputs. Initial analysis results include flooded area for a range of hazard return periods (25, 50, 100, 250, 500, 1000 years), and this is also compared against catchment sizes and climatic zones. Results will be discussed in the context of the different model structures and input data used, while also addressing scale issues and practicalities of use. Finally, plans for the validation of the models against microwave and optical remote sensing data will be outlined.
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle.
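The AUC has a simple empirical reading: the probability that a randomly chosen diseased subject's score exceeds that of a non-diseased subject. A minimal sketch with hypothetical scores (this is the standard Mann-Whitney estimator, not the paper's Bayesian method for imperfect reference standards):

```python
def empirical_auc(diseased, healthy):
    """Mann-Whitney form of the AUC: fraction of (diseased, healthy)
    score pairs where the diseased score is higher (ties count one half)."""
    pairs = [(d > h) + 0.5 * (d == h) for d in diseased for h in healthy]
    return sum(pairs) / len(pairs)

# Combining biomarkers: score each subject by a predictive probability of
# disease computed from multiple biomarkers, then take the AUC of those
# scores -- that is the combined AUC (cAUC) idea in the abstract.
print(empirical_auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2]))
```

A cAUC computed from a multi-biomarker predictive probability can then be compared against the AUC of any single biomarker's scores using the same function.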
NASA Astrophysics Data System (ADS)
Mahanti, P.; Robinson, M. S.; Boyd, A. K.
2013-12-01
Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominate the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies) where the true nature of slope probability distribution was not discernible at baselines smaller than a kilometer[2,3,5]. Accordingly, historical modeling of lunar surface slopes probability distributions for applications such as in scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution[1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon[7,8], slopes in lunar craters can now be obtained at baselines as low as 6-meters allowing unprecedented multi-scale (multiple baselines) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines[9]. In this work, we extend the analysis from a probability distribution modeling point-of-view with NAC DEMs to characterize the slope statistics for the floors and walls for the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was
Blyton, Michaela D J; Banks, Sam C; Peakall, Rod; Lindenmayer, David B
2012-02-01
The formal testing of mating system theories with empirical data is important for evaluating the relative importance of different processes in shaping mating systems in wild populations. Here, we present a generally applicable probability modelling framework to test the role of local mate availability in determining a population's level of genetic monogamy. We provide a significance test for detecting departures in observed mating patterns from model expectations based on mate availability alone, allowing the presence and direction of behavioural effects to be inferred. The assessment of mate availability can be flexible and in this study it was based on population density, sex ratio and spatial arrangement. This approach provides a useful tool for (1) isolating the effect of mate availability in variable mating systems and (2) in combination with genetic parentage analyses, gaining insights into the nature of mating behaviours in elusive species. To illustrate this modelling approach, we have applied it to investigate the variable mating system of the mountain brushtail possum (Trichosurus cunninghami) and compared the model expectations with the outcomes of genetic parentage analysis over an 18-year study. The observed level of monogamy was higher than predicted under the model. Thus, behavioural traits, such as mate guarding or selective mate choice, may increase the population level of monogamy. We show that combining genetic parentage data with probability modelling can facilitate an improved understanding of the complex interactions between behavioural adaptations and demographic dynamics in driving mating system variation.
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.
2013-04-01
During a landslide triggering event, the tens to thousands of landslides resulting from the trigger (e.g., earthquake, heavy rainfall) may block a number of sections of the road network, posing a risk to rescue efforts, logistics and accessibility to a region. Here, we present initial results from a semi-stochastic model we are developing to evaluate the probability of landslides intersecting a road network and the network-accessibility implications of this across a region. This was performed in the open source GRASS GIS software, where we took 'model' landslides and dropped them on a 79 km² test area in Collazzone, Umbria, Central Italy, with a given road network (major and minor roads, 404 km in length) and already determined landslide susceptibilities. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m². The number of landslide areas selected for each triggered event iteration was chosen to have an average density of 1 landslide km⁻², i.e. 79 landslide areas chosen randomly for each iteration. Landslides were then 'dropped' over the region semi-stochastically: (i) random points were generated across the study region; (ii) based on the landslide susceptibility map, points were accepted/rejected based on the probability of a landslide occurring at that location. After a point was accepted, it was assigned a landslide area (AL) and length to width ratio. Landslide intersections with roads were then assessed and indices such as the location, number and size of road blockage recorded. The GRASS-GIS model was performed 1000 times in a Monte Carlo-type simulation. Initial results show that for a landslide triggering event of 1 landslide km⁻² over a 79 km² region with 404 km of road, the number of road blockages
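The accept/reject placement step (ii) can be sketched as rejection sampling on a susceptibility grid. This is a toy illustration: the study's implementation is in GRASS GIS, and the grid values below are made up.

```python
import random

def drop_landslides(susceptibility, n_events, rng=None):
    """Semi-stochastic placement: draw random cells uniformly, accept each
    with probability equal to its landslide susceptibility (values in [0, 1]),
    until n_events landslides have been placed."""
    rng = rng or random.Random(42)
    rows, cols = len(susceptibility), len(susceptibility[0])
    placed = []
    while len(placed) < n_events:
        r, c = rng.randrange(rows), rng.randrange(cols)
        if rng.random() < susceptibility[r][c]:
            placed.append((r, c))
    return placed

# hypothetical 3x3 susceptibility map; 79 events ~ 1 landslide/km^2 on 79 km^2
grid = [[0.0, 0.2, 0.8],
        [0.1, 0.5, 0.9],
        [0.0, 0.3, 0.7]]
sites = drop_landslides(grid, 79)
```

Each accepted site would then be assigned an area drawn from the inverse-gamma distribution and tested for intersection with the road network; repeating the whole procedure many times gives the Monte Carlo blockage statistics.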
NASA Astrophysics Data System (ADS)
Tomas, A.; Menendez, M.; Mendez, F. J.; Coco, G.; Losada, I. J.
2012-04-01
In recent decades, freak or rogue waves have become an important topic in engineering and science. Forecasting the occurrence probability of freak waves is a challenge for oceanographers, engineers, physicists and statisticians. There are several mechanisms responsible for the formation of freak waves, and different theoretical formulations (primarily based on numerical models with simplifying assumptions) have been proposed to predict the occurrence probability of a freak wave in a sea state as a function of N (number of individual waves) and kurtosis (k). On the other hand, different attempts to parameterize k as a function of spectral parameters such as the Benjamin-Feir Index (BFI) and the directional spreading (Mori et al., 2011) have been proposed. The objective of this work is twofold: (1) develop a statistical model to describe the uncertainty of the maximum individual wave height, Hmax, considering N and k as covariates; (2) obtain a predictive formulation to estimate k as a function of aggregated sea state spectral parameters. For both purposes, we use free surface measurements (more than 300,000 20-minute sea states) from the Spanish deep water buoy network (Puertos del Estado, Spanish Ministry of Public Works). Non-stationary extreme value models are nowadays widely used to analyze the time-dependent or directional-dependent behavior of extreme values of geophysical variables such as significant wave height (Izaguirre et al., 2010). In this work, a Generalized Extreme Value (GEV) statistical model for the dimensionless maximum wave height (x=Hmax/Hs) in every sea state is used to assess the probability of freak waves. We allow the location, scale and shape parameters of the GEV distribution to vary as a function of k and N. The kurtosis-dependency is parameterized using third-order polynomials and the model is fitted using standard log-likelihood theory, obtaining very good performance in predicting the occurrence probability of freak waves (x>2). Regarding the
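Once the GEV parameters for a sea state are in hand, the freak-wave criterion x = Hmax/Hs > 2 reduces to evaluating a GEV survival function. A minimal sketch with hypothetical parameter values (the paper's location, scale and shape vary with k and N):

```python
import math

def gev_sf(x, loc, scale, shape):
    """P(X > x) for the Generalized Extreme Value distribution
    (shape = xi in the von Mises parameterization)."""
    if abs(shape) < 1e-12:                      # Gumbel limit
        return 1.0 - math.exp(-math.exp(-(x - loc) / scale))
    t = 1.0 + shape * (x - loc) / scale
    if t <= 0.0:                                # x outside the support
        return 0.0 if shape < 0.0 else 1.0
    return 1.0 - math.exp(-t ** (-1.0 / shape))

# freak-wave criterion x = Hmax/Hs > 2, hypothetical GEV parameters
p_freak = gev_sf(2.0, loc=1.5, scale=0.15, shape=-0.05)
print(f"P(Hmax/Hs > 2) = {p_freak:.4f}")
```

Letting `loc`, `scale` and `shape` depend on kurtosis and N, as in the paper, makes this exceedance probability covariate-dependent across sea states.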
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2014-05-01
We discuss the exit probability of the one-dimensional q-voter model and present tools to obtain estimates of this probability, both through simulations in large networks (around 10^7 sites) and analytically in the limit where the network is infinitely large. We argue that the result E(ρ) = ρ^q/[ρ^q + (1-ρ)^q], that was found in three previous works [F. Slanina, K. Sznajd-Weron, and P. Przybyła, Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006; R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007, for the case q = 2; and P. Przybyła, K. Sznajd-Weron, and M. Tabiszewski, Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117, for q > 2] using small networks (around 10^3 sites), is a good approximation, but there are noticeable deviations that appear even for small systems and that do not disappear when the system size is increased (with the notable exception of the case q = 2). We also show that, under some simple and intuitive hypotheses, the exit probability must obey the inequality ρ^q/[ρ^q + (1-ρ)] ≤ E(ρ) ≤ ρ/[ρ + (1-ρ)^q] in the infinite size limit. We believe this settles in the negative the suggestion made [S. Galam and A. C. R. Martins, Europhys. Lett. 95, 48005 (2011), 10.1209/0295-5075/95/48005] that this result would be a finite size effect, with the exit probability actually being a step function. We also show how the result that the exit probability cannot be a step function can be reconciled with the Galam unified frame, which was also a source of controversy.
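The approximate exit probability and the bounds quoted in this abstract are straightforward to evaluate directly; a minimal sketch:

```python
def exit_probability(rho, q):
    """Approximate exit probability E(rho) = rho^q / (rho^q + (1-rho)^q)."""
    return rho**q / (rho**q + (1.0 - rho) ** q)

def bounds(rho, q):
    """Infinite-size-limit bounds from the abstract:
    rho^q/(rho^q + (1-rho)) <= E(rho) <= rho/(rho + (1-rho)^q)."""
    lower = rho**q / (rho**q + (1.0 - rho))
    upper = rho / (rho + (1.0 - rho) ** q)
    return lower, upper

lo, hi = bounds(0.3, 3)
e = exit_probability(0.3, 3)   # lies between the two bounds
```

By symmetry the approximation gives E(1/2) = 1/2 for every q, and the bounds collapse onto it at ρ = 0 and ρ = 1.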
Equivariant minimax dominators of the MLE in the array normal model.
Gerard, David; Hoff, Peter
2015-05-01
Inference about dependencies in a multiway data array can be made using the array normal model, which corresponds to the class of multivariate normal distributions with separable covariance matrices. Maximum likelihood and Bayesian methods for inference in the array normal model have appeared in the literature, but there have not been any results concerning the optimality properties of such estimators. In this article, we obtain results for the array normal model that are analogous to some classical results concerning covariance estimation for the multivariate normal model. We show that under a lower triangular product group, a uniformly minimum risk equivariant estimator (UMREE) can be obtained via a generalized Bayes procedure. Although this UMREE is minimax and dominates the MLE, it can be improved upon via an orthogonally equivariant modification. Numerical comparisons of the risks of these estimators show that the equivariant estimators can have substantially lower risks than the MLE.
Per capita invasion probabilities: an empirical model to predict rates of invasion via ballast water
Reusser, Deborah A.; Lee, Henry; Frazier, Melanie; Ruiz, Gregory M.; Fofonoff, Paul W.; Minton, Mark S.; Miller, A. Whitman
2013-01-01
Ballast water discharges are a major source of species introductions into marine and estuarine ecosystems. To mitigate the introduction of new invaders into these ecosystems, many agencies are proposing standards that establish upper concentration limits for organisms in ballast discharge. Ideally, ballast discharge standards will be biologically defensible and adequately protective of the marine environment. We propose a new technique, the per capita invasion probability (PCIP), for managers to quantitatively evaluate the relative risk of different concentration-based ballast water discharge standards. PCIP represents the likelihood that a single discharged organism will become established as a new nonindigenous species. This value is calculated by dividing the total number of ballast water invaders per year by the total number of organisms discharged from ballast. Analysis was done at the coast-wide scale for the Atlantic, Gulf, and Pacific coasts, as well as the Great Lakes, to reduce uncertainty due to secondary invasions between estuaries on a single coast. The PCIP metric is then used to predict the rate of new ballast-associated invasions given various regulatory scenarios. Depending upon the assumptions used in the risk analysis, this approach predicts that approximately one new species will invade every 10–100 years with the International Maritime Organization (IMO) discharge standard of fewer than 10 organisms (≥50 μm) per m3 of ballast. This approach resolves many of the limitations associated with other methods of establishing ecologically sound discharge standards, and it allows policy makers to use risk-based methodologies to establish biologically defensible discharge standards.
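The PCIP arithmetic described above is a simple ratio followed by a rate prediction; a sketch with purely illustrative numbers (none of the values below are taken from the paper):

```python
def per_capita_invasion_probability(invaders_per_year, organisms_discharged_per_year):
    """PCIP: likelihood that one discharged organism establishes as a new
    nonindigenous species (invaders per year / organisms discharged per year)."""
    return invaders_per_year / organisms_discharged_per_year

def predicted_invasion_rate(pcip, concentration_per_m3, discharge_m3_per_year):
    """Expected new invasions per year under a concentration-based standard."""
    return pcip * concentration_per_m3 * discharge_m3_per_year

# Illustrative numbers only: 0.5 invaders/yr historically, 1e13 organisms
# discharged/yr, a 10 organisms/m3 standard, and 2e11 m3/yr of discharge.
pcip = per_capita_invasion_probability(0.5, 1e13)
rate = predicted_invasion_rate(pcip, 10.0, 2e11)
years_between_invasions = 1.0 / rate
```

With these hypothetical inputs the model predicts roughly one new invasion per decade; the paper's actual coast-wide estimates rest on its own discharge and invasion data.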
Probability density function model equation for particle charging in a homogeneous dusty plasma.
Pandya, R V; Mashayek, F
2001-09-01
In this paper, we use the direct interaction approximation (DIA) to obtain an approximate integrodifferential equation for the probability density function (PDF) of charge (q) on dust particles in homogeneous dusty plasma. The DIA is used to solve the closure problem which appears in the PDF equation due to the interactions between the phase space density of plasma particles and the phase space density of dust particles. The equation simplifies to a differential form under the condition that the fluctuations in phase space density for plasma particles change very rapidly in time and are correlated only for very short times. The result is a Fokker-Planck type equation with extra terms having third and fourth order differentials in q, which account for the discrete nature of the distribution of plasma particles and the interaction between fluctuations. Approximate macroscopic equations for the time evolution of the average charge and the higher order moments of the fluctuations in charge on the dust particles are obtained from the differential PDF equation. These equations are computed, in the case of a Maxwellian plasma, to show the effect of density fluctuations of plasma particles on the statistics of dust charge.
Probability density functions of the stream flow discharge in linearized diffusion wave models
NASA Astrophysics Data System (ADS)
Chang, Ching-Min; Yeh, Hund-Der
2016-12-01
This article considers stream flow discharge moving through channels subject to the lateral inflow and described by a linearized diffusion wave equation. The variability of lateral inflow is manifested by random fluctuations in time, which is the only source of uncertainty as to flow discharge quantification. The stochastic nature of stream flow discharge is described by the probability density function (PDF) obtained using the theory of distributions. The PDF of the stream flow discharge depends on the hydraulic properties of the stream flow, such as the wave celerity and hydraulic diffusivity as well as the temporal correlation scale of the lateral inflow rate fluctuations. The focus in this analysis is placed on the influence of the temporal correlation scale and the wave celerity coefficient on the PDF of the flow discharge. The analysis demonstrates that a larger temporal correlation scale causes an increase of PDF of the lateral inflow rate and, in turn, the PDF of the flow discharge which is also affected positively by the wave celerity coefficient.
The model of a level crossing with a Coulomb band: exact probabilities of nonadiabatic transitions
NASA Astrophysics Data System (ADS)
Lin, J.; Sinitsyn, N. A.
2014-05-01
We derive an exact solution of an explicitly time-dependent multichannel model of quantum mechanical nonadiabatic transitions. Our model corresponds to the case of a single linear diabatic energy level interacting with a band of an arbitrary number N of states, for which the diabatic energies decay with time according to the Coulomb law. We show that the time-dependent Schrödinger equation for this system can be solved in terms of Meijer functions whose asymptotics at large times can be compactly written in terms of elementary functions that depend on the roots of an Nth order characteristic polynomial. Our model can be considered a generalization of the Demkov-Osherov model. In comparison to the latter, our model allows one to explore the role of curvature of the band levels and diabatic avoided crossings.
NASA Astrophysics Data System (ADS)
Peruzzo, Paolo; Pietro Viero, Daniele; Defina, Andrea
2016-11-01
The seeds of many aquatic plants, as well as many propagulae and larvae, are buoyant and transported at the water surface. These particles are therefore subject to surface tension, which may enhance their capture by emergent vegetation through capillary attraction. In this work, we develop a semi-empirical model that predicts the probability that a floating particle is retained by plant stems and branches piercing the water surface, due to capillarity, against the drag force exerted by the flowing water. Specific laboratory experiments are also performed to calibrate and validate the model.
NASA Technical Reports Server (NTRS)
Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.
2009-01-01
To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
Alizadeh, Seyed Shamseddin; Mortazavi, Seyed Bagher; Sepehri, Mohammad Mehdi
2014-01-01
Background: Falls from height are one of the main causes of fatal occupational injuries. The objective of this study was to present a model for estimating the occurrence probability of falling from height. Methods: In order to compile a list of factors affecting falls, we used the judgments of four expert groups, a literature review and an available database. The validity and reliability of the designed questionnaire were then determined, and Bayesian networks were built. The built network, its nodes and its curves were quantified. For network sensitivity analysis, four types of analysis were carried out. Results: A Bayesian network for assessment of the posterior probabilities of falling from height is proposed. The presented Bayesian network model shows the interrelationships among 37 causes affecting falling from height and can calculate its posterior probabilities. The most important factors affecting falling were non-compliance with safety instructions for work at height (0.127), lack of safety equipment for work at height (0.094) and lack of safety instructions for work at height (0.071), respectively. Conclusion: The proposed Bayesian network can be used to determine how different causes affect falling from height at work. The findings of this study can be used to decide on falling-accident prevention programs. PMID:25648498
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
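The unit-sphere idea can be illustrated with a Monte Carlo integration over flaw orientations. The shear-insensitive normal-stress criterion used below is only the simplest special case of the models named in the abstract, and the stress values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def critical_orientation_fraction(sigma_applied, sigma_crit, n=200_000):
    """Fraction of randomly oriented planar flaws whose plane-normal stress
    exceeds sigma_crit under uniaxial tension along z.  For a flaw with unit
    normal v, the normal stress is sigma_applied * v_z**2 (shear-insensitive,
    normal-stress criterion; a simplification of the unit-sphere models)."""
    v = rng.standard_normal((n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform directions on the sphere
    sigma_n = sigma_applied * v[:, 2] ** 2
    return float(np.mean(sigma_n > sigma_crit))

frac = critical_orientation_fraction(2.0, 1.0)      # analytically 1 - 1/sqrt(2)
```

Since v_z is uniform on [-1, 1] for uniform sphere directions, the exact answer here is P(|v_z| > 1/√2) = 1 − 1/√2 ≈ 0.293, which the Monte Carlo estimate should reproduce.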
Generating Correlated, Non-normally Distributed Data Using a Non-linear Structural Model.
Auerswald, Max; Moshagen, Morten
2015-12-01
An approach to generate non-normality in multivariate data based on a structural model with normally distributed latent variables is presented. The key idea is to create non-normality in the manifest variables by applying non-linear linking functions to the latent part, the error part, or both. The algorithm corrects the covariance matrix for the applied function by approximating the deviance using an approximated normal variable. We show that the root mean square error (RMSE) for the covariance matrix converges to zero as sample size increases and closely approximates the RMSE as obtained when generating normally distributed variables. Our algorithm creates non-normality affecting every moment, is computationally undemanding, easy to apply, and particularly useful for simulation studies in structural equation modeling.
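The key idea of this record, non-linear links applied to the latent or error part of a structural model, can be sketched directly. This sketch omits the paper's covariance-correction step and uses an arbitrary loading and link, so the moments differ from what the published algorithm would target:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

def generate_nonnormal(n, loading=0.7, link=np.exp):
    """Two manifest variables driven by one normally distributed latent
    factor; a non-linear link applied to the error part makes the manifest
    variables skewed while the latent structure controls their correlation.
    (The published algorithm additionally corrects the covariance matrix
    for the applied link; that step is omitted in this sketch.)"""
    factor = rng.standard_normal(n)
    cols = []
    for _ in range(2):
        err = link(rng.standard_normal(n))
        err = (err - err.mean()) / err.std()   # re-standardize transformed error
        cols.append(loading * factor + np.sqrt(1.0 - loading**2) * err)
    return np.column_stack(cols)

data = generate_nonnormal(200_000)
corr = float(np.corrcoef(data.T)[0, 1])        # approx loading**2 = 0.49
skewness = float(skew(data[:, 0]))             # clearly positive skew from exp link
```

Because the transformed errors are independent across variables, the inter-variable correlation is governed by the shared factor (≈ loading²) even though each marginal is strongly skewed.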
Greene, Earl A.; LaMotte, Andrew E.; Cullinan, Kerri-Ann
2005-01-01
The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency's Regional Vulnerability Assessment Program, has developed a set of statistical tools to support regional-scale, ground-water quality and vulnerability assessments. The Regional Vulnerability Assessment Program's goals are to develop and demonstrate approaches to comprehensive, regional-scale assessments that effectively inform managers and decision-makers as to the magnitude, extent, distribution, and uncertainty of current and anticipated environmental risks. The U.S. Geological Survey is developing and exploring the use of statistical probability models to characterize the relation between ground-water quality and geographic factors in the Mid-Atlantic Region. Available water-quality data obtained from U.S. Geological Survey National Water-Quality Assessment Program studies conducted in the Mid-Atlantic Region were used in association with geographic data (land cover, geology, soils, and others) to develop logistic-regression equations that use explanatory variables to predict the presence of a selected water-quality parameter exceeding a specified management concentration threshold. The resulting logistic-regression equations were transformed to determine the probability, P(X), of a water-quality parameter exceeding a specified management threshold. Additional statistical procedures modified by the U.S. Geological Survey were used to compare the observed values to model-predicted values at each sample point. In addition, procedures to evaluate the confidence of the model predictions and estimate the uncertainty of the probability value were developed and applied. The resulting logistic-regression models were applied to the Mid-Atlantic Region to predict the spatial probability of nitrate concentrations exceeding specified management thresholds. These thresholds are usually set or established by regulators or managers at National or local levels. At management thresholds of
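The transformation from a fitted logistic-regression equation to the exceedance probability P(X) is the standard inverse-logit. The coefficients and covariates below are hypothetical, chosen only to show the mechanics:

```python
import math

def exceedance_probability(intercept, coefs, x):
    """Transform a fitted logistic-regression equation into P(X), the
    probability that a water-quality parameter exceeds the management
    threshold: P(X) = 1 / (1 + exp(-(b0 + b1*x1 + ... + bk*xk)))."""
    logit = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical fit (not from the report): intercept, then effects of
# fraction agricultural land cover and a carbonate-geology indicator.
p = exceedance_probability(-4.0, [6.0, 1.5], [0.5, 1.0])
```

Here the hypothetical site (50% agricultural land on carbonate geology) has a logit of 0.5, i.e. P(X) ≈ 0.62 of exceeding the nitrate threshold.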
Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.
2014-01-01
Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
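The availability-times-perceptibility decomposition described above can be sketched with one common parameterization: a Poisson cue rate for availability (a stand-in for the time-removal component) and a half-normal distance function for perceptibility. All parameter values are illustrative, not estimates from the study:

```python
import math

def availability(t, phi):
    """P(animal gives at least one cue during a count of length t),
    assuming cues arrive as a Poisson process with rate phi."""
    return 1.0 - math.exp(-phi * t)

def perceptibility(d, sigma):
    """Half-normal detection function: P(observer detects the cue | available)
    for an animal at distance d, with scale parameter sigma."""
    return math.exp(-(d**2) / (2.0 * sigma**2))

def detection_probability(t, phi, d, sigma):
    """Overall detection probability = availability * perceptibility."""
    return availability(t, phi) * perceptibility(d, sigma)

# e.g. a 10-min count, 0.2 cues/min, bird at 50 m, sigma = 80 m
p = detection_probability(t=10.0, phi=0.2, d=50.0, sigma=80.0)
```

The hierarchical model in the study places covariates (observer, habitat, date, time of day) on these two components separately, which is why recording both distance and time of first detection matters.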
Hedell, Ronny; Stephansson, Olga; Mostad, Petter; Andersson, Mats Gunnar
2017-01-16
Efficient and correct evaluation of sampling results with respect to hypotheses about the concentration or distribution of bacteria generally requires knowledge about the performance of the detection method. To assess the sensitivity of the detection method an experiment is usually performed where the target matrix is spiked (i.e. artificially contaminated) with different concentrations of the bacteria, followed by analyses of the samples using the pre-enrichment method and the analytical detection method of interest. For safety reasons or because of economic or time limits it is not always possible to perform exactly such an experiment, with the desired number of samples. In this paper, we show how heterogeneous data from diverse sources may be combined within a single model to obtain not only estimates of detection probabilities, but also, crucially, uncertainty estimates. We indicate how such results can then be used to obtain optimal conclusions about presence of bacteria, and illustrate how strongly the sampling results speak in favour of or against contamination. In our example, we consider the case when B. cereus is used as surrogate for B. anthracis, for safety reasons. The statistical modelling of the detection probabilities and of the growth characteristics of the bacteria types is based on data from four experiments where different matrices of food were spiked with B. anthracis or B. cereus and analysed using plate counts and qPCR. We show how flexible and complex Bayesian models, together with inference tools such as OpenBUGS, can be used to merge information about detection probability curves. Two different modelling approaches, differing in whether the pre-enrichment step and the PCR detection step are modelled separately or together, are applied. The relative importance on the detection curves for various existing data sets are evaluated and illustrated.
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)
2012-01-01
This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
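The particle-filtering prognostics loop can be sketched with a deliberately reduced model. The exponential capacity-fade form, the noise levels, and the thresholds below are illustrative assumptions, not the patented battery model (which also incorporates temperature and load-current effects):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed fade model: capacity C_k = C0 * exp(-lam * k) over cycles k.
true_lam, c0, thresh, sigma = 0.02, 1.0, 0.7, 0.01
obs = [c0 * np.exp(-true_lam * k) + rng.normal(0.0, sigma) for k in range(1, 11)]

n = 1000
particles = rng.uniform(0.0, 0.1, n)            # prior over the unknown fade rate
weights = np.ones(n) / n
for k, z in enumerate(obs, start=1):
    pred = c0 * np.exp(-particles * k)
    weights *= np.exp(-0.5 * ((z - pred) / sigma) ** 2)   # Gaussian likelihood
    weights /= weights.sum()
    if 1.0 / np.sum(weights**2) < n / 2:        # resample on low effective size
        idx = np.searchsorted(np.cumsum(weights),
                              (rng.random() + np.arange(n)) / n)
        particles = particles[idx] + rng.normal(0.0, 1e-4, n)   # roughening jitter
        weights = np.ones(n) / n

lam_hat = float(np.sum(weights * particles))
# remaining useful life: cycles until predicted capacity crosses the threshold
rul = np.log(c0 / thresh) / lam_hat - len(obs)
```

The filter concentrates the particle cloud around the true fade rate as discharge cycles accumulate, and the RUL prediction follows by extrapolating the state model to the failure threshold.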
Detection of the optic disc in fundus images by combining probability models.
Harangi, Balazs; Hajdu, Andras
2015-10-01
In this paper, we propose a combination method for the automatic detection of the optic disc (OD) in fundus images based on ensembles of individual algorithms. We have studied and adapted some of the state-of-the-art OD detectors and finally organized them into a complex framework in order to maximize the accuracy of the localization of the OD. The detection of the OD can be considered as a single-object detection problem. This object can be localized with high accuracy by several algorithms extracting single candidates for the center of the OD, and the final location can be defined using a simple majority voting rule. To include more information to support the final decision, we can use member algorithms providing more candidates which can be ranked based on the confidence values assigned by the algorithms. In this case, a spatial weighted graph is defined where the candidates are considered as its nodes, and the final OD position is determined in terms of finding a maximum-weighted clique. Now, we examine how to apply in our ensemble-based framework all the accessible information supplied by the member algorithms by making them return confidence values for each image pixel. These confidence values inform us about the probability that a given pixel is the center point of the object. We apply axiomatic and Bayesian approaches, as in the case of aggregation of judgments of experts in decision and risk analysis, to combine these confidence values. According to our experimental study, the accuracy of the localization of OD increases further. Besides single localization, this approach can be adapted for the precise detection of the boundary of the OD. Comparative experimental results are also given for several publicly available datasets.
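The per-pixel confidence aggregation step can be sketched with the simplest rule, a weighted linear opinion pool; the paper studies more refined axiomatic and Bayesian combination rules, so this is only a toy illustration on hypothetical 5x5 confidence maps:

```python
import numpy as np

def combine_confidence_maps(maps, weights=None):
    """Weighted aggregation of per-pixel confidence maps from member OD
    detectors; the combined OD centre estimate is the argmax of the pooled
    map.  A linear opinion pool is the simplest of the combination rules
    considered in the paper."""
    maps = np.asarray(maps, dtype=float)
    if weights is None:
        weights = np.ones(len(maps)) / len(maps)
    combined = np.tensordot(weights, maps, axes=1)   # sum_i w_i * maps[i]
    center = np.unravel_index(np.argmax(combined), combined.shape)
    return combined, center

# Two hypothetical member detectors that only partially agree:
m1 = np.zeros((5, 5)); m1[2, 3] = 0.9
m2 = np.zeros((5, 5)); m2[2, 3] = 0.6; m2[1, 1] = 0.7
combined, center = combine_confidence_maps([m1, m2])
```

Pooling rewards pixels on which the members agree: the shared candidate at (2, 3) outweighs the second detector's lone spurious peak.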
A Mathematical Model for Calculating Non-Detection Probability of a Random Tour Target.
1985-12-01
The model estimates the probability of a random tour target avoiding detection (i.e., surviving) to some specified time, t. The model assumes a stationary searcher having a "cookie-cutter" sensor, with detection range R, located in the center of the search area [Ref. 1]. The target's starting position is uniformly distributed over the search area, and a Monte-Carlo simulation/program was used to generate ...
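A minimal Monte-Carlo sketch of this scenario follows. Every parameter value, the boundary clamping, and the simplification of checking detection only at leg endpoints are assumptions of this sketch, not details of the 1985 report:

```python
import math
import random

random.seed(1)

def survives(T, R, L, speed, leg_time):
    """One random-tour realization: does the target avoid the stationary
    searcher's cookie-cutter sensor (detection iff range < R) until time T?
    The search area is an L x L square centred on the searcher; the target
    starts uniformly and picks a fresh random course each leg."""
    x = random.uniform(-L / 2, L / 2)
    y = random.uniform(-L / 2, L / 2)
    t = 0.0
    while t < T:
        theta = random.uniform(0.0, 2.0 * math.pi)
        dt = min(leg_time, T - t)
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        # clamp to the search area (one assumed boundary convention)
        x = max(-L / 2, min(L / 2, x))
        y = max(-L / 2, min(L / 2, y))
        if math.hypot(x, y) < R:       # detection checked at leg endpoints only
            return False
        t += dt
    return True

trials = 2000
p_survive = sum(survives(T=10.0, R=1.0, L=10.0, speed=1.0, leg_time=1.0)
                for _ in range(trials)) / trials
```

With the sensor disc covering roughly π% of the area, a per-leg detection chance of a few percent compounds over the tour, leaving a survival probability well below one.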
A study of quantum mechanical probabilities in the classical Hodgkin-Huxley model.
Moradi, N; Scholkmann, F; Salari, V
2015-03-01
The Hodgkin-Huxley (HH) model is a powerful model to explain different aspects of spike generation in excitable cells. However, the HH model was proposed in 1952 when the real structure of the ion channel was unknown. It is now common knowledge that in many ion-channel proteins the flow of ions through the pore is governed by a gate, comprising a so-called "selectivity filter" inside the ion channel, which can be controlled by electrical interactions. The selectivity filter (SF) is believed to be responsible for the selection and fast conduction of particular ions across the membrane of an excitable cell. Other (generally larger) parts of the molecule such as the pore-domain gate control the access of ions to the channel protein. In fact, two types of gates are considered here for ion channels: the "external gate", which is the voltage sensitive gate, and the "internal gate" which is the selectivity filter gate (SFG). Some quantum effects are expected in the SFG due to its small dimensions, which may play an important role in the operation of an ion channel. Here, we examine parameters in a generalized model of HH to see whether any parameter affects the spike generation. Our results indicate that the previously suggested semi-quantum-classical equation proposed by Bernroider and Summhammer (BS) agrees strongly with the HH equation under different conditions and may even provide a better explanation in some cases. We conclude that the BS model can refine the classical HH model substantially.
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past.
Moro, Marilyn; Westover, M. Brandon; Kelly, Jessica; Bianchi, Matt T.
2016-01-01
Study Objectives: Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. Methods: We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. Results: The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Conclusion: Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. Citation: Moro M, Westover MB, Kelly J, Bianchi MT. Decision modeling in sleep apnea: the critical roles of pretest probability, cost of untreated obstructive sleep apnea, and time horizon. J Clin Sleep Med 2016;12(3):409–418. PMID:26518699
Drakos, Nicole E; Wahl, Lindi M
2015-12-01
Theoretical approaches are essential to our understanding of the complex dynamics of mobile genetic elements (MGEs) within genomes. Recently, the birth-death-diversification model was developed to describe the dynamics of mobile promoters (MPs), a particular class of MGEs in prokaryotes. A unique feature of this model is that genetic diversification of elements was included. To explore the implications of diversification on the long-term fate of MGE lineages, in this contribution we analyze the extinction probabilities, extinction times and equilibrium solutions of the birth-death-diversification model. We find that diversification increases both the survival and growth rate of MGE families, but the strength of this effect depends on the rate of horizontal gene transfer (HGT). We also find that the distribution of MGE families per genome is not necessarily monotonically decreasing, as observed for MPs, but may have a peak in the distribution that is related to the HGT rate. For MPs specifically, we find that new families have a high extinction probability, and predict that the number of MPs is increasing, albeit at a very slow rate. Additionally, we develop an extension of the birth-death-diversification model which allows MGEs in different regions of the genome, for example coding and non-coding, to be described by different rates. This extension may offer a potential explanation as to why the majority of MPs are located in non-promoter regions of the genome.
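The extinction-probability intuition behind this record can be sketched with the classical simple birth-death reduction (dropping the diversification and HGT terms of the full model); the rates below are arbitrary illustrations:

```python
import math

def extinction_probability(birth, death):
    """Eventual extinction probability for a family founded by a single
    element in a simple birth-death process: 1 if birth <= death, else
    death/birth.  (Diversification and HGT, which the full
    birth-death-diversification model includes, are omitted here.)"""
    if birth <= death:
        return 1.0
    return death / birth

def mean_family_size(birth, death, t):
    """Expected family size at time t, starting from one element."""
    return math.exp((birth - death) * t)

# A family growing only slightly faster than it dies is still very likely
# to go extinct, consistent with the high extinction probability of new
# MP families reported above.
q = extinction_probability(0.12, 0.1)   # ~0.83 despite positive net growth
```

This is why the abstract can simultaneously report a high extinction probability for new families and a slowly increasing total MP count: the rare surviving families grow on average.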
Application of the Response Probability Density Function Technique to Biodynamic Models
1977-04-01
recent years there has been much research on skull fracture, particularly in relation to automobile accidents. The typical use which has been modeled is...response pdf technique for other injury predictions, such as skull fracture in automobile accidents, seems promising, if sufficient data to establish
Analysis of a probability-based SATCOM situational awareness model for parameter estimation
NASA Astrophysics Data System (ADS)
Martin, Todd W.; Chang, Kuo-Chu; Tian, Xin; Chen, Genshe
2016-05-01
Emerging satellite communication (SATCOM) systems are envisioned to incorporate advanced capabilities for dynamically adapting link and network configurations to meet user performance needs. These advanced capabilities require an understanding of the operating environment as well as the potential outcomes of adaptation decisions. A SATCOM situational awareness and decision-making approach is needed that represents the cause and effect linkage of relevant phenomenology and operating conditions on link performance. Similarly, the model must enable a corresponding diagnostic capability that allows SATCOM payload managers to assess likely causes of observed effects. Prior work demonstrated the ability to use a probabilistic reasoning model for a SATCOM situational awareness model. It provided the theoretical basis and demonstrated the ability to realize such a model. This paper presents an analysis of the probabilistic reasoning approach in the context of its ability to be used for diagnostic purposes. A quantitative assessment is presented to demonstrate the impact of uncertainty on estimation accuracy for several key parameters. The paper also discusses how the results could be used by a higher-level reasoning process to evaluate likely causes of performance shortfalls such as atmospheric conditions, pointing errors, and jamming.
ERIC Educational Resources Information Center
Nussbaum, E. Michael
2011-01-01
Toulmin's model of argumentation, developed in 1958, has guided much argumentation research in education. However, argumentation theory in philosophy and cognitive science has advanced considerably since 1958. There are currently several alternative frameworks of argumentation that can be useful for both research and practice in education. These…
ERIC Educational Resources Information Center
Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.
2010-01-01
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
On the probability distribution of stock returns in the Mike-Farmer model
NASA Astrophysics Data System (ADS)
Gu, G.-F.; Zhou, W.-X.
2009-02-01
Recently, Mike and Farmer constructed a very powerful and realistic behavioral model to mimic the dynamic process of stock price formation, based on the empirical regularities of order placement and cancellation in a purely order-driven market, which successfully reproduces the whole distribution of returns, not only the well-known power-law tails, together with several other important stylized facts. There are three key ingredients in the Mike-Farmer (MF) model: the long memory of order signs characterized by the Hurst index Hs, the distribution of relative order prices x in reference to the same best price described by a Student distribution (or Tsallis’ q-Gaussian), and the dynamics of order cancellation. They showed that different values of the Hurst index Hs and the degrees of freedom αx of the Student distribution can always produce power-law tails in the return distribution fr(r) with different tail exponents αr. In this paper, we study the origin of the power-law tails of the return distribution fr(r) in the MF model, based on extensive simulations with different combinations of the left part L(x) for x < 0 and the right part R(x) for x > 0 of fx(x). We find that power-law tails appear only when L(x) has a power-law tail, regardless of whether R(x) has one. In addition, we find that the distributions of returns in the MF model at different timescales can be well modeled by Student distributions, whose tail exponents are close to the well-known cubic law and increase with the timescale.
NASA Astrophysics Data System (ADS)
Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md
2015-12-01
Students' understanding of probability concepts has been investigated from various perspectives. Competency, on the other hand, is often measured separately in the form of a test structure. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle, where students provided their responses through an online quiz. The study shows that the majority of students perceived little understanding of conditional and independent events prior to learning about them, but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is an indication of some increase in learning and knowledge about some probability concepts at the end of the two weeks of lessons on probability concepts.
People's conditional probability judgments follow probability theory (plus noise).
Costello, Fintan; Watts, Paul
2016-09-01
A common view in current psychology is that people estimate probabilities using various 'heuristics' or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people's conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people's estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or 'direct' probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities.
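The noise-cancelling behaviour the model predicts can be sketched in a few lines. The following is a hedged simulation, not the authors' code: each stored instance is read with probability d of a random flip, so every direct estimate is biased toward 0.5, yet the combination P(A) + P(B) − P(A∧B) − P(A∨B) stays near 0 because the d-terms cancel in expectation (the probabilities and noise rate below are assumed values).

```python
import random

def noisy_estimate(p, d, n, rng):
    """Estimate a probability p from n remembered instances, each of
    which is misread (flipped) with probability d; the expected value
    is (1 - 2d)*p + d, i.e. biased toward 0.5."""
    hits = 0
    for _ in range(n):
        event = rng.random() < p          # did the event occur?
        if rng.random() < d:              # random read error flips it
            event = not event
        hits += event
    return hits / n

rng = random.Random(42)
d, n = 0.2, 200_000                       # assumed noise rate / sample size
pA, pB, pAB = 0.7, 0.4, 0.2               # assumed true probabilities
pAorB = pA + pB - pAB                     # 0.9 by inclusion-exclusion

est_A = noisy_estimate(pA, d, n, rng)     # biased: about (1-2d)*0.7 + d = 0.62
# The identity P(A) + P(B) - P(A and B) - P(A or B) cancels the bias:
z = (est_A + noisy_estimate(pB, d, n, rng)
     - noisy_estimate(pAB, d, n, rng) - noisy_estimate(pAorB, d, n, rng))
```

z stays near 0 despite every individual estimate being biased, which is the paper's signature of a noise-cancelling identity.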
2014-03-01
Military Manpower, last modified November 21, 2003, 2, http://biotech.law.lsu.edu/blaw/dodd/corres/pdf2/d11451p.pdf. … Figure 5. AFQT Score… http://biotech.law.lsu.edu/blaw/dodd/corres/pdf2/d11451p.pdf. Erhardt, Bruce J. Jr. “Development of a Markov Model for Forecasting Continuation…” … Technical Information Center, Ft. Belvoir, Virginia. 2. Dudley Knox Library, Naval Postgraduate School, Monterey, California.
Modeling of Kidney Hemodynamics: Probability-Based Topology of an Arterial Network.
Postnov, Dmitry D; Marsh, Donald J; Postnov, Dmitry E; Braunstein, Thomas H; Holstein-Rathlou, Niels-Henrik; Martens, Erik A; Sosnovtseva, Olga
2016-07-01
Through regulation of the extracellular fluid volume, the kidneys provide important long-term regulation of blood pressure. At the level of the individual functional unit (the nephron), pressure and flow control involves two different mechanisms that both produce oscillations. The nephrons are arranged in a complex branching structure that delivers blood to each nephron and, at the same time, provides a basis for an interaction between adjacent nephrons. The functional consequences of this interaction are not understood, and at present it is not possible to address this question experimentally. We provide experimental data and a new modeling approach to clarify this problem. To resolve details of microvascular structure, we collected 3D data from more than 150 afferent arterioles in an optically cleared rat kidney. Using these results together with published micro-computed tomography (μCT) data we develop an algorithm for generating the renal arterial network. We then introduce a mathematical model describing blood flow dynamics and nephron to nephron interaction in the network. The model includes an implementation of electrical signal propagation along a vascular wall. Simulation results show that the renal arterial architecture plays an important role in maintaining adequate pressure levels and the self-sustained dynamics of nephrons.
Modeling of Kidney Hemodynamics: Probability-Based Topology of an Arterial Network
Postnov, Dmitry D.; Postnov, Dmitry E.; Braunstein, Thomas H.; Holstein-Rathlou, Niels-Henrik; Sosnovtseva, Olga
2016-01-01
Through regulation of the extracellular fluid volume, the kidneys provide important long-term regulation of blood pressure. At the level of the individual functional unit (the nephron), pressure and flow control involves two different mechanisms that both produce oscillations. The nephrons are arranged in a complex branching structure that delivers blood to each nephron and, at the same time, provides a basis for an interaction between adjacent nephrons. The functional consequences of this interaction are not understood, and at present it is not possible to address this question experimentally. We provide experimental data and a new modeling approach to clarify this problem. To resolve details of microvascular structure, we collected 3D data from more than 150 afferent arterioles in an optically cleared rat kidney. Using these results together with published micro-computed tomography (μCT) data we develop an algorithm for generating the renal arterial network. We then introduce a mathematical model describing blood flow dynamics and nephron to nephron interaction in the network. The model includes an implementation of electrical signal propagation along a vascular wall. Simulation results show that the renal arterial architecture plays an important role in maintaining adequate pressure levels and the self-sustained dynamics of nephrons. PMID:27447287
Wojcik, Mariusz; Tachiya, M
2009-03-14
This paper deals with the exact extension of the original Onsager theory of the escape probability to the case of a finite recombination rate at a nonzero reaction radius. The empirical theories based on the Eigen model and the Braun model, which are applicable in the absence and presence of an external electric field, respectively, rest on the incorrect assumption that both the recombination and separation processes in geminate recombination follow exponential kinetics. The accuracies of the empirical theories are examined against the exact extension of the Onsager theory. The Eigen model gives an escape probability in the absence of an electric field that differs by a factor of 3 from the exact one. We have shown that this difference can be removed by operationally redefining the volume occupied by the dissociating partner before dissociation, which appears in the Eigen model as a parameter. The Braun model gives an escape probability in the presence of an electric field that differs significantly from the exact one over the whole range of electric fields. Appropriate modification of the original Braun model removes the discrepancy at zero or low electric fields, but does not affect the discrepancy at high electric fields. In all the above theories it is assumed that recombination takes place only at the reaction radius. The escape probability in the case when recombination takes place over a range of distances is also calculated and compared with that for recombination only at the reaction radius.
NASA Astrophysics Data System (ADS)
Mandal, K. G.; Padhi, J.; Kumar, A.; Ghosh, S.; Panda, D. K.; Mohanty, R. K.; Raychaudhuri, M.
2015-08-01
Rainfed agriculture plays, and will continue to play, a dominant role in providing food and livelihoods for an increasing world population. Rainfall analyses are helpful for proper crop planning under a changing environment in any region. Therefore, in this paper, an attempt has been made to analyse 16 years of rainfall records (1995-2010) at the Daspalla region in Odisha, eastern India: to predict rainfall using six probability distribution functions, to forecast the probable dates of onset and withdrawal of the monsoon and the occurrence of dry spells using a Markov chain model, and finally to propose crop planning for the region. For prediction of monsoon and post-monsoon rainfall, the log-Pearson type III and Gumbel distributions were the best-fit probability distribution functions. The earliest and most delayed weeks of onset of the rainy season were the 20th standard meteorological week (SMW) (14th-20th May) and the 25th SMW (18th-24th June), respectively. Similarly, the earliest and most delayed weeks of withdrawal of rainfall were the 39th SMW (24th-30th September) and the 47th SMW (19th-25th November), respectively. The longest and shortest lengths of the rainy season were 26 and 17 weeks, respectively. The chances of occurrence of dry spells are high from the 1st to the 22nd SMW and again from the 42nd SMW to the end of the year. The probability of the weeks from the 23rd to the 40th SMW remaining wet varies between 62 and 100 % for the region. The results obtained through this analysis could be utilised for agricultural planning and the mitigation of dry spells in the Daspalla region of Odisha, India.
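A first-order two-state (wet/dry) Markov chain of the kind used for the dry-spell analysis can be sketched as follows; the transition probabilities here are assumed values for illustration, not the Daspalla estimates:

```python
def stationary_wet_prob(p_ww: float, p_wd: float) -> float:
    """Long-run probability that a week is wet, given the transition
    probabilities p_ww = P(wet | previous week wet) and
    p_wd = P(wet | previous week dry).  Solves the stationarity
    condition pi_w = pi_w * p_ww + (1 - pi_w) * p_wd."""
    return p_wd / (1.0 - p_ww + p_wd)

def dry_spell_at_least(p_dd: float, k: int) -> float:
    """Probability that a dry spell, once begun, persists for at least
    k consecutive dry weeks: p_dd ** (k - 1), with p_dd = P(dry | dry)."""
    return p_dd ** (k - 1)

# Assumed transition probabilities for illustration
pi_wet = stationary_wet_prob(p_ww=0.7, p_wd=0.3)   # -> 0.5
spell3 = dry_spell_at_least(p_dd=0.7, k=3)         # ≈ 0.49
```

In practice the transition probabilities are estimated week-by-week from the rainfall record, which is what makes the seasonal dry-spell pattern reported above visible.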
Evaluation of Geometrically Nonlinear Reduced Order Models with Nonlinear Normal Modes
Kuether, Robert J.; Deaner, Brandon J.; Hollkamp, Joseph J.; ...
2015-09-15
Several reduced-order modeling strategies have been developed to create low-order models of geometrically nonlinear structures from detailed finite element models, allowing one to compute the dynamic response of the structure at a dramatically reduced cost. But, the parameters of these reduced-order models are estimated by applying a series of static loads to the finite element model, and the quality of the reduced-order model can be highly sensitive to the amplitudes of the static load cases used and to the type/number of modes used in the basis. Our paper proposes to combine reduced-order modeling and numerical continuation to estimate the nonlinear normal modes of geometrically nonlinear finite element models. Not only does this make it possible to compute the nonlinear normal modes far more quickly than existing approaches, but the nonlinear normal modes are also shown to be an excellent metric by which the quality of the reduced-order model can be assessed. Hence, the second contribution of this work is to demonstrate how nonlinear normal modes can be used as a metric by which nonlinear reduced-order models can be compared. Moreover, various reduced-order models with hardening nonlinearities are compared for two different structures to demonstrate these concepts: a clamped–clamped beam model, and a more complicated finite element model of an exhaust panel cover.
Evaluation of Geometrically Nonlinear Reduced Order Models with Nonlinear Normal Modes
Kuether, Robert J.; Deaner, Brandon J.; Hollkamp, Joseph J.; Allen, Matthew S.
2015-09-15
Several reduced-order modeling strategies have been developed to create low-order models of geometrically nonlinear structures from detailed finite element models, allowing one to compute the dynamic response of the structure at a dramatically reduced cost. But, the parameters of these reduced-order models are estimated by applying a series of static loads to the finite element model, and the quality of the reduced-order model can be highly sensitive to the amplitudes of the static load cases used and to the type/number of modes used in the basis. Our paper proposes to combine reduced-order modeling and numerical continuation to estimate the nonlinear normal modes of geometrically nonlinear finite element models. Not only does this make it possible to compute the nonlinear normal modes far more quickly than existing approaches, but the nonlinear normal modes are also shown to be an excellent metric by which the quality of the reduced-order model can be assessed. Hence, the second contribution of this work is to demonstrate how nonlinear normal modes can be used as a metric by which nonlinear reduced-order models can be compared. Moreover, various reduced-order models with hardening nonlinearities are compared for two different structures to demonstrate these concepts: a clamped–clamped beam model, and a more complicated finite element model of an exhaust panel cover.
Jacobsen, J L; Saleur, H
2008-02-29
We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L > 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy, S_VB = N_c ln 2 = (4 ln 2 / π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.
Cardozo, David Lopes; Holdsworth, Peter C W
2016-04-27
The magnetization probability density in d = 2 and 3 dimensional Ising models in a slab geometry of volume L_∥^(d-1) × L_⊥ is computed through Monte-Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio ρ = L_⊥/L_∥ and boundary conditions are discussed. In the limiting case ρ → 0 of a macroscopically large slab (L_∥ ≫ L_⊥) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.
NASA Astrophysics Data System (ADS)
Lopes Cardozo, David; Holdsworth, Peter C. W.
2016-04-01
The magnetization probability density in d = 2 and 3 dimensional Ising models in a slab geometry of volume L_∥^(d-1) × L_⊥ is computed through Monte-Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio ρ = L_⊥/L_∥ and boundary conditions are discussed. In the limiting case ρ → 0 of a macroscopically large slab (L_∥ ≫ L_⊥) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.
NASA Astrophysics Data System (ADS)
Lobach, Yu. N.; Bucurescu, D.
1998-09-01
The Doppler shift attenuation method was used to determine lifetimes in the picosecond region for excited states of 117Sb populated with the (α,2nγ) reaction at Eα=27.2 MeV. Interacting boson-fermion model calculations explain reasonably well the main features of the positive parity levels known up to about 2.5 MeV excitation. The mixing of the lowest one-quasiparticle 9/2+ state with the intruder (2p-1h) 9/2+ state, as well as the quadrupole deformation of the intruder band are also discussed.
Chmelevsky, D; Barclay, D; Kellerer, A M; Tomasek, L; Kunz, E; Placek, V
1994-07-01
The estimates of lung cancer risk due to exposure to radon decay products are based on different data sets from underground mining and on different mathematical models used to fit the data. Diagrams of the excess relative rate per 100 working-level months as a function of age at exposure and attained age are shown to be a useful tool for elucidating the influence due to the choice of the model, and for assessing the differences between the data from the major western cohorts and those from the Czech uranium miners. It is seen that the influence of the choice of the model is minor compared with the difference between the data sets. The results are used to derive attributable lifetime risks and probabilities of causation for lung cancer following radon progeny exposures.
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is greater for certain elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
Scale normalization of histopathological images for batch invariant cancer diagnostic models
Kothari, Sonal; Phan, John H.
2016-01-01
Histopathological images acquired from different experimental set-ups often suffer from batch-effects due to color variations and scale variations. In this paper, we develop a novel scale normalization model for histopathological images based on nuclear area distributions. Results indicate that the normalization model closely fits empirical values for two renal tumor datasets. We study the effect of scale normalization on classification of renal tumor images. Scale normalization improves classification performance in most cases. However, performance decreases in a few cases. In order to understand this, we propose two methods to filter extracted image features that are sensitive to image scaling and features that are uncorrelated with scaling factor. Feature filtering improves the classification performance of cases that were initially negatively affected by scale normalization. PMID:23366904
Ogunnaike, Babatunde A; Gelmi, Claudio A; Edwards, Jeremy S
2010-05-21
Gene expression studies generate large quantities of data with the defining characteristic that the number of genes (whose expression profiles are to be determined) exceeds the number of available replicates by several orders of magnitude. Standard spot-by-spot analysis still seeks to extract useful information for each gene on the basis of the number of available replicates, and thus plays to the weakness of microarrays. On the other hand, because of the data volume, treating the entire data set as an ensemble, and developing theoretical distributions for these ensembles, provides a framework that plays instead to the strength of microarrays. We present theoretical results showing that, under reasonable assumptions, the distribution of microarray intensities follows the Gamma model, with the biological interpretations of the model parameters emerging naturally. We subsequently establish that for each microarray data set, the fractional intensities can be represented as a mixture of Beta densities, and develop a procedure for using these results to draw statistical inference regarding differential gene expression. We illustrate the results with experimental data from gene expression studies on Deinococcus radiodurans following DNA damage using cDNA microarrays.
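The Gamma-to-Beta step in the argument rests on a standard identity: if two intensities share a Gamma scale parameter, their fractional intensity X/(X+Y) is Beta distributed with mean a/(a+b). A hedged numerical check (the shape and scale values are assumed, not estimated from any microarray data):

```python
import random

rng = random.Random(0)
a, b, scale = 2.0, 3.0, 1.0        # assumed Gamma shape/scale parameters
n = 100_000

fracs = []
for _ in range(n):
    x = rng.gammavariate(a, scale)  # intensity in channel 1
    y = rng.gammavariate(b, scale)  # intensity in channel 2
    fracs.append(x / (x + y))       # fractional intensity

mean_frac = sum(fracs) / n          # approaches the Beta mean a/(a+b) = 0.4
```

In the paper's setting the fractional intensities across a whole array are modeled as a mixture of such Beta densities, with inference on differential expression built on top.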
Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions
Li, Jun; Yim, Man-Sung; McNelis, David N.
2007-07-01
…explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. Nuclear proliferation decisions by a country are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important, as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important, as the development of the technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by a mastery of technical details or overcoming financial constraints. Technology or finance is a necessary condition but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistically modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open-source literature. (authors)
ERIC Educational Resources Information Center
Samejima, Fumiko
1997-01-01
As an example of models that are not based on normality or its approximation, the logistic positive exponent family of models is discussed. These models include the item task complexity as the third parameter, which determines the single principle of ordering individuals on the ability scale. (SLD)
Integrating Boolean Queries in Conjunctive Normal Form with Probabilistic Retrieval Models.
ERIC Educational Resources Information Center
Losee, Robert M.; Bookstein, Abraham
1988-01-01
Presents a model that places Boolean database queries into conjunctive normal form, thereby allowing probabilistic ranking of documents and the incorporation of relevance feedback. Experimental results compare the performance of a sequential learning probabilistic retrieval model with the proposed integrated Boolean probabilistic model and a fuzzy…
Fitting the Normal-Ogive Factor Analytic Model to Scores on Tests.
ERIC Educational Resources Information Center
Ferrando, Pere J.; Lorenzo-Seva, Urbano
2001-01-01
Describes how the nonlinear factor analytic approach of R. McDonald to the normal ogive curve can be used to factor analyze test scores. Discusses the conditions in which this model is more appropriate than the linear model and illustrates the applicability of both models using an empirical example based on data from 1,769 adolescents who took the…
A skewed PDF combustion model for jet diffusion flames. [Probability density function (PDF)
Abou-Ellail, M.M.M.; Salem, H.
1990-11-01
A combustion model based on restricted chemical equilibrium is described. A transport equation for the skewness of the mixture fraction is derived; it contains two adjustable constants. The computed values of the mean mixture fraction (f) and its variance and skewness (g and s) for a jet diffusion methane flame are used to obtain the shape of a skewed pdf. The skewed pdf is split into a turbulent part (a beta function) and a nonturbulent part (a delta function at f = 0). The contribution of each part is directly related to the values of f, g, and s. The inclusion of intermittency in the skewed pdf appreciably improves the numerical predictions obtained for a turbulent jet diffusion methane flame for which experimental data are available.
Binary logistic regression modelling: Measuring the probability of relapse cases among drug addict
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Alias, Siti Nor Shadila
2014-07-01
For many years, Malaysia has faced drug addiction issues. The most serious case is the relapse phenomenon among treated drug addicts (those who have undergone the rehabilitation programme at the Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factors that contribute to relapse. Binary logistic regression analysis was employed to model the relationship between the independent variables (predictors) and the dependent variable. The dependent variable is the status of the drug addict: relapse (Yes, coded as 1) or not (No, coded as 0). The predictors are age, age at first taking drugs, family history, education level, family crisis, community support and self-motivation. The total sample is 200; the data were provided by AADK (National Antidrug Agency). The findings of the study revealed that age and self-motivation are statistically significant with respect to relapse cases.
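The model form used here, a binary logistic regression, can be sketched end-to-end in plain Python. The data below are synthetic and the coefficient directions (younger age at first use and lower self-motivation raising relapse probability) are assumptions for illustration, not the AADK findings themselves:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.05, epochs=500):
    """Minimal binary logistic regression fitted by stochastic gradient
    ascent on the log-likelihood.  xs: feature rows, ys: 0/1 outcomes.
    Returns weights with the intercept first."""
    w = [0.0] * (len(xs[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))      # P(relapse | x)
            err = y - p
            w[0] += lr * err
            for i, xi in enumerate(x):
                w[i + 1] += lr * err * xi
    return w

# Synthetic data: standardized age at first drug use and self-motivation
rng = random.Random(1)
xs, ys = [], []
for _ in range(200):
    age_first = rng.uniform(-1.0, 1.0)
    motivation = rng.uniform(-1.0, 1.0)
    z_true = 0.3 - 2.0 * age_first - 1.5 * motivation   # assumed "true" model
    p_true = 1.0 / (1.0 + math.exp(-z_true))
    xs.append([age_first, motivation])
    ys.append(1 if rng.random() < p_true else 0)

w = fit_logistic(xs, ys)   # both slope estimates come out negative
```

A production analysis would instead use a maximum-likelihood routine with standard errors (e.g. a statistics package) to judge the significance reported in the abstract; the sketch only shows the model structure.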
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
In Part I of this study, some new theorems, corollaries and lemmas on the circularly-symmetric complex normal ratio distribution were mathematically proved. This Part II paper is dedicated to providing a rigorous treatment of the statistical properties of raw scalar transmissibility functions at an arbitrary frequency line. On the basis of the statistics of raw FFT coefficients and the circularly-symmetric complex normal ratio distribution, explicit closed-form probabilistic models are established for both multivariate and univariate scalar transmissibility functions. Remarks on the independence of transmissibility functions at different frequency lines and on the shape of the probability density function (PDF) in the univariate case are also presented. The statistical structures of the probabilistic models are concise, compact and easily implemented with low computational effort. They hold for general stationary vector processes, either Gaussian or non-Gaussian stochastic processes. The accuracy of the proposed models is verified using a numerical example as well as field test data from a high-rise building and a long-span cable-stayed bridge. This study yields new insights into the qualitative analysis of the uncertainty of scalar transmissibility functions, paving the way for new statistical methodologies for modal analysis, model updating or damage detection using responses only, without input information.
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Kourgialas, Nektarios; Karatzas, George; Giannakis, Georgios; Lilli, Maria; Nikolaidis, Nikolaos
2014-05-01
Riverbank erosion affects river morphology and the local habitat, and results in riparian land loss, damage to property and infrastructure, ultimately weakening flood defences. An important issue concerning riverbank erosion is the identification of the areas vulnerable to erosion, as this allows changes to be predicted and assists with stream management and restoration. One way to predict the areas vulnerable to erosion is to determine the erosion probability by identifying the underlying relations between riverbank erosion and the geomorphological and/or hydrological variables that prevent or stimulate erosion. In this work, a statistical model for evaluating the probability of erosion based on a series of independent local variables and using logistic regression is developed. The main variables affecting erosion are the vegetation index (stability), the presence or absence of meanders, bank material (classification), stream power, bank height, riverbank slope, riverbed slope, cross-section width and water velocities (Luppi et al. 2009). In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable, e.g. a binary response, based on one or more predictor variables (continuous or categorical). The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. A logistic regression is then formed, which predicts the success or failure of a given binary variable (e.g. 1 = "presence of erosion" and 0 = "no erosion") for any value of the independent variables. The regression coefficients are estimated by maximum likelihood estimation. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding…
NASA Astrophysics Data System (ADS)
Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo
2016-07-01
Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. However, they overestimated the seasonal mean concentration and did not simulate all of the peak concentrations. This issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
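The Weibull-shaped "maximum potential" step can be sketched as follows. The abstract does not give the calibrated parameters, so the shape and scale below are arbitrary illustrative values, and the day-of-season axis is an assumption:

```python
import math

def weibull_pdf(x, shape, scale):
    """Weibull density f(x) = (k/c) (x/c)^(k-1) exp(-(x/c)^k), x >= 0."""
    if x < 0:
        return 0.0
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

# Hypothetical seasonal curve: daily pollen potential proportional to the
# density over day-of-season d (shape/scale are assumed, not fitted values).
shape, scale = 2.5, 20.0
curve = [weibull_pdf(d, shape, scale) for d in range(60)]
peak = max(curve)
potential = [c / peak for c in curve]   # scaled so the maximum potential is 1
print(potential.index(1.0))             # day of the seasonal peak
```

The daily concentration estimate in the paper then multiplies a potential of this kind by regression terms driven by weather variables; those terms are not reproduced here.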
NASA Astrophysics Data System (ADS)
Cao, Hong-Jun; Zhang, Hui-Qiang; Lin, Wen-Yi
2012-05-01
Four kinds of presumed probability-density-function (PDF) models for non-premixed turbulent combustion are evaluated in flames with various stoichiometric mixture fractions by using large eddy simulation (LES). The LES code is validated against the experimental data of a classical turbulent jet flame (Sandia flame D). The mean and rms temperatures obtained by the presumed PDF models are compared with the LES results. The β-function model achieves a good prediction for different flames. The rms temperature predicted by the double-δ function model is very small and unphysical in the vicinity of the maximum mean temperature. The clip-Gaussian model and the multi-δ function model give worse predictions on the extremely fuel-rich or fuel-lean sides due to the clipping at the boundary of the mixture fraction space. The results also show that the overall prediction performance of the presumed PDF models is better at moderate stoichiometric mixture fractions than at very small or very large ones.
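The β-function presumed PDF mentioned above is conventionally constructed by moment matching: the filtered mixture-fraction mean and variance fix the two beta parameters. A minimal sketch (the mean/variance values are arbitrary, not from the paper):

```python
from math import gamma

def beta_pdf_params(z_mean, z_var):
    """Standard moment-matching: Beta(a, b) with the given mean/variance.
    Requires 0 < z_var < z_mean * (1 - z_mean)."""
    g = z_mean * (1.0 - z_mean) / z_var - 1.0
    return z_mean * g, (1.0 - z_mean) * g

def beta_pdf(z, a, b):
    """Beta density on the mixture-fraction space z in (0, 1)."""
    norm = gamma(a + b) / (gamma(a) * gamma(b))
    return norm * z ** (a - 1) * (1.0 - z) ** (b - 1)

a, b = beta_pdf_params(0.35, 0.02)   # assumed filtered mean and variance
print(a / (a + b))                   # recovers the prescribed mean, 0.35
```

By construction the resulting Beta(a, b) reproduces both prescribed moments exactly, which is why it needs no clipping at the boundaries of mixture-fraction space, unlike the clip-Gaussian model criticized in the abstract.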
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at a 1% false discovery rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
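The binomial idea behind this class of scorers can be sketched in a few lines: score a peptide-spectrum match by the binomial tail probability that at least k of n theoretical fragment peaks would match experimental peaks by chance. This is a simplified stand-in for ProVerB's full scoring function (which also weights peak intensities); the per-peak random-match probability and the counts below are hypothetical.

```python
from math import comb, log10

def match_pvalue(n_theoretical, k_matched, p_random):
    """Binomial tail P(X >= k) with X ~ Binomial(n, p_random): the chance
    that at least k of n theoretical peaks match by accident."""
    return sum(comb(n_theoretical, i)
               * p_random ** i * (1.0 - p_random) ** (n_theoretical - i)
               for i in range(k_matched, n_theoretical + 1))

# hypothetical spectrum: 12 of 20 theoretical peaks matched, 5% chance each
pval = match_pvalue(20, 12, 0.05)
score = -log10(pval)     # larger score = less likely to be a random match
print(score)
```

Candidate peptides are then ranked by this score, and an FDR threshold (e.g. the 1% quoted in the abstract) is applied across the whole data set.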
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Popov, Pavel; Hiremath, Varun; Lantz, Steven; Viswanathan, Sharadha; Pope, Stephen
2010-11-01
A large-eddy simulation (LES)/probability density function (PDF) code is developed and applied to the study of local extinction and re-ignition in Sandia Flame E. The modified Curl mixing model is used to account for the sub-filter scalar mixing; the ARM1 mechanism is used for the chemical reaction; and the in-situ adaptive tabulation (ISAT) algorithm is used to accelerate the chemistry calculations. Calculations are performed on different grids to study the resolution requirement for this flame. Then, with sufficient grid resolution, full-scale LES/PDF calculations are performed to study the flame characteristics and the turbulence-chemistry interactions. Sensitivity to the mixing frequency model is explored in order to understand the behavior of sub-filter scalar mixing in the context of LES. The simulation results are compared to the experimental data to demonstrate the capability of the code. Comparison is also made to previous RANS/PDF simulations.
NASA Astrophysics Data System (ADS)
Koch, J.; Nowak, W.
2015-02-01
Improper storage and disposal of nonaqueous-phase liquids (NAPLs) has resulted in widespread contamination of the subsurface, threatening the quality of groundwater as a freshwater resource. The high frequency of contaminated sites and the difficulties of remediation efforts demand rational decisions based on a sound risk assessment. Due to sparse data and natural heterogeneities, this risk assessment needs to be supported by appropriate predictive models with quantified uncertainty. This study proposes a physically and stochastically coherent model concept to simulate and predict crucial impact metrics for DNAPL contaminated sites, such as contaminant mass discharge and DNAPL source longevity. To this end, aquifer parameters and the contaminant source architecture are conceptualized as random space functions. The governing processes are simulated in a three-dimensional, highly resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. While it is not possible to determine whether the presented model framework is sufficiently complex or not, we can investigate whether and to which degree the desired model predictions are sensitive to simplifications often found in the literature. By testing four commonly made simplifications, we identified aquifer heterogeneity, groundwater flow irregularity, uncertain and physically based contaminant source zones, and their mutual interlinkages as indispensable components of a sound model framework.
Chanrion, M-A; Sauerwein, W; Jelen, U; Wittig, A; Engenhart-Cabillic, R; Beuve, M
2014-06-21
In carbon ion beams, biological effects vary along the ion track; hence, to quantify them, specific radiobiological models are needed. One of them, the local effect model (LEM), in particular version I (LEM I), is implemented in treatment planning systems (TPS) clinically used in European particle therapy centers. From the physical properties of the specific ion radiation, the LEM calculates the survival probabilities of the cell or tissue type under study, provided that some determinant input parameters are initially defined. Mathematical models can be used to predict, for instance, the tumor control probability (TCP), and then evaluate treatment outcomes. This work studies the influence of the LEM I input parameters on the TCP predictions in the specific case of prostate cancer. Several published input parameters and their combinations were tested. Their influence on the dose distributions calculated for a water phantom and for a patient geometry was evaluated using the TPS TRiP98. Changing input parameters induced clinically significant modifications of the mean dose (up to a factor of 3.5), spatial dose distribution, and TCP predictions (up to a factor of 2.6 for D50). TCP predictions were found to be more sensitive to the parameter threshold dose (Dt) than to the biological parameters α and β. Additionally, an analytical expression was derived for correlating α, β and Dt, and this has emphasized the importance of [Formula: see text]. The improvement of radiobiological models for particle TPS will only be achieved when more patient outcome data with well-defined patient groups, fractionation schemes and well-defined end-points are available.
La Russa, D
2015-06-15
Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk), were generated assuming α/β = 10 Gy, and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
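The model class being fitted here, Poisson TCP with linear-quadratic cell kill and delayed clonogen repopulation, can be written down in a few lines. This is a textbook-style sketch, not the paper's fitted model: all parameter values below are illustrative (the clonogen density of 10^7 cm^-3 is from the abstract; the tumour volume, α, and timing parameters are assumptions).

```python
from math import exp, log

def tcp_poisson(D, d, alpha, ab_ratio, rho_V, T, Tk, Td):
    """Poisson TCP: exp(-expected surviving clonogens), with LQ cell kill
    alpha*D*(1 + d/(alpha/beta)) and exponential repopulation starting
    after a delay Tk, with doubling time Td, over treatment time T."""
    bed_kill = alpha * D * (1.0 + d / ab_ratio)   # LQ log cell kill
    repop = log(2.0) * max(0.0, T - Tk) / Td      # repopulation term
    surviving = rho_V * exp(-bed_kill + repop)    # expected survivors
    return exp(-surviving)

# hypothetical schedule: 60 Gy in 2 Gy fractions over 40 days,
# 10^7 clonogens/cm^3 in an assumed 50 cm^3 tumour
tcp = tcp_poisson(D=60.0, d=2.0, alpha=0.35, ab_ratio=10.0,
                  rho_V=1e7 * 50.0, T=40.0, Tk=21.0, Td=3.0)
print(tcp)
```

The Bayesian machinery in the abstract (PyMC3, slice sampling, quadrature over a population distribution of α) wraps around a likelihood built from exactly this kind of TCP function.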
Huang, Yangxin; Yan, Chunning; Yin, Ping; Lu, Meixia
2016-01-01
Longitudinal data arise frequently in medical studies and it is a common practice to analyze such complex data with nonlinear mixed-effects (NLME) models. However, the following four issues may be critical in longitudinal data analysis. (i) A homogeneous population assumption for models may be unrealistically obscuring important features of between-subject and within-subject variations; (ii) normality assumption for model errors may not always give robust and reliable results, in particular, if the data exhibit skewness; (iii) the responses may be missing and the missingness may be nonignorable; and (iv) some covariates of interest may often be measured with substantial errors. When carrying out statistical inference in such settings, it is important to account for the effects of these data features; otherwise, erroneous or even misleading results may be produced. Inferential procedures can be complicated dramatically when these four data features arise. In this article, the Bayesian joint modeling approach based on a finite mixture of NLME joint models with skew distributions is developed to study simultaneous impact of these four data features, allowing estimates of both model parameters and class membership probabilities at population and individual levels. A real data example is analyzed to demonstrate the proposed methodologies, and to compare various scenarios-based potential models with different specifications of distributions.
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
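The paper's central point is easy to demonstrate numerically: a response can be grossly non-normal while the residuals of a correctly specified linear model are perfectly well behaved. The sketch below uses a two-group "ANOVA" with well-separated group means (the means, sample size, and the simple kurtosis check are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-group design: group means differ strongly, errors are normal N(0, 1).
g = np.repeat([0, 1], 500)
y = np.where(g == 0, 2.0, 10.0) + rng.normal(0.0, 1.0, size=1000)

# The raw response is bimodal (far from normal), but the residuals from
# the group-mean model are just the normal errors.
group_means = np.array([y[g == 0].mean(), y[g == 1].mean()])
resid = y - group_means[g]

def excess_kurtosis(x):
    """Crude normality check: ~0 for normal data, strongly negative for a
    well-separated bimodal mixture."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

print(excess_kurtosis(y), excess_kurtosis(resid))
```

A normality test applied to y would reject decisively here, yet the t/F machinery is entirely appropriate, because the assumption pertains to resid, not to y.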
Modeling speech intelligibility in quiet and noise in listeners with normal and impaired hearing.
Rhebergen, Koenraad S; Lyzenga, Johannes; Dreschler, Wouter A; Festen, Joost M
2010-03-01
The speech intelligibility index (SII) is an often used calculation method for estimating the proportion of audible speech in noise. For speech reception thresholds (SRTs), measured in normally hearing listeners using various types of stationary noise, this model predicts a fairly constant speech proportion of about 0.33, necessary for Dutch sentence intelligibility. However, when the SII model is applied for SRTs in quiet, the estimated speech proportions are often higher, and show a larger inter-subject variability, than found for speech in noise near normal speech levels [65 dB sound pressure level (SPL)]. The present model attempts to alleviate this problem by including cochlear compression. It is based on a loudness model for normally hearing and hearing-impaired listeners of Moore and Glasberg [(2004). Hear. Res. 188, 70-88]. It estimates internal excitation levels for speech and noise and then calculates the proportion of speech above noise and threshold using similar spectral weighting as used in the SII. The present model and the standard SII were used to predict SII values in quiet and in stationary noise for normally hearing and hearing-impaired listeners. The present model predicted SIIs for three listener types (normal hearing, noise-induced, and age-induced hearing loss) with markedly less variability than the standard SII.
NASA Astrophysics Data System (ADS)
Prete, James John
1999-12-01
The studies proposed were designed to investigate the relationship between transperineal permanent prostate implant quality, as modeled by the radiobiological quantifier of implant quality, tumor control probability (TCP), and treatment efficacy, as measured by prostate-specific antigen (PSA) failure-free survival. It was hypothesized that TCP could be useful in identifying which patients, or group of patients, might be at an increased risk for treatment failure among patients receiving 125I transperineal permanent prostate brachytherapy (TPPB) as the sole modality of treatment for early or intermediate stage prostatic carcinoma. The formal statement of hypothesis was that the linear-quadratic tumor control probability model for monotherapeutic 125I transperineal permanent prostate brachytherapy correlates with prostate-specific antigen failure-free survival. The specific aims were: [i] to implement the TCP model in a computerized treatment planning system for TPPB, using the recently recommended dose calculation formalism and benchmark data presented in AAPM TG43, and validate it; [ii] to compute and examine the relationship between TCP and PSA failure-free survival for patients receiving monotherapeutic 125I TPPB; [iii] to investigate the influence of the definition of PSA failure on the relationship between TCP and PSA failure-free survival rates; and [iv] to develop a method for improving the TCP model. The conclusions were: [i] the model as implemented using the AAPM TG43 formalism produced results similar to those calculated by the original model; TCP was demonstrated to correlate strongly and similarly with underdosed prostate volume in comparison to data published from the original model; [ii] an analysis of 125I implants demonstrated that patients stratified into the high-TCP group had PSA failure-free survival rates superior to those for patients in the low-TCP group, regardless of which of the five definitions of PSA failure was applied to
NASA Astrophysics Data System (ADS)
Gill, Wonpyong
2016-01-01
This study calculated the growing probability of additional offspring with the advantageous reversal allele in an asymmetric sharply-peaked landscape using the decoupled continuous-time mutation-selection model. The growing probability was calculated for various population sizes, N, sequence lengths, L, selective advantages, s, fitness parameters, k, and measuring parameters, C. The saturated growing probability in the stochastic region was approximately the effective selective advantage, s*, when C ≫ 1/Ns* and s* ≪ 1. The present study suggests that the growing probability in the stochastic region in the decoupled continuous-time mutation-selection model can be described using the theoretical formula for the growing probability in the Moran two-allele model. The selective advantage ratio, which represents the ratio of the effective selective advantage to the selective advantage, does not depend on the population size, selective advantage, measuring parameter or fitness parameter; instead, the selective advantage ratio decreases with increasing sequence length.
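For reference, the classical fixation ("growing") probability in the Moran two-allele model, to which the abstract compares its results, has a closed form; the abstract's formula replaces s by the effective selective advantage s*. A minimal sketch of the standard expression (our parameter values are arbitrary):

```python
def moran_fixation_prob(N, s):
    """Fixation probability of a single mutant with relative fitness
    r = 1 + s in the Moran two-allele model of population size N:
    (1 - 1/r) / (1 - r^(-N)).  Tends to 1/N as s -> 0 and to ~s for
    small s with N*s >> 1."""
    r = 1.0 + s
    return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

print(moran_fixation_prob(1000, 0.02))  # strong-selection regime, ~s/(1+s)
```

In the strong-selection regime (N*s much greater than 1) the denominator is essentially 1, so the probability saturates near s/(1+s), matching the abstract's observation that the saturated growing probability is approximately the (effective) selective advantage.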
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
Blood Vessel Normalization in the Hamster Oral Cancer Model for Experimental Cancer Therapy Studies
Ana J. Molinari; Romina F. Aromando; Maria E. Itoiz; Marcela A. Garabalino; Andrea Monti Hughes; Elisa M. Heber; Emiliano C. C. Pozzi; David W. Nigg; Veronica A. Trivillin; Amanda E. Schwint
2012-07-01
Normalization of tumor blood vessels improves drug and oxygen delivery to cancer cells. The aim of this study was to develop a technique to normalize blood vessels in the hamster cheek pouch model of oral cancer. Materials and Methods: Tumor-bearing hamsters were treated with thalidomide and were compared with controls. Results: Twenty-eight hours after treatment with thalidomide, the blood vessels of premalignant tissue observable in vivo became narrower and less tortuous than those of controls; Evans Blue Dye extravasation in tumor was significantly reduced (indicating a reduction in aberrant tumor vascular hyperpermeability that compromises blood flow), and tumor blood vessel morphology in histological sections, labeled for Factor VIII, revealed a significant reduction in compressive forces. These findings indicated blood vessel normalization with a window of 48 h. Conclusion: The technique developed herein has rendered the hamster oral cancer model amenable to research, with the potential benefit of vascular normalization in head and neck cancer therapy.
Modelling the Shear Behaviour of Rock Joints with Asperity Damage Under Constant Normal Stiffness
NASA Astrophysics Data System (ADS)
Indraratna, Buddhima; Thirukumaran, Sivanathan; Brown, E. T.; Zhu, Song-Ping
2015-01-01
The shear behaviour of a rough rock joint depends largely on the surface properties of the joint, as well as the boundary conditions applied across the joint interface. This paper proposes a new analytical model to describe the complete shear behaviour of rough joints under constant normal stiffness (CNS) boundary conditions by incorporating the effect of damage to asperities. In particular, the effects of initial normal stress levels and joint surface roughness on the shear behaviour of joints under CNS conditions were studied, and the analytical model was validated through experimental results. Finally, the practical application of the model to a jointed rock slope stability analysis is presented.
NASA Astrophysics Data System (ADS)
Hou, Rui; Changyue, Jiana; He, Tingting; Mao, Tengyue; Yu, Jianwei; Lei, Bo
2013-04-01
In an optical burst switching core node, each output port is equipped with a different network interface unit providing a specific data rate. Bursts therefore select output ports with different probabilities, according to the path-length metric-based optimal routing algorithm and the wavelength resource situation. Previous studies have ignored this issue. We establish a burst output model that accounts for the different service rates of the output ports and the different port-selection probabilities. We calculate the burst-blocking probability and analyze in detail the relationship between service rate and output-port selection probability.
NASA Astrophysics Data System (ADS)
Gomez, Thomas A.; Winget, Donald E.; Montgomery, Michael H.; Kilcrease, Dave; Nagayama, Taisuke
2016-01-01
White dwarfs are interesting for a number of applications, including studying equations of state, stellar pulsations, and determining the age of the universe. These applications require accurate determination of surface conditions: temperature and surface gravity (or mass). The most common technique to estimate the temperature and gravity is to find the model spectrum that best fits the observed spectrum of a star (known as the spectroscopic method); however, this method rests on our ability to accurately model the hydrogen spectrum at high densities. There are currently disagreements between the spectroscopic method and other techniques for determining mass. We seek to resolve this issue by exploring the continuum lowering (or disappearance of states) of the hydrogen atom. The current formalism, called "occupation probability," defines some criteria for the isolated atom's bound state to be ionized, then extrapolates the continuous spectrum to the same energy threshold. The two are then combined to create the final cross-section. I introduce a new way of calculating the atomic spectrum by averaging the plasma interaction potential energy (previously used in the physics community) and directly integrating the Schrodinger equation. This technique is a major improvement over the Taylor expansion used to describe the ion-emitter interaction; it removes the need for the occupation probability and treats continuum states and discrete states on the same footing in the spectrum calculation. The resulting energy spectrum is in fact many discrete states that, when averaged over the electric field distribution in the plasma, appear to be a continuum. In the low-density limit, the two methods are in agreement, but they show some differences at high densities (above 10^17 e/cc), including line shifts near the "continuum" edge.
Kingdom, Frederick A A; Baldwin, Alex S; Schmidtmann, Gunnar
2015-01-01
Many studies have investigated how multiple stimuli combine to reach threshold. There are, broadly speaking, two ways this can occur: additive summation (AS), where inputs from the different stimuli add together in a single mechanism, or probability summation (PS), where different stimuli are detected independently by separate mechanisms. PS is traditionally modeled under high threshold theory (HTT); however, tests have shown that HTT is incorrect and that signal detection theory (SDT) is the better framework for modeling summation. Modeling the equivalent of PS under SDT is, however, relatively complicated, leading many investigators to use Monte Carlo simulations for the predictions. We derive formulas that employ numerical integration to predict the proportion correct for detecting multiple stimuli assuming PS under SDT, for the situations in which stimuli are either equal or unequal in strength. Both formulas are general purpose, calculating performance for forced-choice tasks with M alternatives, n stimuli, in Q monitored mechanisms, each subject to a non-linear transducer with exponent τ. We show how the probability (and additive) summation formulas can be used to simulate psychometric functions, which when fitted with Weibull functions make signature predictions for how thresholds and psychometric function slopes vary as a function of τ, n, and Q. We also show how one can fit the formulas directly to real psychometric functions using data from a binocular summation experiment, and show how one can obtain estimates of τ and test whether binocular summation conforms more to PS or AS. The methods described here can be readily applied using software functions newly added to the Palamedes toolbox.
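The quantity the paper computes by numerical integration can also be estimated by simulation, which makes the setup concrete. The sketch below is a Monte Carlo stand-in, not the Palamedes implementation: M-alternative forced choice, Q monitored mechanisms per alternative, a single equal-strength signal, a linear transducer (τ = 1), and a max-decision rule.

```python
import numpy as np

rng = np.random.default_rng(2)

def pc_ps_mc(dprime, Q=2, M=2, trials=200_000):
    """Monte Carlo proportion correct for probability summation under SDT:
    the observer monitors Q independent mechanisms per alternative and
    picks the alternative containing the largest single response."""
    # signal alternative: one mechanism at d', the other Q-1 at zero
    sig = np.maximum(rng.normal(dprime, 1.0, trials),
                     rng.normal(0.0, 1.0, (Q - 1, trials)).max(axis=0))
    # each of the M-1 noise alternatives: Q noise-only mechanisms
    noise = rng.normal(0.0, 1.0, (M - 1, Q, trials)).max(axis=(0, 1))
    return np.mean(sig > noise)

pc_chance, pc_signal = pc_ps_mc(0.0), pc_ps_mc(2.0)
print(pc_chance, pc_signal)   # near chance (0.5), then well above it
```

The paper's contribution is to replace exactly this kind of simulation with closed numerical-integration formulas that also handle unequal stimulus strengths, n stimuli, and a transducer exponent τ.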
Rationalizing Hybrid Earthquake Probabilities
NASA Astrophysics Data System (ADS)
Gomberg, J.; Reasenberg, P.; Beeler, N.; Cocco, M.; Belardinelli, M.
2003-12-01
An approach to including stress transfer and frictional effects in estimates of the probability of failure of a single fault affected by a nearby earthquake has been suggested in Stein et al. (1997). This `hybrid' approach combines conditional probabilities, which depend on the time elapsed since the last earthquake on the affected fault, with Poissonian probabilities that account for friction and depend only on the time since the perturbing earthquake. The latter are based on the seismicity rate change model developed by Dieterich (1994) to explain the temporal behavior of aftershock sequences in terms of rate-state frictional processes. The model assumes an infinite population of nucleation sites that are near failure at the time of the perturbing earthquake. In the hybrid approach, assuming the Dieterich model can lead to significant transient increases in failure probability. We explore some of the implications of applying the Dieterich model to a single fault and its impact on the hybrid probabilities. We present two interpretations that we believe can rationalize the use of the hybrid approach. In the first, a statistical distribution representing uncertainties in elapsed and/or mean recurrence time on the fault serves as a proxy for Dieterich's population of nucleation sites. In the second, we imagine a population of nucleation patches distributed over the fault with a distribution of maturities. In both cases we find that the probability depends on the time since the last earthquake. In particular, the size of the transient probability increase may only be significant for faults already close to failure. Neglecting the maturity of a fault may lead to overestimated rate and probability increases.
NASA Astrophysics Data System (ADS)
Kyselý, Jan
2010-08-01
Bootstrap, a technique for determining the accuracy of statistics, is a tool widely used in climatological and hydrological applications. The paper compares coverage probabilities of confidence intervals of high quantiles (5- to 200-year return values) constructed by the nonparametric and parametric bootstrap in frequency analysis of heavy-tailed data, typical for maxima of precipitation amounts. The simulation experiments are based on a wide range of models used for precipitation extremes (generalized extreme value, generalized Pareto, generalized logistic, and mixed distributions). The coverage probability of the confidence intervals is quantified for several sample sizes (n = 20, 40, 60, and 100) and tail behaviors. We show that both bootstrap methods underestimate the width of the confidence intervals but that the parametric bootstrap is clearly superior to the nonparametric one. Even a misspecification of the parametric model—often unavoidable in practice—does not prevent the parametric bootstrap from performing better in most cases. A tendency to narrower confidence intervals from the nonparametric than parametric bootstrap is demonstrated in the application to high quantiles of distributions of observed maxima of 1- and 5-day precipitation amounts; the differences increase with the return level. The results show that estimation of uncertainty based on nonparametric bootstrap is highly unreliable, especially for small and moderate sample sizes and for very heavy-tailed data.
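The coverage comparison the paper performs can be reproduced in miniature. The sketch below is deliberately simplified to an Exp(1) parent (the paper uses heavy-tailed GEV/GP/GL models, where the nonparametric bootstrap's failure in the upper tail is even more pronounced); sample size, replication counts, and the 0.99 target quantile are our own choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Coverage of 95% percentile-bootstrap intervals for a high quantile.
n, B, reps, p = 40, 400, 300, 0.99
true_q = -np.log(1 - p)              # true 0.99 quantile of Exp(1)

hits_par = hits_np = 0
for _ in range(reps):
    x = rng.exponential(1.0, n)
    # parametric bootstrap: refit the exponential scale to each resample
    q_par = np.array([np.mean(rng.exponential(x.mean(), n)) * (-np.log(1 - p))
                      for _ in range(B)])
    # nonparametric bootstrap: resample the data with replacement
    q_np = np.array([np.quantile(rng.choice(x, n, replace=True), p)
                     for _ in range(B)])
    lo1, hi1 = np.percentile(q_par, [2.5, 97.5])
    lo2, hi2 = np.percentile(q_np, [2.5, 97.5])
    hits_par += lo1 <= true_q <= hi1
    hits_np += lo2 <= true_q <= hi2

print(hits_par / reps, hits_np / reps)  # parametric covers far more often
```

The nonparametric interval cannot reach above the sample maximum, so for a quantile far in the tail its coverage collapses, which is the qualitative mechanism behind the paper's conclusion.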
NASA Astrophysics Data System (ADS)
Koido, Tetsuya; Tomarikawa, Ko; Yonemura, Shigeru; Tokumasu, Takashi
2011-05-01
Molecular Dynamics (MD) was used to simulate dissociative adsorption of a hydrogen molecule on the Pt(111) surface, considering the movement of the surface atoms and gas molecules. The Embedded Atom Method (EAM) was applied to represent the interaction potential. The parameters of the EAM potential were determined such that the values of the dissociation barrier at different sites estimated by the EAM potential agreed with those of DFT calculations. A number of MD simulations of gas molecules impinging on a Pt(111) surface were carried out, randomly changing initial orientations, incident azimuth angles, and impinging positions on the surface with fixed initial translational energy, initial rotational energy, and incident polar angle. The number of collisions in which the gas molecule was dissociated was counted to compute the dissociation probability. The dissociation probability was analyzed and expressed by a mathematical function of the initial conditions of the impinging molecule, namely the translational energy, rotational energy, and incident polar angle. Furthermore, the utility of the model was verified by comparing its results with raw MD simulation results of molecular beam experiments.
Mixture of normal distributions in multivariate null intercept measurement error model.
Aoki, Reiko; Pinto Júnior, Dorival Leão; Achcar, Jorge Alberto; Bolfarine, Heleno
2006-01-01
In this paper we propose the use of a multivariate null intercept measurement error model, where the true unobserved value of the covariate follows a mixture of two normal distributions. The proposed model is applied to a dental clinical trial presented in Hadgu and Koch (1999). A Bayesian approach is considered and a Gibbs Sampler is used to perform the computations.
Gronewold, Andrew D; Wolpert, Robert L
2008-07-01
Most probable number (MPN) and colony-forming-unit (CFU) estimates of fecal coliform bacteria concentration are common measures of water quality in coastal shellfish harvesting and recreational waters. Estimating procedures for MPN and CFU have intrinsic variability and are subject to additional uncertainty arising from minor variations in experimental protocol. It has been observed empirically that the standard multiple-tube fermentation (MTF) decimal dilution analysis MPN procedure is more variable than the membrane filtration CFU procedure, and that MTF-derived MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the variability in, and discrepancy between, MPN and CFU measurements. We then compare our model to water quality samples analyzed using both MPN and CFU procedures, and find that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our results indicate that MPN and CFU intra-sample variability does not stem from human error or laboratory procedure variability, but is instead a simple consequence of the probabilistic basis for calculating the MPN. These results demonstrate how probabilistic models can be used to compare samples from different analytical procedures, and to determine whether transitions from one procedure to another are likely to cause a change in quality-based management decisions.
NASA Astrophysics Data System (ADS)
Silva, A. Christian; Yakovenko, Victor M.
2003-06-01
We compare the probability distribution of returns for the three major stock-market indexes (Nasdaq, S&P500, and Dow-Jones) with an analytical formula recently derived by Drăgulescu and Yakovenko for the Heston model with stochastic variance. For the period 1982-1999, we find very good agreement between the theory and the data for a wide range of time lags from 1 to 250 days. On the other hand, deviations start to appear when the data for 2000-2002 are included. We interpret this as statistical evidence of a major change in the market, from a positive growth rate in the 1980s and 1990s to a negative rate in the 2000s.
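A hedged Monte Carlo sketch of the mechanism discussed above: under the Heston model, stochastic variance makes the distribution of log-returns leptokurtic relative to a Gaussian. The parameter values below are illustrative, not the fitted values of Drăgulescu and Yakovenko.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler scheme for the Heston model (illustrative parameters):
#   dv = -gamma * (v - vbar) * dt + kappa * sqrt(v) * dW1
#   dx = -(v / 2) * dt + sqrt(v) * dW2,   x = log-return
gamma, vbar, kappa = 22.0, 0.01, 0.6
dt, n_steps, n_paths = 1.0 / 250, 25, 20000
v = np.full(n_paths, vbar)
x = np.zeros(n_paths)
for _ in range(n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += -0.5 * v * dt + np.sqrt(v) * dW2
    v = np.abs(v - gamma * (v - vbar) * dt + kappa * np.sqrt(v) * dW1)  # reflect at 0

z = (x - x.mean()) / x.std()
excess_kurtosis = (z ** 4).mean() - 3.0   # positive: fatter tails than Gaussian
```

Comparing the histogram of x against the closed-form Heston density at each time lag is the essence of the comparison carried out in the paper.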
NASA Astrophysics Data System (ADS)
Kaur, Arshdeep; Chopra, Sahila; Gupta, Raj K.
2014-08-01
The compound nucleus (CN) fusion/formation probability PCN is defined and its detailed variations with the CN excitation energy E*, center-of-mass energy Ec.m., fissility parameter χ, CN mass number ACN, and Coulomb interaction parameter Z1Z2 are studied for the first time within the dynamical cluster-decay model (DCM). The model is a nonstatistical description of the decay of a CN to all possible processes. The (total) fusion cross section σfusion is the sum of the CN and noncompound nucleus (nCN) decay cross sections, each calculated as a dynamical fragmentation process. The CN cross section σCN is constituted of evaporation residues and fusion-fission, including intermediate-mass fragments, each calculated for all contributing decay fragments (A1, A2) in terms of their formation and barrier penetration probabilities P0 and P. The nCN cross section σnCN is determined as the quasi-fission (qf) process, where P0 = 1 and P is calculated for the entrance-channel nuclei. The DCM, with effects of deformations and orientations of nuclei included, is used to study PCN for about a dozen "hot" fusion reactions forming a CN of mass number A ≈ 100 up to superheavy nuclei, and for various nuclear interaction potentials. Interesting results are that PCN = 1 for complete fusion, but PCN < 1 or PCN ≪ 1 due to the nCN contribution, depending strongly on the different parameters of the entrance-channel reaction but found to be independent of the nuclear interaction potentials used.
López, E; Ibarz, E; Herrera, A; Puértolas, S; Gabarre, S; Más, Y; Mateo, J; Gil-Albarova, J; Gracia, L
2016-07-01
Osteoporotic vertebral fractures represent a major cause of disability, loss of quality of life and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture from bone mineral density (BMD) measurements. A previously developed model, based on Damage and Fracture Mechanics, was applied for the evaluation of the mechanical magnitudes involved in the fracture process from clinical BMD measurements. BMD evolution in untreated patients and in patients with seven different treatments was analyzed from clinical studies in order to compare the variation in the risk of fracture. The predictive model was applied in a finite element simulation of the whole lumbar spine, obtaining detailed maps of damage and fracture probability and identifying high-risk local zones at the vertebral body. For every vertebra, strontium ranelate exhibits the highest decrease, whereas the minimum decrease is achieved with oral ibandronate. All the treatments manifest similar trends for every vertebra. Conversely, for the natural BMD evolution, as bone stiffness decreases, the mechanical damage and fracture probability show a significant increase (as occurs in the natural history of BMD). Vertebral walls and external areas of vertebral end plates are the zones at greatest risk, in coincidence with the typical locations of osteoporotic fractures, characterized by vertebral crushing due to the collapse of the vertebral walls. This methodology could be applied to an individual patient to obtain the trends corresponding to different treatments, helping to identify at-risk individuals in the early stages of osteoporosis and to guide treatment decisions.
Tang, An-Min; Tang, Nian-Sheng
2015-02-28
We propose a semiparametric multivariate skew-normal joint model for multivariate longitudinal and multivariate survival data. One main feature of the posited model is that we relax the commonly used normality assumption for random effects and within-subject error by using a centered Dirichlet process prior to specify the random effects distribution and using a multivariate skew-normal distribution to specify the within-subject error distribution and model trajectory functions of longitudinal responses semiparametrically. A Bayesian approach is proposed to simultaneously obtain Bayesian estimates of unknown parameters, random effects and nonparametric functions by combining the Gibbs sampler and the Metropolis-Hastings algorithm. Particularly, a Bayesian local influence approach is developed to assess the effect of minor perturbations to within-subject measurement error and random effects. Several simulation studies and an example are presented to illustrate the proposed methodologies.
Hu, Xingdi; Chen, Xinguang; Cook, Robert L.; Chen, Ding-Geng; Okafor, Chukwuemeka
2016-01-01
Background The probabilistic discrete event systems (PDES) method provides a promising approach to study the dynamics of underage drinking using cross-sectional data. However, the utility of this approach is often limited because the constructed PDES model is often non-identifiable. The purpose of the current study is to test a new method to solve the model. Methods A PDES-based model of alcohol use behavior was developed with four progression stages (never-drinker [ND], light/moderate-drinker [LMD], heavy-drinker [HD], and ex-drinker [XD]) linked with 13 possible transition paths. We tested the proposed model with data for participants aged 12-21 from the 2012 National Survey on Drug Use and Health (NSDUH). The Moore-Penrose (M-P) generalized inverse matrix method was applied to solve the proposed model. Results Annual transition probabilities by age group for the 13 drinking progression pathways were successfully estimated with the M-P generalized inverse matrix approach. Results from our analysis indicate an inverse "J"-shaped curve characterizing the pattern of experimental use of alcohol from adolescence to young adulthood. We also observed a dramatic increase in the initiation of LMD and HD after age 18 and a sharp decline in quitting light and heavy drinking. Conclusion Our findings are consistent with the developmental perspective regarding the dynamics of underage drinking, demonstrating the utility of the M-P method in obtaining a unique solution for the partially-observed PDES drinking behavior model. The M-P approach we tested in this study will facilitate the use of the PDES approach to examine many health behaviors with the widely available cross-sectional data. PMID:26511344
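The key computational step, solving a possibly under-determined linear system with the Moore-Penrose generalized inverse, can be sketched as follows; the matrix and right-hand side are hypothetical illustrations, not the NSDUH drinking model itself.

```python
import numpy as np

# Hypothetical under-determined system A p = b linking observed stage
# proportions (b) to unobserved transition probabilities (p); the numbers
# are illustrative, not the NSDUH drinking model.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.array([0.9, 0.7, 0.5])

# The Moore-Penrose pseudoinverse gives the minimum-norm least-squares solution.
p = np.linalg.pinv(A) @ b
```

Because A has full row rank here, the pseudoinverse solution satisfies the system exactly; among the infinitely many exact solutions, it is the one of minimum Euclidean norm, which is how the non-identifiability is resolved.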
Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E
2015-01-07
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between the background and high-uptake regions of only a few millimeters in diameter was observed in LOR-PDF reconstruction than in other methods.
Modeling the gait of normal and Parkinsonian persons for improving the diagnosis.
Sarbaz, Yashar; Banaie, Masood; Pooyan, Mohammad; Gharibzadeh, Shahriar; Towhidkhah, Farzad; Jafari, Ayyoob
2012-02-16
In this study, we present a model for the gait of normal and Parkinson's disease (PD) persons. Gait is semi-periodic and has fractal properties. The sine circle map (SCM) relation has a sinusoidal term and can show chaotic behaviour. Therefore, we used the SCM as a basis for our model structure. Moreover, some similarities exist between the parameters of this relation and the basal ganglia (BG) structure. This relation can explain the complex behaviours and the complex structure of the BG. The presented model can simulate the BG behaviour globally. A model parameter, Ω, has a key role in the model response. We showed that when Ω is between 0.6 and 0.8, the model simulates the behaviour of normal persons; values outside this range correspond to PD persons. Our statistical tests show that there is a significant difference between the Ω of normal persons and that of PD patients. We conclude that Ω can be introduced as a parameter to distinguish normal and PD persons. Additionally, our results showed that the Spearman correlation between Ω and the severity of PD is 0.586. This parameter may be a good index of PD severity.
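A minimal sketch of the model's core iteration, assuming the standard form of the sine circle map; the coupling strength K = 1 and the initial phase are illustrative choices, not the study's fitted values.

```python
import numpy as np

def sine_circle_map(theta0, omega, K, n):
    # theta_{n+1} = theta_n + Omega - (K / (2*pi)) * sin(2*pi*theta_n)  (mod 1)
    theta, out = theta0, np.empty(n)
    for i in range(n):
        theta = (theta + omega - K / (2.0 * np.pi) * np.sin(2.0 * np.pi * theta)) % 1.0
        out[i] = theta
    return out

# Omega in [0.6, 0.8] was associated with normal gait in the study;
# K = 1.0 and the initial phase are illustrative.
normal = sine_circle_map(0.2, 0.7, 1.0, 500)
```

Fitting Omega to the inter-stride time series of a subject and checking whether it falls inside [0.6, 0.8] is the classification idea described in the abstract.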
NASA Astrophysics Data System (ADS)
Tai, An; Liu, Feng; Gore, Elizabeth; Li, X. Allen
2016-05-01
We report a modeling study of tumor response after stereotactic body radiation therapy (SBRT) for early-stage non-small-cell lung carcinoma using published clinical data with a regrowth model. A linear-quadratic inspired regrowth model was proposed to analyze the tumor control probability (TCP) based on a series of published SBRT data, in which a tumor is controlled for an individual patient if the number of tumor cells is smaller than a critical value Kcr. The regrowth model contains radiobiological parameters such as α, α/β, and the potential doubling time Tp. The model also takes into account the heterogeneity of tumors and tumor regrowth after radiation treatment. The model was first used to fit TCP data from a single institution. The extracted fitting parameters were then used to predict the TCP data from another institution with a similar dose fractionation scheme. Finally, the model was used to fit the pooled TCP data selected from 48 publications available in the literature at the time this manuscript was written. Excellent agreement between model predictions and single-institution data was found, and the extracted radiobiological parameters were α = 0.010 ± 0.001 Gy-1, α/β = 21.5 ± 1.0 Gy, and Tp = 133.4 ± 7.6 d. These parameters were α = 0.072 ± 0.006 Gy-1, α/β = 15.9 ± 1.0 Gy, and Tp = 85.6 ± 24.7 d when extracted from multi-institution data. This study shows that TCP saturates at a BED of around 120 Gy. A few new dose-fractionation schemes were proposed based on the extracted model parameters from multi-institution data. It is found that the regrowth model with an α/β around 16 Gy can be used to predict the dose response of lung tumors treated with SBRT. The extracted radiobiological parameters may be useful for comparing clinical outcome data of various SBRT trials and for designing new treatment regimens.
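For orientation, a plain Poisson TCP without the paper's regrowth and heterogeneity terms can be sketched with the quoted multi-institution parameters (α = 0.072 Gy⁻¹, α/β = 15.9 Gy); the clonogen number n0 below is an illustrative placeholder, so the absolute TCP values are not the paper's predictions.

```python
import numpy as np

def bed(n_fx, d, abr):
    # Biologically effective dose (Gy) for n_fx fractions of size d.
    return n_fx * d * (1.0 + d / abr)

def tcp_poisson(n_fx, d, alpha=0.072, abr=15.9, n0=100.0):
    # Plain Poisson TCP, TCP = exp(-n0 * SF), with SF = exp(-alpha * BED);
    # n0 is an illustrative clonogen number, and the regrowth (Tp) and
    # heterogeneity terms of the paper's model are omitted.
    return float(np.exp(-n0 * np.exp(-alpha * bed(n_fx, d, abr))))
```

For a 3 × 18 Gy scheme, BED ≈ 115 Gy with α/β = 15.9 Gy, close to the ~120 Gy saturation level quoted in the abstract.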
Normal Brain-Skull Development with Hybrid Deformable VR Models Simulation.
Jin, Jing; De Ribaupierre, Sandrine; Eagleson, Roy
2016-01-01
This paper describes a simulation framework for a clinical application involving skull-brain co-development in infants, leading to a platform for craniosynostosis modeling. Craniosynostosis occurs when one or more sutures are fused early in life, resulting in an abnormal skull shape. Surgery is required to reopen the suture and reduce intracranial pressure, but is difficult without any predictive model to assist surgical planning. We aim to study normal brain-skull growth by computer simulation, which requires a head model and appropriate mathematical methods for brain and skull growth, respectively. On the basis of our previous model, we further specified the suture model into fibrous and cartilaginous sutures and developed an algorithm for skull extension. We evaluated the resulting simulation by comparison with case datasets and normal growth data.
Tumour and normal tissue radiobiology in mouse models: how close are mice to mini-humans?
Koontz, Bridget F; Verhaegen, Frank; De Ruysscher, Dirk
2017-01-01
Animal modelling is essential to the study of radiobiology and the advancement of clinical radiation oncology by providing preclinical data. Mouse models in particular have been highly utilized in the study of both tumour and normal tissue radiobiology because of their cost effectiveness and versatility. Technology has significantly advanced in preclinical radiation techniques to allow highly conformal image-guided irradiation of small animals in an effort to mimic human treatment capabilities. However, the biological and physical limitations of animal modelling should be recognized and considered when interpreting preclinical radiotherapy (RT) studies. Murine tumour and normal tissue radioresponse has been shown to vary from human cellular and molecular pathways. Small animal irradiation techniques utilize different anatomical boundaries and may have different physical properties than human RT. This review addresses the difference between the human condition and mouse models and discusses possible strategies for future refinement of murine models of cancer and radiation for the benefit of both basic radiobiology and clinical translation.
Ames, Heather; Grossberg, Stephen
2008-12-01
Auditory signals of speech are speaker dependent, but representations of language meaning are speaker independent. The transformation from speaker-dependent to speaker-independent language representations enables speech to be learned and understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by adaptive resonance theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [Peterson, G. E., and Barney, H.L., J. Acoust. Soc. Am. 24, 175-184 (1952).] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.
Modeling absolute differences in life expectancy with a censored skew-normal regression approach.
Moser, André; Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest.
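A sketch of the density underlying this approach, Azzalini's skew-normal, where a negative shape parameter produces the left-skewed times-to-death described above (parameter values illustrative; the paper's full model additionally handles censoring and left truncation):

```python
import numpy as np
from math import erf

def skew_normal_pdf(x, loc=0.0, scale=1.0, shape=0.0):
    # Azzalini skew-normal: f(x) = (2/scale) * phi(z) * Phi(shape*z),
    # with z = (x - loc) / scale; shape = 0 recovers the normal density.
    z = (np.asarray(x, dtype=float) - loc) / scale
    phi = np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)
    Phi = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in np.atleast_1d(shape * z)]))
    return 2.0 / scale * phi * Phi

xs = np.linspace(-12.0, 12.0, 24001)
f = skew_normal_pdf(xs, loc=0.0, scale=2.0, shape=-4.0)   # negative shape: left skew
```

Because the location parameter enters the mean linearly, regression coefficients on loc retain the "difference in survival time" interpretation the authors exploit.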
Stein, Ross S.
2008-01-01
The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M = 6.65 (A = 537 km²) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
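A hedged sketch of the two equally weighted relations, with constants as commonly quoted for Ellsworth-B (WGCEP 2003) and the Hanks-Bakun bilinear form; verify the constants against the original papers before use.

```python
import numpy as np

def mw_ellsworth_b(area_km2):
    # Single log-linear relation (WGCEP 2003, Ellsworth-B), A in km^2.
    return np.log10(area_km2) + 4.2

def mw_hanks_bakun(area_km2):
    # Bilinear relation with a slope change at A = 537 km^2 (constants as
    # commonly quoted for Hanks and Bakun; check against the source).
    a = np.log10(area_km2)
    return np.where(area_km2 <= 537.0, a + 3.98, (4.0 / 3.0) * a + 3.07)
```

The bilinear branch steepens for large areas, capturing the transition from area- to length-scaling of slip that the abstract attributes to Hanks and Bakun.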
The perception of probability.
Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E
2014-01-01
We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
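A minimal sketch of the estimation problem the subjects face: a stepwise nonstationary Bernoulli sequence whose hidden parameter jumps once, recovered here by a maximum-likelihood single change point (a far simpler device than the model's full change-point encoding, shown only to make the setting concrete).

```python
import numpy as np

rng = np.random.default_rng(2)

# Stepwise nonstationary Bernoulli source: the hidden p jumps from 0.2 to 0.8
# at trial 200 (values illustrative).
p_true = np.r_[np.full(200, 0.2), np.full(200, 0.8)]
obs = (rng.random(400) < p_true).astype(int)

def seg_ll(x):
    # Bernoulli log-likelihood of a segment at its maximum-likelihood rate.
    k, n = x.sum(), len(x)
    p = min(max(k / n, 1e-9), 1.0 - 1e-9)
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

# Best single change point: the split maximizing the two-segment log-likelihood.
t_hat = max(range(10, 390), key=lambda t: seg_ll(obs[:t]) + seg_ll(obs[t:]))
```

The paper's model generalizes this idea to an unknown number of change points and treats the currently perceived probability as a by-product of that compact encoding.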
NASA Astrophysics Data System (ADS)
Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru
2015-10-01
An integrated method combining a proper orthogonal decomposition based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation, and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at a Reynolds number of 1000. The PF and EnKF are employed to estimate temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of the PF in representing a PDF compared to the EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders of magnitude faster than the reference numerical simulation based on the Navier-Stokes equations.
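The particle filter half of the comparison can be sketched on a scalar toy model; this bootstrap PF with multinomial resampling is a generic textbook version, not the ROM-PF of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Bootstrap particle filter on a scalar toy model:
#   x_t = 0.9 * x_{t-1} + w_t,   y_t = x_t + v_t,   w, v ~ Normal.
T, N, q, r = 200, 1000, 0.5, 0.5
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0.0, q)
y = x_true + rng.normal(0.0, r, T)

particles = rng.normal(0.0, 1.0, N)
est = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(0.0, q, N)   # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)      # likelihood weights
    w /= w.sum()
    est[t] = np.dot(w, particles)                         # posterior mean
    particles = particles[rng.choice(N, N, p=w)]          # multinomial resampling
```

Unlike the EnKF, which implicitly assumes Gaussian statistics, the weighted particle cloud can represent an arbitrary PDF of the state, which is the flexibility the paper credits for the better ROM-PF predictions.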
Boitard, Simon; Loisel, Patrice
2007-05-01
The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutation. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time consuming than other methods such as Monte Carlo simulations.
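As a simplified stand-in for the two-locus solver described above, an explicit finite-difference scheme for the forward Kolmogorov (Fokker-Planck) equation of the neutral one-locus Wright-Fisher diffusion looks like:

```python
import numpy as np

# Explicit finite differences for the forward Kolmogorov equation of the
# neutral one-locus Wright-Fisher diffusion,
#   d(phi)/dt = 0.5 * d^2/dx^2 [ x * (1 - x) * phi ],
# a one-locus stand-in for the two-locus solver of the paper.
nx, dx, dt = 51, 0.02, 0.0005            # dt satisfies the explicit stability bound
x = np.linspace(0.0, 1.0, nx)
a = 0.5 * x * (1.0 - x)
phi = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)   # density concentrated at x = 0.5
phi /= phi.sum() * dx                           # normalize to unit mass
for _ in range(2000):                           # integrate to t = 1
    g = a * phi
    phi[1:-1] += dt * (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx ** 2
# Probability mass leaks into the absorbing boundaries (loss/fixation),
# so the interior mass is the probability that no allele has been lost.
```

Conditioning the interior density on non-absorption corresponds to the "given that no haplotype has been lost" density computed in the paper, here in one dimension instead of the haplotype-frequency simplex.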
NASA Astrophysics Data System (ADS)
Chodera, John D.; Noé, Frank
2010-09-01
Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
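The statistical component of the uncertainty can be illustrated with independent Dirichlet posteriors over the rows of a transition matrix, propagated to a derived observable; note the paper's sampler additionally enforces detailed balance (microscopic reversibility), which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(4)

# Dirichlet posterior over the rows of a 2-state transition matrix given
# hypothetical transition counts; rows are treated as independent here,
# whereas the paper's method also enforces detailed balance.
counts = np.array([[90, 10],
                   [20, 80]])
samples = np.stack([np.vstack([rng.dirichlet(row + 1) for row in counts])
                    for _ in range(5000)])

def stationary(P):
    # Stationary distribution: left eigenvector of P with eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

# Posterior of a derived observable: the equilibrium population of state 0,
# carrying the correlated uncertainty of the transition probabilities.
pi0 = np.array([stationary(P)[0] for P in samples])
```

Replacing the scalar observable with a per-state spectroscopic signal and averaging over the sampled matrices is, in spirit, how the full statistical uncertainty of a computed experiment is propagated.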
NASA Astrophysics Data System (ADS)
Liang, Zach; Lee, George C.
2012-09-01
The current AASHTO load and resistance factor design (LRFD) guidelines are formulated based on bridge reliability, which interprets traditional design safety factors into more rigorously deduced factors based on the theory of probability. This is a major advancement in bridge design specifications. However, LRFD is only calibrated for dead and live loads. In cases when extreme loads are significant, they need to be individually assessed. Combining regular loads with extreme loads has been a major challenge, mainly because the extreme loads are time-varying and cannot be directly combined with time-invariant loads to formulate the probability of structural failure. To overcome these difficulties, this paper suggests a methodology of comprehensive reliability, introducing the concept of partial failure probability to separate the loads so that each individual load combination under a certain condition can be approximated as time-invariant. Under these conditions, the extreme loads (also referred to as multiple hazard or MH loads) can be broken down into single effects. In Part II of this paper, a further breakdown of these conditional occurrence probabilities into pure conditions is discussed, using live truck loads and earthquake loads on a bridge as an example. There are three major steps in establishing load factors from MH load distributions: (1) formulate the failure probabilities; (2) normalize the various load distributions; and (3) establish design limit state equations. This paper describes the formulation of the failure probabilities of single and combined loads.
Partially linear models with autoregressive scale-mixtures of normal errors: A Bayesian approach
NASA Astrophysics Data System (ADS)
Ferreira, Guillermo; Castro, Mauricio; Lachos, Victor H.
2012-10-01
Normality and independence of error terms are typical assumptions for partial linear models. However, such assumptions may be unrealistic in many fields, such as economics, finance and biostatistics. In this paper, we develop a Bayesian analysis for the partial linear model with first-order autoregressive errors belonging to the class of scale mixtures of normal (SMN) distributions. The proposed model provides a useful generalization of the symmetrical linear regression model with independent errors, since the error distribution covers both correlated and heavy-tailed distributions, and it has a convenient hierarchical representation that allows an easy implementation of a Markov chain Monte Carlo (MCMC) scheme. In order to examine the robustness of this distribution against outlying and influential observations, we present Bayesian case deletion influence diagnostics based on the Kullback-Leibler (K-L) divergence. The proposed methodology is applied to the Cuprum Company monthly returns.
NASA Technical Reports Server (NTRS)
Butler, Doug; Bauman, David; Johnson-Throop, Kathy
2011-01-01
The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of medical risk data used to update the ISS PRA Model. The IMM has had limited validation and an initial characterization of maturity has been completed using NASA STD 7009 Standard for Models and Simulation. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation, and that IMM output format and content meets the needs of the ISS Risk Management Office and ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by IMM.
Stability of the normal vacuum in multi-Higgs-doublet models
Barroso, A.; Ferreira, P. M.; Santos, R.; Silva, Joao P.
2006-10-15
We show that the vacuum structure of a generic multi-Higgs-doublet model shares several important features with the vacuum structure of the two and three Higgs-doublet model. In particular, one can still define the usual charge breaking, spontaneous CP breaking, and normal (charge and CP preserving) stationary points. We analyze the possibility of charge or spontaneous CP breaking by studying the relative depth of the potential in each of the possible stationary points.
Takam, Rungdham; Bezak, Eva; Yeoh, Eric E.; Marcu, Loredana
2010-09-15
Purpose: Normal tissue complication probabilities (NTCPs) of the rectum, bladder, urethra, and femoral heads following several radiation treatment techniques for prostate cancer were evaluated by applying the relative seriality and Lyman models. Methods: Model parameters from the literature were used in this evaluation. The treatment techniques included external (standard fractionated, hypofractionated, and dose-escalated) three-dimensional conformal radiotherapy (3D-CRT), low-dose-rate (LDR) brachytherapy (I-125 seeds), and high-dose-rate (HDR) brachytherapy (Ir-192 source). Dose-volume histograms (DVHs) of the rectum, bladder, and urethra retrieved from the corresponding treatment planning systems were converted to biological effective dose-based and equivalent dose-based DVHs, respectively, in order to account for differences in radiation treatment modality and fractionation schedule. Results: Results indicated that with hypofractionated 3D-CRT (20 fractions of 2.75 Gy/fraction delivered five times/week to a total dose of 55 Gy), NTCPs of the rectum, bladder, and urethra were less than those for standard fractionated 3D-CRT using a four-field technique (32 fractions of 2 Gy/fraction delivered five times/week to a total dose of 64 Gy) and dose-escalated 3D-CRT. Rectal and bladder NTCPs (5.2% and 6.6%, respectively) following the dose-escalated four-field 3D-CRT (2 Gy/fraction to a total dose of 74 Gy) were the highest among the analyzed treatment techniques. The average NTCPs for the rectum and urethra were 0.6% and 24.7% for LDR brachytherapy and 0.5% and 11.2% for HDR brachytherapy. Conclusions: Although the brachytherapy techniques delivered larger equivalent doses to normal tissues, the corresponding NTCPs were lower than those of the external beam techniques, except for the urethra, because much smaller volumes were irradiated to higher doses. Among the analyzed normal tissues, the femoral heads were found to have the lowest probability of complications, as most of their volume was irradiated to lower doses.
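The Lyman model applied above has a compact closed form: NTCP = Φ((gEUD − TD50)/(m·TD50)), with the generalized equivalent uniform dose computed from the DVH. A minimal sketch, assuming illustrative (not study-fitted) organ parameters TD50, m and volume-effect exponent n:

```python
import math

def geud(doses, volumes, n):
    """Generalized EUD from a differential DVH: (sum_i v_i * D_i^(1/n))^n,
    where the fractional volumes v_i sum to 1."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lyman_ntcp(eud, td50, m):
    """Lyman (LKB) model: NTCP = Phi((EUD - TD50) / (m * TD50))."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical rectum-like DVH and parameters, for demonstration only
doses = [30.0, 50.0, 70.0]      # Gy
volumes = [0.5, 0.3, 0.2]       # fractional volumes
ntcp = lyman_ntcp(geud(doses, volumes, n=0.09), td50=76.9, m=0.13)
print(ntcp)
```

A small n makes gEUD track the hottest DVH bins, which is why serial organs such as the rectum are driven by their high-dose tail; at gEUD = TD50 the model returns exactly 0.5 by construction.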
The July 17, 2006 Java Tsunami: Tsunami Modeling and the Probable Causes of the Extreme Run-up
NASA Astrophysics Data System (ADS)
Kongko, W.; Schlurmann, T.
2009-04-01
On 17 July 2006, an earthquake of magnitude Mw 7.8 off the south coast of west Java, Indonesia, generated a tsunami that affected over 300 km of the south Java coastline and killed more than 600 people. Observed tsunami heights and field measurements of run-up were uniformly scattered at approximately 5 to 7 m along a 200 km coastal stretch; remarkably, a locally focused tsunami run-up height exceeding 20 m was observed at Nusakambangan Island. Within the framework of the German Indonesia Tsunami Early Warning System (GITEWS) project, a high-resolution near-shore bathymetric survey using a multi-beam echo-sounder has recently been conducted. Additional geodata have been collected using the Intermap Technologies STAR-4 airborne interferometric SAR data acquisition system at a 5 m ground sample distance in order to establish a highly detailed Digital Terrain Model (DTM). This paper describes the outcome of tsunami modelling approaches using high-resolution bathymetry and topography data as part of a case study in Cilacap, Indonesia, and medium-resolution data for other areas along the coastline of south Java Island. By means of two different seismic deformation models to mimic the tsunami source, a numerical code based on the 2D nonlinear shallow water equations is used to simulate probable tsunami run-up scenarios. Several model tests are performed, and virtual gauge points offshore, near-shore, at the coastline, and for tsunami run-up on the coast are collected. For validation, the model results are compared with field observations and sea level data recorded at several tide gauge stations. The performance of the numerical simulations and correlations with observed field data are highlighted, and probable causes for the extreme wave heights and run-ups are outlined.
An all-timescales rainfall probability distribution
NASA Astrophysics Data System (ADS)
Papalexiou, S. M.; Koutsoyiannis, D.
2009-04-01
The selection of a probability distribution for rainfall intensity at many different timescales simultaneously is of primary interest and importance, as hydraulic design typically depends strongly on the choice of rainfall model. It is well known that the rainfall distribution may have a long tail and is highly skewed at fine timescales, and that it tends to normality as the timescale increases. This behaviour, explained by the maximum entropy principle (and, for large timescales, also by the central limit theorem), indicates that the construction of a "universal" probability distribution, capable of adequately describing rainfall at all timescales, is a difficult task. A search of the hydrological literature confirms this argument, as many different distributions have been proposed as appropriate models for different timescales, or even for the same timescale, such as the Normal, Skew-Normal, two- and three-parameter Log-Normal, Log-Normal mixtures, Generalized Logistic, Pearson Type III, Log-Pearson Type III, Wakeby, Generalized Pareto, Weibull, three- and four-parameter Kappa distributions, and many more. Here we study a single flexible four-parameter distribution for rainfall intensity (the JH distribution) and derive its basic statistics. This distribution incorporates many other well-known distributions as special cases and is capable of describing rainfall over a great range of timescales. Furthermore, we demonstrate its excellent fitting performance on various rainfall samples from different areas and for timescales varying from sub-hourly to annual.
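The tendency toward normality under aggregation invoked above is easy to verify numerically: a heavy-tailed, skewed fine-scale sample loses skewness when summed to a coarser timescale. A generic illustration (a Pareto stand-in for fine-scale intensities, not the JH distribution itself):

```python
import numpy as np

rng = np.random.default_rng(42)

def skewness(x):
    """Sample coefficient of skewness: m3 / m2^(3/2)."""
    x = np.asarray(x, float)
    c = x - x.mean()
    return (c ** 3).mean() / (c ** 2).mean() ** 1.5

# "Hourly" intensities: heavy-tailed and strongly right-skewed
hourly = rng.pareto(5.0, size=24 * 10000)
# Aggregate to "daily" totals: closer to normal by the central limit theorem
daily = hourly.reshape(-1, 24).sum(axis=1)

print(skewness(hourly), skewness(daily))
```

For sums of n independent values, skewness shrinks roughly like 1/sqrt(n), which is the quantitative form of the drift toward normality described in the abstract.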
NASA Technical Reports Server (NTRS)
1995-01-01
The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices, such as extinction, blowoff limits, and emissions predictions, because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, the mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy is determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation, with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations.
Normal fault growth above pre-existing structures: insights from discrete element modelling
NASA Astrophysics Data System (ADS)
Wrona, Thilo; Finch, Emma; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Phillips, Thomas
2016-04-01
In extensional systems, pre-existing structures such as shear zones may affect the growth, geometry and location of normal faults. Recent seismic reflection-based observations from the North Sea suggest that shear zones not only localise deformation in the host rock, but also in the overlying sedimentary succession. While pre-existing weaknesses are known to localise deformation in the host rock, their effect on deformation in the overlying succession is less well understood. Here, we use 3-D discrete element modelling to determine if and how kilometre-scale shear zones affect normal fault growth in the overlying succession. Discrete element models use a large number of interacting particles to describe the dynamic evolution of complex systems, and the technique has therefore been applied to fault and fracture growth in a variety of geological settings. We model normal faulting by extending a 60×60×30 km crustal rift-basin model, including brittle and ductile interactions and gravitational and isostatic forces, by 30%. An inclined plane of weakness representing a pre-existing shear zone is introduced in the lower section of the upper brittle layer at the start of the experiment. The length, width, orientation and dip of the weak zone are systematically varied between experiments to test how these parameters control the geometric and kinematic development of the overlying normal fault systems. Consistent with our seismic reflection-based observations, our results show that strain is indeed localised in and above these weak zones. In the lower brittle layer, normal faults nucleate, as expected, within the zone of weakness and control the initiation and propagation of neighbouring faults. Above this, normal faults nucleate throughout the overlying strata, where their orientations are strongly influenced by the underlying zone of weakness. These results challenge the notion that overburden normal faults simply form due to reactivation and upward propagation of pre-existing structures.
Widesott, Lamberto; Pierelli, Alessio; Fiorino, Claudio; Lomax, Antony J.; Amichetti, Maurizio; Cozzarini, Cesare; Soukup, Martin; Schneider, Ralf; Hug, Eugen; Di Muzio, Nadia; Calandrino, Riccardo; Schwarz, Marco
2011-08-01
Purpose: To compare intensity-modulated proton therapy (IMPT) and helical tomotherapy (HT) treatment plans for high-risk prostate cancer (HRPCa) patients. Methods and Materials: The plans of 8 patients with HRPCa treated with HT were compared with IMPT plans using a two-field quasi-lateral setup (−100°; 100°) optimized with the Hyperion treatment planning system. Both techniques were optimized to simultaneously deliver 74.2 Gy/Gy(RBE) (relative biologic effectiveness) in 28 fractions to planning target volumes (PTVs) 3-4 (P + proximal seminal vesicles), 65.5 Gy/Gy(RBE) to PTV2 (distal seminal vesicles and rectum/prostate overlap), and 51.8 Gy/Gy(RBE) to PTV1 (pelvic lymph nodes). Normal tissue complication probability (NTCP) calculations were performed for the rectum, and the generalized equivalent uniform dose (gEUD) was estimated for the bowel cavity, penile bulb and bladder. Results: Slightly better PTV coverage and homogeneity of the target dose distribution were found with IMPT: the percentage of PTV volume receiving ≥95% of the prescribed dose (V95%) was on average >97% in HT and >99% in IMPT. The conformity indexes were significantly lower for protons than for photons, and there was a statistically significant reduction of the IMPT dosimetric parameters, up to 50 Gy/Gy(RBE) for the rectum and bowel and 60 Gy/Gy(RBE) for the bladder. The NTCP values for the rectum were higher in HT for all sets of parameters, but the gain was small and only in a few cases statistically significant. Conclusions: Comparable PTV coverage was observed. Based on NTCP calculations, IMPT is expected to allow a small reduction in rectal toxicity, and a significant dosimetric gain with IMPT was observed in all OARs, both in the medium-dose and in the low-dose range.
Duffy, S; Schaffner, D W
2001-05-01
Outbreaks of foodborne illness from apple cider have prompted research on the survival of Escherichia coli O157:H7 in this food. Published results vary widely, potentially due to differences in E. coli O157:H7 strains, enumeration media, and other experimental considerations. We developed probability distribution functions for the change in concentration of E. coli O157:H7 (log CFU/day) in cider using data from scientific publications for use in a quantitative risk assessment. Six storage conditions (refrigeration [4 to 5 degrees C]; temperature abuse [6 to 10 degrees C]; room temperature [20 to 25 degrees C]; refrigerated with 0.1% sodium benzoate, 0.1% potassium sorbate, or both) were modeled. E. coli survival rate data for all three unpreserved cider storage conditions were highly peaked, and these data were fit to logistic distributions: ideal refrigeration, logistic (-0.061, 0.13); temperature abuse, logistic (-0.0982, 0.23); room temperature, logistic (-0.1, 0.29) and uniform (-4.3, -1.8), to model the very small chance of extremely high log CFU reductions. There were fewer published studies on refrigerated, preserved cider, and these smaller data sets were modeled with beta (4.27, 2.37) x 2.2 - 1.6, normal (-0.2, 0.13), and gamma (1.45, 0.6) distributions, respectively. Simulations were run to show the effect of storage on E. coli O157:H7 during the shelf life of apple cider. Under every storage condition, with and without preservatives, there was an overall decline in E. coli O157:H7 populations in cider, although a small fraction of the time a slight increase was seen.
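The fitted distributions above can be used directly in a stochastic shelf-life simulation. A sketch for the refrigerated, unpreserved condition, drawing each day's change in log CFU from the reported logistic(−0.061, 0.13) distribution (the 14-day shelf life and trial count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_storage(days, loc, scale, n_trials=10000):
    """Total change in log CFU of E. coli O157:H7 over a storage period,
    drawing each day's change independently from the fitted logistic
    distribution (loc, scale in log CFU/day)."""
    daily = rng.logistic(loc, scale, size=(n_trials, days))
    return daily.sum(axis=1)

# Refrigeration (4-5 degrees C): logistic(-0.061, 0.13) log CFU/day
totals = simulate_storage(days=14, loc=-0.061, scale=0.13)
print("mean change over 14 d:", totals.mean())
print("fraction of trials with net growth:", (totals > 0).mean())
```

The mean decline is roughly 14 × (−0.061) ≈ −0.85 log CFU, while a minority of trials still show a slight net increase, mirroring the abstract's finding of an overall decline with occasional small growth.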
Computer modelling of bone's adaptation: the role of normal strain, shear strain and fluid flow.
Tiwari, Abhishek Kumar; Prasad, Jitendra
2017-04-01
Bone loss is a serious health problem. In vivo studies have found that mechanical stimulation may inhibit bone loss as elevated strain in bone induces osteogenesis, i.e. new bone formation. However, the exact relationship between mechanical environment and osteogenesis is less clear. Normal strain is considered as a prime stimulus of osteogenic activity; however, there are some instances in the literature where osteogenesis is observed in the vicinity of minimal normal strain, specifically near the neutral axis of bending in long bones. It suggests that osteogenesis may also be induced by other or secondary components of mechanical environment such as shear strain or canalicular fluid flow. As it is evident from the literature, shear strain and fluid flow can be potent stimuli of osteogenesis. This study presents a computational model to investigate the roles of these stimuli in bone adaptation. The model assumes that bone formation rate is roughly proportional to the normal, shear and fluid shear strain energy density above their osteogenic thresholds. In vivo osteogenesis due to cyclic cantilever bending of a murine tibia has been simulated. The model predicts results close to experimental findings when normal strain, and shear strain or fluid shear were combined. This study also gives a new perspective on the relation between osteogenic potential of micro-level fluid shear and that of macro-level bending shear. Attempts to establish such relations among the components of mechanical environment and corresponding osteogenesis may ultimately aid in the development of effective approaches to mitigating bone loss.
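The model's core assumption, a formation rate proportional to each stimulus component above its osteogenic threshold, can be sketched as a simple ramp combination (all thresholds and gains here are hypothetical, not the paper's fitted values):

```python
def formation_rate(normal_sed, shear_sed, fluid_sed,
                   thresholds=(1.0, 1.0, 1.0), gains=(1.0, 1.0, 1.0)):
    """New-bone formation rate as a weighted sum of the normal-strain,
    shear-strain and fluid-shear strain energy densities (SED) above
    their osteogenic thresholds (ramp form: contributions below the
    threshold are zero)."""
    stimuli = (normal_sed, shear_sed, fluid_sed)
    return sum(g * max(0.0, s - t)
               for s, t, g in zip(stimuli, thresholds, gains))

# Near the neutral axis of bending: little normal strain, yet shear and
# canalicular fluid flow can still drive a nonzero osteogenic response
rate = formation_rate(0.2, 1.8, 2.5)
print(rate)
```

With the normal-strain stimulus below threshold, the predicted rate is carried entirely by the shear and fluid terms, which is how the model accounts for osteogenesis observed near the neutral axis.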
A single period inventory model with a truncated normally distributed fuzzy random variable demand
NASA Astrophysics Data System (ADS)
Dey, Oshmita; Chakraborty, Debjani
2012-03-01
In this article, a single-period inventory model is considered in a mixed fuzzy random environment, with the annual customer demand assumed to be a fuzzy random variable. Since assuming demand to be normally distributed implies that some probability mass is automatically assigned to negative demand, the model has been developed for two cases, using the non-truncated and the truncated normal distributions. The problem has been formulated to represent scenarios where the aim of the decision-maker is to determine the optimal order quantity such that the expected profit is greater than or equal to a predetermined target. This 'greater than or equal to' inequality has been modelled as a fuzzy inequality, and a methodology has been developed to this effect. The methodology is illustrated through a numerical example.
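Setting the fuzzy machinery aside, the effect of truncating demand at zero can be sketched with a crisp Monte Carlo newsvendor calculation (all prices and demand parameters below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

def truncated_normal(mean, sd, size):
    """Demand draws from N(mean, sd) truncated at zero via rejection
    sampling, so no probability mass falls on negative demand."""
    out = np.empty(0)
    while out.size < size:
        x = rng.normal(mean, sd, size)
        out = np.concatenate([out, x[x >= 0]])
    return out[:size]

def expected_profit(q, demand, price=10.0, cost=6.0, salvage=2.0):
    """Single-period (newsvendor) expected profit for order quantity q:
    revenue on units sold, salvage on leftovers, minus procurement cost."""
    sold = np.minimum(q, demand)
    return (price * sold + salvage * (q - sold) - cost * q).mean()

demand = truncated_normal(mean=100.0, sd=60.0, size=100000)
print(expected_profit(110.0, demand))
```

With a large coefficient of variation, the truncation shifts the demand mean upward relative to the plain normal, which is exactly the "negative demand information" distortion the two-case treatment in the article addresses.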
Glenn E McCreery; Keith G Condie
2006-09-01
The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Power (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. The present document addresses experimental modeling of flow and thermal mixing phenomena of importance during normal or reduced power operation and during a loss of forced reactor cooling (pressurized conduction cooldown) scenario. The objectives of the experiments are to (1) provide benchmark data for the assessment and improvement of codes proposed for NGNP designs and safety studies, and (2) obtain a better understanding of the related phenomena, behavior and needs. Physical models of the VHTR vessel upper and lower plenums, which use various working fluids to scale the phenomena of interest, are described. The models may be used both to simulate natural convection conditions during pressurized conduction cooldown and to simulate turbulent lower plenum flow during normal or reduced power operation.
The zoom lens of attention: Simulating shuffled versus normal text reading using the SWIFT model.
Schad, Daniel J; Engbert, Ralf
2012-04-01
Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading.
NASA Astrophysics Data System (ADS)
Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.
2006-12-01
Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes the USC High Performance Computer Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probabilistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA) and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for
Asymptotically Normal and Efficient Estimation of Covariate-Adjusted Gaussian Graphical Model
Chen, Mengjie; Ren, Zhao; Zhao, Hongyu; Zhou, Harrison
2015-01-01
A tuning-free procedure is proposed to estimate the covariate-adjusted Gaussian graphical model. For each finite subgraph, this estimator is asymptotically normal and efficient. As a consequence, a confidence interval can be obtained for each edge. The procedure enjoys easy implementation and efficient computation through parallel estimation on subgraphs or edges. We further apply the asymptotic normality result to perform support recovery through edge-wise adaptive thresholding. This support recovery procedure is called ANTAC, standing for Asymptotically Normal estimation with Thresholding after Adjusting Covariates. ANTAC outperforms other methodologies in the literature in a range of simulation studies. We apply ANTAC to identify gene-gene interactions using an eQTL dataset. Our result achieves better interpretability and accuracy in comparison with CAMPE. PMID:27499564
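The thresholding step described above can be sketched abstractly: given an asymptotically normal edge estimate and its standard error, an edge enters the recovered support when its standardized magnitude clears a multiplicity-adjusted normal-tail threshold. This is a generic sketch of edge-wise thresholding, not the ANTAC implementation:

```python
import math

def recover_support(estimates, std_errors, n_edges, alpha=0.05):
    """Edge-wise support recovery: keep edge i when |est_i| / se_i exceeds
    a conservative Gaussian-tail threshold for n_edges comparisons
    (sqrt(2 log(n/alpha)), a standard Bonferroni-style bound)."""
    z = math.sqrt(2.0 * math.log(n_edges / alpha))
    return [abs(e) / s > z for e, s in zip(estimates, std_errors)]

# Four hypothetical edge estimates with their standard errors
est = [0.90, 0.02, -0.45, 0.01]
se = [0.10, 0.05, 0.10, 0.04]
print(recover_support(est, se, n_edges=4))  # → [True, False, True, False]
```

The same standardized statistics yield the per-edge confidence intervals mentioned in the abstract: est ± z·se covers the true edge weight with the corresponding simultaneous confidence level.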
NASA Astrophysics Data System (ADS)
Laze, Kuenda
2016-08-01
Modelling of land use may be improved by incorporating the results of species distribution modelling, and species distribution modelling may in turn be improved by including a process-based variable of forest cover change or the accessibility of forest from human settlements. This work presents the results of spatially explicit analyses of changes in forest cover from 2000 to 2007 using Geographically Weighted Regression (GWR), and of the distributions of the protected species Lynx lynx martinoi and Ursus arctos using Generalized Linear Models (GLMs). The methodological approach is to search separately for a parsimonious model of forest cover change and of species distribution for the entire territory of Albania. The findings show that modelling of land change and of species distribution is indeed improved, as indicated by model selection with the corrected Akaike Information Criterion. These results provide evidence of the effect of process-based variables on species distribution modelling and on its performance, and give an example of incorporating estimated probabilities of species occurrence in land change modelling.
NASA Astrophysics Data System (ADS)
Swearingen, Michelle Elaine
2003-10-01
This thesis presents an analytic model, developed in cylindrical coordinates, for the scattering of a spherical wave off a semi-infinite right cylinder placed normal to a ground surface. The model is developed to simulate a single tree and serves as a first step toward a model for estimating attenuation in a forest based on scattering from individual tree trunks. Comparisons are made to the plane wave case, the transparent cylinder case, and the rigid and soft ground cases as a method of theoretically verifying the model; agreement is excellent for these benchmark cases. Model sensitivity to five parameters is determined, which aids in error analysis, particularly when comparing the model results to experimental data, and offers insight into the inner workings of the model. An experiment was performed to collect real-world data on scattering from a cylinder normal to a ground surface. The data from the experiment are analyzed with a transfer function method into frequency and impulse responses, and the model results are compared to the experimental data.
Binder, Harald
2014-07-01
This is a discussion of the following papers: "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory" by Jochen Kruppa, Yufeng Liu, Gérard Biau, Michael Kohler, Inke R. König, James D. Malley, and Andreas Ziegler; and "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications" by Jochen Kruppa, Yufeng Liu, Hans-Christian Diener, Theresa Holste, Christian Weimar, Inke R. König, and Andreas Ziegler.
Normal and diseased personal eye modeling using age-appropriate lens parameters
Chen, Ying-Ling; Shi, L.; Lewis, J. W. L.; Wang, M.
2012-01-01
Personalized eye modeling of normal and diseased eye conditions is attractive due to the recent availability of detailed ocular measurements in clinical environments and the promise of medical and industrial applications. In customized modeling, the optical properties of the crystalline lens, including the gradient refractive index, the lens bio-geometry and orientation, are typically assigned average values from the literature, since they are usually not clinically available. Although the clinically measured wavefront aberration can be reproduced through optical optimization with the lens parameters as variables, the optimized lens biometry and orientation often end up at the edges of the statistical distribution. Without an effective validation of these models today, the fidelity of the final lens (and therefore the model) remains questionable. To develop a more reliable customized model without detailed lens information, we incorporate age-appropriate lens parameters as the initial condition of the optical optimization. A biconic lens optimization was first performed to provide a correct lens profile for accurate lower-order aberration, followed by a wavefront optimization. Clinical subjects were selected from all ages with both normal and diseased corneal and refractive conditions: 19 ametropic eyes (+4 D to −11 D) and 16 keratoconus eyes (mild to moderate, with cylinder 0.25 to 6 D) were modeled. An age- and gender-corrected refractive index was evaluated. The final models attained lens shapes comparable to the statistical distribution for their age. PMID:22714237
NASA Astrophysics Data System (ADS)
Wei, W. B.; Tan, L.; Jia, M. Q.; Pan, Z. K.
2017-01-01
The variational level set method is one of the main methods of image segmentation. However, because signed distance functions used as level sets must be maintained through numerical remedies or additional techniques during the evolution, it is not very efficient. In this paper, a normal vector projection method for image segmentation using the Chan-Vese model is proposed. An equivalent formulation of the Chan-Vese model is used by exploiting the properties of binary level set functions and combining them with the concept of convex relaxation. A threshold method and a projection formula are applied in the implementation. This avoids the above problems and obtains a globally optimal solution. Experimental results on both synthetic and real images validate the effectiveness of the proposed normal vector projection method and show advantages over traditional algorithms in terms of computational efficiency.
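The appeal of the binary formulation can be sketched with a toy special case: alternate between updating the two region means and reassigning each pixel by a pointwise threshold, with no signed distance function or re-initialization. This sketch ignores the contour-length penalty and is an illustration of the idea, not the paper's projection formula:

```python
import numpy as np

def chan_vese_binary(image, iters=20):
    """Two-phase piecewise-constant segmentation with a binary label field:
    alternate between updating the region means c1, c2 and assigning each
    pixel to the nearer mean (length-penalty-free special case of the
    Chan-Vese energy)."""
    phi = image > image.mean()                        # initial binary labels
    for _ in range(iters):
        c1 = image[phi].mean() if phi.any() else 0.0
        c2 = image[~phi].mean() if (~phi).any() else 0.0
        phi = (image - c1) ** 2 < (image - c2) ** 2   # threshold step
    return phi

# Synthetic test image: bright square on a dark background, mild noise
rng = np.random.default_rng(3)
img = rng.normal(0.0, 0.05, (32, 32))
img[8:24, 8:24] += 1.0
seg = chan_vese_binary(img)
print(bool(seg[16, 16]), bool(seg[0, 0]))  # → True False
```

Each threshold step decreases the fitting energy, so no gradient descent on a level set function is required for this reduced problem.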
NASA Technical Reports Server (NTRS)
Demoulin, P.; Forbes, T. G.
1992-01-01
A technique which incorporates both photospheric and prominence magnetic field observations is used to analyze the magnetic support of solar prominences in two dimensions. The prominence is modeled by a mass-loaded current sheet which is supported against gravity by magnetic fields from a bipolar source in the photosphere and a massless line current in the corona. It is found that prominence support can be achieved in three different kinds of configurations: an arcade topology with a normal polarity; a helical topology with a normal polarity; and a helical topology with an inverse polarity. In all cases the important parameter is the variation of the horizontal component of the prominence field with height. Adding a line current external to the prominence eliminates the nonsupport problem which plagues virtually all previous prominence models with inverse polarity.
Recognition of sine wave modeled consonants by normal hearing and hearing-impaired individuals
NASA Astrophysics Data System (ADS)
Balachandran, Rupa
Sine wave modeling is a parametric tool for representing the speech signal with a limited number of sine waves. It involves replacing the peaks of the speech spectrum with sine waves and discarding the rest of the lower-amplitude components during synthesis. It has the potential to be used as a speech enhancement technique for hearing-impaired adults. The present study answers the following basic questions: (1) Are sine wave synthesized speech tokens more intelligible than natural speech tokens? (2) What is the effect of varying the number of sine waves on consonant recognition in quiet? (3) What is the effect of varying the number of sine waves on consonant recognition in noise? (4) How does sine wave modeling affect the transmission of speech features in quiet and in noise? (5) Are there differences in recognition performance between normal hearing and hearing-impaired listeners? VCV syllables representing 20 consonants (/p/, /t/, /k/, /b/, /d/, /g/, /f/, /theta/, /s/, /∫/, /v/, /z/, /t∫/, /dy/, /j/, /w/, /r/, /l/, /m/, /n/) in three vowel contexts (/a/, /i/, /u/) were modeled with 4, 8, 12, and 16 sine waves. A consonant recognition task was performed in quiet and in background noise (+10 dB and 0 dB SNR). Twenty hearing-impaired listeners and six normal hearing listeners were tested under headphones at their most comfortable listening level. The main findings were: (1) Recognition of unprocessed speech was better than that of sine wave modeled speech. (2) Asymptotic performance was reached with 8 sine waves in quiet for both normal hearing and hearing-impaired listeners. (3) Consonant recognition performance in noise improved with increasing number of sine waves. (4) As the number of sine waves was decreased, place information was lost first, followed by manner, and finally voicing. (5) Hearing-impaired listeners made more errors than normal hearing listeners, but there were no differences in the error patterns made by the two groups.
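The modeling step the study varies, keeping only the N strongest spectral peaks and resynthesizing, can be sketched with a single FFT frame. This is a simplified stand-in for frame-by-frame sinusoidal analysis of real speech, applied here to a synthetic three-component signal:

```python
import numpy as np

def sine_wave_model(signal, n_sines):
    """Crude sinusoidal model: keep the n_sines largest-magnitude
    positive-frequency components of the spectrum, zero the rest,
    and resynthesize by inverse FFT."""
    spec = np.fft.rfft(signal)
    keep = np.argsort(np.abs(spec))[-n_sines:]   # indices of strongest peaks
    pruned = np.zeros_like(spec)
    pruned[keep] = spec[keep]
    return np.fft.irfft(pruned, n=len(signal))

t = np.arange(512) / 512.0
speech_like = (np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 17 * t)
               + 0.25 * np.sin(2 * np.pi * 40 * t))
err1 = np.abs(speech_like - sine_wave_model(speech_like, 1)).max()
err4 = np.abs(speech_like - sine_wave_model(speech_like, 4)).max()
print(err1, err4)
```

With enough retained sine waves the reconstruction error collapses, mirroring the asymptotic recognition performance reported at 8 sine waves; with too few, whole components vanish, analogous to the loss of place cues at low sine-wave counts.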
Time Series Models with a Specified Symmetric Non-Normal Marginal Distribution.
1985-09-01
…processes with a specified non-Normal marginal distribution, Gastwirth and Wolff [Ref. 13] had derived a solution to the linear additive first-order… of Lewis, Orav and Uribe [Ref. 15]. The least squares estimation theory is derived around the concept of a linearized residual. Asymptotic properties… linear process of Gastwirth and Wolff [Ref. 13], called the LAR(1) process. The LDAR(1) model produces an {X_t} sequence using the first-order…
Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki
2014-05-01
A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log-log plot, and that a Monte Carlo simulation using the RCD probability estimated from this linear relationship closely reproduces the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
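The branching-process view of colony formation can be sketched in a few lines: each cell in each generation either undergoes RCD or divides, and a colony is scored as surviving only if it passes a clonogenic size threshold. The constant per-generation RCD probability and the threshold of 50 cells below are illustrative assumptions, not the paper's fitted values.

```python
import random

rng = random.Random(1)

def grow_colony(p_rcd, generations=10):
    """Grow one colony from a single cell: each generation, every
    cell either suffers reproductive cell death (probability p_rcd)
    or divides into two daughter cells."""
    cells = 1
    for _ in range(generations):
        survivors = sum(1 for _ in range(cells) if rng.random() > p_rcd)
        cells = 2 * survivors
        if cells == 0:          # colony aborted
            break
    return cells

def surviving_fraction(p_rcd, n_colonies=2000, threshold=50):
    """Fraction of colonies that pass the clonogenic threshold."""
    return sum(grow_colony(p_rcd) >= threshold
               for _ in range(n_colonies)) / n_colonies
```

Recording the final sizes of the colonies that fall below the threshold gives the abortive colony size distribution whose log-log slope the study estimates.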
Pulsatile flows and wall-shear stresses in models simulating normal and stenosed aortic arches
NASA Astrophysics Data System (ADS)
Huang, Rong Fung; Yang, Ten-Fang; Lan, Y.-K.
2010-03-01
Pulsatile aqueous glycerol solution flows in the models simulating normal and stenosed human aortic arches are measured by means of particle image velocimetry. Three transparent models were used: normal, 25% stenosed, and 50% stenosed aortic arches. The Womersley parameter, Dean number, and time-averaged Reynolds number are 17.31, 725, and 1,081, respectively. The Reynolds numbers based on the peak velocities of the normal, 25% stenosed, and 50% stenosed aortic arches are 2,484, 3,456, and 3,931, respectively. The study presents the temporal/spatial evolution processes of the flow pattern, velocity distribution, and wall-shear stress during the systolic and diastolic phases. It is found that the flow pattern evolving in the central plane of normal and stenosed aortic arches exhibits (1) a separation bubble around the inner arch, (2) a recirculation vortex around the outer arch wall upstream of the junction of the brachiocephalic artery, (3) an accelerated main stream around the outer arch wall near the junctions of the left carotid and the left subclavian arteries, and (4) vortices around the entrances of the three main branches. The study identifies and discusses the flow physics underlying the formation of these features. The oscillating wall-shear stress distributions are closely related to the featured flow structures. On the outer wall of normal and slightly stenosed aortas, large wall-shear stresses appear in the regions upstream of the junction of the brachiocephalic artery as well as the corner near the junctions of the left carotid artery and the left subclavian artery. On the inner wall, the largest wall-shear stress appears in the region where the boundary layer separates.
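The dimensionless groups quoted above have standard definitions that are easy to compute: Womersley α = (D/2)√(ω/ν), Re = UD/ν, and De = Re√(D/2Rc) for a curved tube. A sketch using those textbook formulas; the numeric inputs below are illustrative, not this study's model dimensions.

```python
import math

def womersley(diameter, pulse_freq_hz, nu):
    """Womersley parameter: alpha = (D/2) * sqrt(omega / nu)."""
    omega = 2.0 * math.pi * pulse_freq_hz
    return (diameter / 2.0) * math.sqrt(omega / nu)

def reynolds(velocity, diameter, nu):
    """Reynolds number: Re = U * D / nu."""
    return velocity * diameter / nu

def dean(re, diameter, bend_radius):
    """Dean number for a curved tube: De = Re * sqrt(D / (2 * Rc))."""
    return re * math.sqrt(diameter / (2.0 * bend_radius))

# illustrative aorta-like inputs (hypothetical values)
re = reynolds(velocity=1.0, diameter=0.02, nu=2.0e-5)   # -> 1000.0
```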
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
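Under one common convention, a log-normal number distribution of molecular weight gives Mn = exp(μ + σ²/2) and Mw = exp(μ + 3σ²/2), so the mean and standard deviation of log10(MW) follow directly from the two measured averages. A sketch; the convention and the example Mn, Mw values are assumptions, though the outputs fall inside the ranges quoted above.

```python
import math

def log10_params(mn, mw):
    """Mean and standard deviation of log10(MW) for a log-normal
    number distribution, from number- and weight-average MW:
    sigma^2 = ln(Mw/Mn), mu = ln(Mn) - sigma^2 / 2 (natural-log
    parameters, then converted to base 10)."""
    s2 = math.log(mw / mn)
    mu = math.log(mn) - s2 / 2.0
    ln10 = math.log(10.0)
    return mu / ln10, math.sqrt(s2) / ln10

# hypothetical fulvic-acid-like averages in daltons
mean10, sd10 = log10_params(mn=700.0, mw=1200.0)
```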
Holohean, Alice M; Magleby, Karl L
2011-05-11
Presynaptic short-term plasticity (STP) dynamically modulates synaptic strength in a reversible manner on a timescale of milliseconds to minutes. For low basal vesicular release probability (prob0), four components of enhancement, F1 and F2 facilitation, augmentation (A), and potentiation (P), increase synaptic strength during repetitive nerve activity. For release rates that exceed the rate of replenishment of the readily releasable pool (RRP) of synaptic vesicles, depression of synaptic strength, observed as a rundown of postsynaptic potential amplitudes, can also develop. To understand the relationship between enhancement and depression at the frog (Rana pipiens) neuromuscular synapse, data obtained over a wide range of prob0 using patterned stimulation are analyzed with a hybrid model to reveal the components of STP. We find that F1, F2, A, P, and depletion of the RRP all contribute to STP during repetitive nerve activity at low prob0. As prob0 is increased by raising extracellular Ca2+, specific components of enhancement no longer contribute, with first P, then A, and then F2 becoming undetectable, even though F1 continues to enhance release. For levels of prob0 that lead to appreciable depression, only F1 and depletion of the RRP contribute to STP during rundown, and for low stimulation rates, F2 can also contribute. These observations place prob0-dependent limitations on which components of enhancement contribute to STP and suggest some fundamental mechanistic differences among the components. The presented model can serve as a tool to readily characterize the components of STP over wide ranges of prob0.
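The hybrid description above can be caricatured with a multiplicative model: each impulse releases prob0·(1+F1)(1+F2)(1+A)(1+P) of the remaining readily releasable pool, each enhancement component is incremented per impulse and decays exponentially between impulses, and the pool partially refills. The time constants, increments, and refill rate below are illustrative assumptions, not the fitted frog neuromuscular parameters.

```python
import math

# (decay tau in ms, per-impulse increment) -- assumed toy values
COMPONENTS = {"F1": (60.0, 0.8), "F2": (400.0, 0.12),
              "A": (7000.0, 0.03), "P": (60000.0, 0.01)}

def run_train(prob0, n_impulses=20, isi_ms=20.0, pool=1000.0, refill=0.02):
    """Return the amount released by each impulse in a train."""
    levels = {k: 0.0 for k in COMPONENTS}
    rrp, released = pool, []
    for _ in range(n_impulses):
        enh = 1.0
        for level in levels.values():
            enh *= 1.0 + level                  # multiplicative enhancement
        amount = min(rrp, prob0 * enh * rrp)    # cannot exceed the pool
        released.append(amount)
        rrp = rrp - amount + refill * (pool - (rrp - amount))
        for k, (tau, inc) in COMPONENTS.items():
            levels[k] = (levels[k] + inc) * math.exp(-isi_ms / tau)
    return released
```

At low prob0 the enhancement components dominate and the train facilitates; at high prob0 depletion of the pool outruns replenishment and the amplitudes run down, mirroring the prob0-dependent behavior described above.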
Normal diffusion in crystal structures and higher-dimensional billiard models with gaps.
Sanders, David P
2008-12-01
We show, both heuristically and numerically, that three-dimensional periodic Lorentz gases (clouds of particles scattering off crystalline arrays of hard spheres) often exhibit normal diffusion, even when there are gaps through which particles can travel without ever colliding, i.e., when the system has an infinite horizon. This is the case provided that these gaps are not "too large," as measured by their dimension. The results are illustrated with simulations of a simple three-dimensional model having different types of diffusive regime and are then extended to higher-dimensional billiard models, which include hard-sphere fluids.
Human Normal Bronchial Epithelial Cells: A Novel In Vitro Cell Model for Toxicity Evaluation
Huang, Haiyan; Xia, Bo; Liu, Hongya; Li, Jie; Lin, Shaolin; Li, Tiyuan; Liu, Jianjun; Li, Hui
2015-01-01
Human normal cell-based systems are needed for drug discovery and toxicity evaluation. hTERT- or viral gene-transduced human cells are currently widely used for these studies, but such cells exhibit abnormal differentiation potential or abnormal responses to biological and chemical signals. In this study, we established human normal bronchial epithelial cells (HNBEC) using a defined primary epithelial cell culture medium without transduction of exogenous genes. This system may involve decreased IL-1 signaling and enhanced Wnt signaling in cells. Our data demonstrated that HNBEC exhibited a normal diploid karyotype. They formed well-defined spheres in matrigel 3D culture, while cancer cells (HeLa) formed disorganized aggregates. HNBEC cells possessed a normal cellular response to DNA damage and did not induce tumor formation in vivo in xenograft assays. Importantly, we assessed the potential of these cells in toxicity evaluation of common occupational toxicants that may affect the human respiratory system. Our results demonstrated that HNBEC cells are more sensitive than 16HBE cells (an SV40-immortalized human bronchial epithelial cell line) to exposure to 10-20 nm SiO2, Cr(VI) and B(a)P. This study provides a novel in vitro human cell-based model for toxicity evaluation and may also facilitate studies in basic cell biology, cancer biology and drug discovery. PMID:25861018
FLUID-STRUCTURE INTERACTION MODELS OF THE MITRAL VALVE: FUNCTION IN NORMAL AND PATHOLOGIC STATES
Kunzelman, K. S.; Einstein, Daniel R.; Cochran, R. P.
2007-08-29
Successful mitral valve repair is dependent upon a full understanding of normal and abnormal mitral valve anatomy and function. Computational analysis is one such method that can be applied to simulate mitral valve function in order to analyze the roles of individual components and evaluate proposed surgical repair. We developed the first three-dimensional, finite element (FE) computer model of the mitral valve including leaflets and chordae tendineae; however, one critical aspect that has been missing until the last few years was the evaluation of fluid flow, as coupled to the function of the mitral valve structure. We present here our latest results for normal function and specific pathologic changes using a fluid-structure interaction (FSI) model. Normal valve function was first assessed, followed by pathologic material changes in collagen fiber volume fraction, fiber stiffness, fiber splay, and isotropic stiffness. Leaflet and chordal stress and strain, and papillary muscle force were determined. In addition, transmitral flow, time to leaflet closure, and heart valve sound were assessed. Model predictions in the normal state agreed well with a wide range of available in-vivo and in-vitro data. Further, pathologic material changes that preserved the anisotropy of the valve leaflets were found to preserve valve function. By contrast, material changes that altered the anisotropy of the valve were found to profoundly alter valve function. The addition of blood flow and an experimentally driven microstructural description of mitral tissue represent significant advances in computational studies of the mitral valve, which allow further insight to be gained. This work is another building block in the foundation of a computational framework to aid in the refinement and development of a truly noninvasive diagnostic evaluation of the mitral valve. Ultimately, it represents the basis for simulation of surgical repair of pathologic valves in a clinical and educational setting.
NASA Astrophysics Data System (ADS)
Neupauer, R. M.; Lin, R.
2003-12-01
When contamination is observed in an aquifer, the source of contamination is often unknown. We present an approach that can be used to identify sources of contamination based on the observed distribution (spatial, temporal, or both) of the contaminant plume. Using backward-in-time advection dispersion theory, we first obtain a backward location probability distribution that describes the possible prior positions of the contamination. This distribution is independent of the measured concentrations of the contaminant. Next, we condition the probability distribution on the measured concentrations, resulting in an improvement in the accuracy and a reduction in the variance of the backward location probability distribution. We illustrate the approach for a reactive solute (first-order decay), and demonstrate its applicability for identifying possible source locations of a trichloroethylene plume at the Massachusetts Military Reservation.
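In one dimension with uniform velocity v and dispersion coefficient D, the unconditioned backward location probability at backward time τ is simply a Gaussian centered v·τ upgradient of the observation point, and conditioning on measurements is a Bayesian update of that prior. A sketch on a uniform grid; the Gaussian likelihood used below stands in for whatever weight the measured concentrations supply, and is an assumption.

```python
import numpy as np

def backward_location_pdf(x, x_well, v, D, tau):
    """Backward location probability density for 1-D advection-
    dispersion: a Gaussian with mean x_well - v*tau and variance
    2*D*tau (possible prior positions of observed contamination)."""
    mean, var = x_well - v * tau, 2.0 * D * tau
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def condition_on_data(x, prior, likelihood):
    """Bayes update on a uniform grid: posterior ~ prior * likelihood,
    renormalised to integrate to one."""
    post = prior * likelihood
    dx = x[1] - x[0]
    return post / (post.sum() * dx)

x = np.linspace(0.0, 100.0, 2001)
prior = backward_location_pdf(x, x_well=100.0, v=1.0, D=0.5, tau=50.0)
posterior = condition_on_data(x, prior, np.exp(-(x - 55.0) ** 2 / 18.0))
```

Multiplying in the measurement likelihood shrinks the variance of the backward location distribution, which is exactly the improvement in accuracy that the conditioning step described above provides.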
Modelling of the hygroelastic behaviour of normal and compression wood tracheids.
Joffre, Thomas; Neagu, R Cristian; Bardage, Stig L; Gamstedt, E Kristofer
2014-01-01
Compression wood conifer tracheids show swelling and stiffness properties that differ from those of normal wood, a difference with a practical function in the living plant: when a conifer shoot is moved from its vertical position, compression wood is formed in the lower part of the shoot. The growth rate of the compression wood is faster than in the upper part, pushing the shoot back toward vertical growth. The actuating and load-carrying function of the compression wood is addressed on the basis of the special ultrastructure and shape of its tracheids. As a first step, a quantitative model is developed to predict the difference in moisture-induced expansion and axial stiffness between normal wood and compression wood. The model is based on a state space approach using concentric cylinders with an anisotropic helical structure for each cell-wall layer, whose hygroelastic properties are in turn determined by a self-consistent concentric cylinder assemblage of the constituent wood polymers. The predicted properties compare well with experimental results found in the literature. Significant differences in both stiffness and hygroexpansion are found between normal and compression wood, primarily due to the large differences in microfibril angle and lignin content. On the basis of these numerical results, some functional arguments for the high microfibril angle, high lignin content and cylindrical structure of compression wood tracheids are supported.
A global shear velocity model of the mantle from normal modes and surface waves
NASA Astrophysics Data System (ADS)
Durand, S.; Debayle, E.; Ricard, Y. R.; Lambotte, S.
2013-12-01
We present a new global shear wave velocity model of the mantle based on the inversion of all published normal mode splitting functions and the large surface wave dataset measured by Debayle & Ricard (2012). Normal mode splitting functions and surface wave phase velocity maps are sensitive to lateral heterogeneities of elastic parameters (Vs, Vp, xi, phi, eta) and density. We first consider only spheroidal modes and Rayleigh waves and restrict the inversion to Vs, Vp and density. Although it is well known that Vs is the best resolved parameter, we also investigate whether our dataset allows us to extract additional information on density and/or Vp. We check whether the determination of the shear wave velocity is affected by the a priori choice of the crustal model (CRUST2.0 or 3SMAC) or by neglecting/coupling poorly resolved parameters. We include the major discontinuities at 400 and 670 km. Vertical smoothing is imposed through an a priori Gaussian covariance matrix on the model, and we discuss the effect of coupling/decoupling the inverted structure above and below the discontinuities. We finally discuss the large-scale structure of our model and its geodynamical implications regarding the amount of mass exchange between the upper and lower mantle.
The Manhattan Frame Model - Manhattan World Inference in the Space of Surface Normals.
Straub, Julian; Freifeld, Oren; Rosman, Guy; Leonard, John J; Fisher, John W
2017-02-01
Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches utilize these regularities via the restrictive, and rather local, Manhattan World (MW) assumption, which posits that every plane is perpendicular to one of the axes of a single coordinate system. The aforementioned regularities are especially evident in the surface normal distribution of a scene, where they manifest as orthogonally-coupled clusters. This motivates the introduction of the Manhattan-Frame (MF) model, which captures the notion of an MW in the space of surface normals (the unit sphere), and of two probabilistic MF models over this space. First, for a single MF we propose novel real-time MAP inference algorithms and evaluate their performance and their use in drift-free rotation estimation. Second, to capture the complexity of real-world scenes at a global scale, we extend the MF model to a probabilistic mixture of Manhattan Frames (MMF). For MMF inference we propose a simple MAP inference algorithm and an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that let us infer the unknown number of mixture components. We demonstrate the versatility of the MMF model and inference algorithm across several scales of man-made environments.
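The core of single-MF inference is assigning each unit surface normal to the closest of the six signed axes of a candidate frame and scoring the alignment. A minimal sketch of that assignment step (the rotation and normals are toy inputs; the real algorithms add noise models and real-time MAP optimization):

```python
import numpy as np

def mf_assignments(normals, R):
    """Assign each unit normal (rows of `normals`) to the nearest of
    the six signed Manhattan-frame axes (columns of R and -R), and
    return the axis indices plus the summed cosine alignment score."""
    axes = np.hstack([R, -R])                  # 3 x 6 signed axes
    dots = normals @ axes                      # N x 6 cosines
    idx = dots.argmax(axis=1)
    score = dots[np.arange(len(normals)), idx].sum()
    return idx, score

# axis-aligned normals match the identity frame perfectly
normals = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, 1.0]])
idx, score = mf_assignments(normals, np.eye(3))
```

Maximizing this alignment score over candidate rotations is one simple way to estimate a single MF; the mixture model repeats the idea with several frames at once.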
Implementation of Combined Feather and Surface-Normal Ice Growth Models in LEWICE/X
NASA Technical Reports Server (NTRS)
Velazquez, M. T.; Hansman, R. J., Jr.
1995-01-01
Experimental observations have shown that discrete rime ice growths called feathers, which grow in approximately the direction of water droplet impingement, play an important role in the growth of ice on accreting surfaces for some thermodynamic conditions. An improved physical model of ice accretion has been implemented in the LEWICE 2D panel-based ice accretion code maintained by the NASA Lewis Research Center. The LEWICE/X model of ice accretion explicitly simulates regions of feather growth within the framework of the LEWICE model. Water droplets impinging on an accreting surface are withheld from the normal LEWICE mass/energy balance and handled in a separate routine; ice growth resulting from these droplets is performed with enhanced convective heat transfer approximately along droplet impingement directions. An independent underlying ice shape is grown along surface normals using the unmodified LEWICE method. The resulting dual-surface ice shape models roughness-induced feather growth observed in icing wind tunnel tests. Experiments indicate that the exact direction of feather growth is dependent on external conditions. Data is presented to support a linear variation of growth direction with temperature and cloud water content. Test runs of LEWICE/X indicate that the sizes of surface regions containing feathers are influenced by initial roughness element height. This suggests that a previous argument that feather region size is determined by boundary layer transition may be incorrect. Simulation results for two typical test cases give improved shape agreement over unmodified LEWICE.
Kinetic modeling of hyperpolarized 13C 1-pyruvate metabolism in normal rats and TRAMP mice
NASA Astrophysics Data System (ADS)
Zierhut, Matthew L.; Yen, Yi-Fen; Chen, Albert P.; Bok, Robert; Albers, Mark J.; Zhang, Vickie; Tropp, Jim; Park, Ilwoo; Vigneron, Daniel B.; Kurhanewicz, John; Hurd, Ralph E.; Nelson, Sarah J.
2010-01-01
Purpose: To investigate metabolic exchange between 13C 1-pyruvate, 13C 1-lactate, and 13C 1-alanine in pre-clinical model systems using kinetic modeling of dynamic hyperpolarized 13C spectroscopic data and to examine the relationship between fitted parameters and dose-response. Materials and methods: Dynamic 13C spectroscopy data were acquired in normal rats, wild type mice, and mice with transgenic prostate tumors (TRAMP) either within a single slice or using a one-dimensional echo-planar spectroscopic imaging (1D-EPSI) encoding technique. Rate constants were estimated by fitting a set of exponential equations to the dynamic data. Variations in fitted parameters were used to determine model robustness in 15 mm slices centered on normal rat kidneys. Parameter values were used to investigate differences in metabolism between and within TRAMP and wild type mice. Results: The kinetic model was shown to be robust when fitting data from a rat given similar doses. In normal rats, Michaelis-Menten kinetics were able to describe the dose-response of the fitted exchange rate constants with a 13.65% and 16.75% scaled fitting error (SFE) for kpyr→lac and kpyr→ala, respectively. In TRAMP mice, kpyr→lac increased an average of 94% after up to 23 days of disease progression, whether the mice were untreated or treated with casodex. Parameters estimated from dynamic 13C 1D-EPSI data were able to differentiate anatomical structures within both wild type and TRAMP mice. Conclusions: The metabolic parameters estimated using this approach may be useful for in vivo monitoring of tumor progression and treatment efficacy, as well as to distinguish between various tissues based on metabolic activity.
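A common minimal form of such an exchange fit is a unidirectional two-pool model: hyperpolarized pyruvate decays at an effective rate rp while feeding lactate, which itself relaxes at rate r1. Because the lactate curve is linear in the exchange constant, the fit reduces to a single projection. A sketch under those assumptions (the rate values are illustrative, not the paper's fitted constants):

```python
import numpy as np

def lactate_signal(t, k_pl, r1, rp=0.05, p0=1.0):
    """Closed-form lactate curve for P(t) = p0*exp(-rp*t) feeding
    dL/dt = k_pl*P - r1*L with L(0) = 0 (requires r1 != rp)."""
    return k_pl * p0 * (np.exp(-rp * t) - np.exp(-r1 * t)) / (r1 - rp)

def fit_k(t, lac, r1, rp=0.05, p0=1.0):
    """Least-squares estimate of k_pl; the model is linear in k_pl,
    so the estimate is one inner-product ratio."""
    basis = lactate_signal(t, 1.0, r1, rp, p0)
    return float(basis @ lac / (basis @ basis))

t = np.linspace(0.0, 60.0, 121)          # seconds (illustrative)
observed = lactate_signal(t, k_pl=0.03, r1=0.2)
k_est = fit_k(t, observed, r1=0.2)
```

With noisy data the same projection is the ordinary least-squares solution, and comparing fitted k values across tissues is the kind of contrast the study uses to separate tumor from normal tissue.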
NASA Astrophysics Data System (ADS)
Wilson, J. L.; Neupauer, R. M.
2001-05-01
Backward-in-space-and-time advection dispersion theory can be used to obtain information about the prior location of contamination or tracer that is captured by a pumping well or that is observed at a monitoring well. Location probability hindcasts the position of the contamination at some previous time; travel time probability describes the probability distribution for the length of time required for the contamination to travel from some location, such as the source, to the well. Originally based on a heuristic argument, there is now a formal theory founded on adjoint versions of forward transport equations, with the appropriate initial and boundary conditions, and loads. The formal approach reveals that there are several types of travel time probability. For the most part the theory can be applied with standard transport codes, sometimes requiring simple modifications. We have developed theory for multidimensional processes including advection, dispersion, decay, equilibrium and non-equilibrium sorption, and transient flow. We have tested it on a tracer test at the Borden Site, with 15 injection sites, and applied it to identify the origin of a TCE plume in New England. The method has applications to tracer test analysis, source identification, cost allocation, liability assignment and wellhead protection.
A quantum probability perspective on borderline vagueness.
Blutner, Reinhard; Pothos, Emmanuel M; Bruza, Peter
2013-10-01
The term "vagueness" describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno's sorites paradox. We discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we propose a probabilistic model that describes the quantitative characteristics of the experimental findings and extends Alxatib and Pelletier's theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we argue that a substantial modification of the analysis put forward by Alxatib and Pelletier, and of its probabilistic pendant, is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can then be explained as a quantum interference phenomenon.
Martens-Kuin models of normal and inverse polarity filament eruptions and coronal mass ejections
NASA Technical Reports Server (NTRS)
Smith, D. F.; Hildner, E.; Kuin, N. P. M.
1992-01-01
An analysis is made of the Martens-Kuin filament eruption model in relation to observations of coronal mass ejections (CMEs). The field lines of this model are plotted in the vacuum or infinite resistivity approximation with two background fields. The first is the dipole background field of the model and the second is the potential streamer model of Low. The Martens-Kuin model predicts that, as the filament erupts, the overlying coronal magnetic field lines rise in a manner inconsistent with observations of CMEs associated with eruptive filaments. This model, and by generalization the whole class of so-called Kuperus-Raadu configurations in which a neutral point occurs below the filament, is of questionable utility for CME modeling. An alternate case is considered in which the directions of currents in the Martens-Kuin model are reversed, resulting in a so-called normal polarity configuration of the filament magnetic field. The background field lines now distort to support the filament and help eject it. While the vacuum field results make this configuration appear very promising, a full two- or three-dimensional MHD simulation is required to properly analyze the dynamics resulting from this configuration.
Yuan, Xiguo; Zhang, Junying; Wang, Yue
2010-12-01
One of the most challenging points in studying human common complex diseases is to search for both strong and weak susceptibility single-nucleotide polymorphisms (SNPs) and identify forms of genetic disease models. Currently, a number of methods have been proposed for this purpose. Many of them have not been validated through applications to various genome datasets, so their abilities in real practice are unclear. In this paper, we present a novel SNP asso