NASA Astrophysics Data System (ADS)
Darnah
2016-04-01
Poisson regression is used when the response variable is count data based on the Poisson distribution. The Poisson distribution assumes equidispersion, i.e. that the variance equals the mean. In practice, count data are often over- or under-dispersed, which makes Poisson regression inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, and consequently give misleading inference about the regression parameters. This paper suggests the generalized Poisson regression model to handle over-dispersion and under-dispersion in the Poisson regression model. The Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Under the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. Because the Poisson regression model exhibits over-dispersion, we use generalized Poisson regression instead. The best generalized Poisson regression model shows that the factor influencing filariasis is the percentage of families who do not have a healthy house. The model is interpreted as follows: each additional 1 percentage point of families without a healthy house adds one filariasis patient.
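The generalized Poisson distribution invoked here can be sketched numerically. The fragment below is an illustration only, assuming Consul's parameterization (it is not the paper's code): the mean is theta/(1-lam) and the variance theta/(1-lam)^3, so lam > 0 yields over-dispersion and lam < 0 under-dispersion.

```python
import math

def gp_logpmf(y, theta, lam):
    """Log pmf of Consul's generalized Poisson distribution (illustrative)."""
    return (math.log(theta) + (y - 1) * math.log(theta + lam * y)
            - theta - lam * y - math.lgamma(y + 1))

theta, lam = 2.0, 0.3                       # lam > 0 -> over-dispersion
probs = [math.exp(gp_logpmf(y, theta, lam)) for y in range(150)]
mean = sum(y * p for y, p in enumerate(probs))
var = sum((y - mean) ** 2 * p for y, p in enumerate(probs))

print(round(sum(probs), 6))                 # ~1.0 (up to truncation error)
print(round(mean, 3))                       # theta/(1-lam)
print(round(var, 3))                        # theta/(1-lam)**3, exceeds the mean
```

Setting lam = 0 recovers the ordinary Poisson, where the variance equals the mean.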
NASA Astrophysics Data System (ADS)
Zamani, Hossein; Faroughi, Pouya; Ismail, Noriszura
2014-06-01
This study relates the Poisson, mixed Poisson (MP), generalized Poisson (GP) and finite Poisson mixture (FPM) regression models through the mean-variance relationship, and suggests the application of these models to overdispersed count data. As an illustration, the regression models are fitted to the US skin care count data. The results indicate that the FPM regression model is the best model, since it provides the largest log-likelihood and the smallest AIC, followed by the Poisson-Inverse Gaussian (PIG), GP and negative binomial (NB) regression models. The results also show that the NB, PIG and GP regression models provide similar results.
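As a hedged sketch of why AIC separates these models on overdispersed data (simulated counts and an intercept-only fit are assumed; this is not the paper's US skin care analysis), the following compares Poisson and negative binomial AICs:

```python
import math
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
mu_true, k_true = 5.0, 2.0                      # NB2: Var = mu + mu**2/k = 17.5 >> 5
y = rng.negative_binomial(k_true, k_true / (k_true + mu_true), size=500)

n, ybar = len(y), float(y.mean())
lgam_y = np.array([math.lgamma(v + 1) for v in y])

# Poisson: the intercept-only MLE of the mean is the sample mean
ll_pois = float(np.sum(y * math.log(ybar) - ybar - lgam_y))
aic_pois = 2 * 1 - 2 * ll_pois

# NB2 with mean fixed at the sample mean; profile the dispersion k
def nb_negll(k):
    p = k / (k + ybar)
    ll = (sum(math.lgamma(v + k) - math.lgamma(k) for v in y)
          - float(lgam_y.sum())
          + n * k * math.log(p) + float(y.sum()) * math.log(1 - p))
    return -ll

res = minimize_scalar(nb_negll, bounds=(0.05, 50.0), method="bounded")
aic_nb = 2 * 2 + 2 * res.fun                    # two parameters: mean and k

print(aic_pois > aic_nb)                        # NB wins on overdispersed counts
```

The extra dispersion parameter costs 2 AIC points but buys a far larger log-likelihood when the data are overdispersed.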
Background stratified Poisson regression analysis of cohort data
Richardson, David B; Langholz, Bryan
2012-01-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as ‘nuisance’ variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this ‘conditional’ regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911
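The equivalence claimed between the 'conditional' and unconditional fits can be checked in a toy example (the cohort numbers below are invented for illustration): conditioning on each stratum's total count turns the stratified Poisson likelihood into a product of multinomials in which the stratum intercepts cancel, and profiling the intercepts out of the unconditional likelihood gives the same dose coefficient.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# toy cohort: stratum id, exposure x, person-time pt, observed count y
strat = np.array([0, 0, 1, 1])
x     = np.array([0.0, 1.0, 0.0, 1.0])
pt    = np.array([100.0, 80.0, 120.0, 60.0])
y     = np.array([10, 16, 9, 11])

def cond_negll(beta):
    ll = 0.0
    for s in (0, 1):
        m = strat == s
        w = pt[m] * np.exp(beta * x[m])
        ll += np.sum(y[m] * np.log(w / w.sum()))   # multinomial: intercepts cancel
    return -ll

def uncond_profile_negll(beta):
    ll = 0.0
    for s in (0, 1):
        m = strat == s
        w = pt[m] * np.exp(beta * x[m])
        alpha = np.log(y[m].sum() / w.sum())       # closed-form stratum intercept
        mu = np.exp(alpha) * w
        ll += np.sum(y[m] * np.log(mu) - mu)
    return -ll

b_cond = minimize_scalar(cond_negll, bounds=(-3, 3), method="bounded").x
b_unc = minimize_scalar(uncond_profile_negll, bounds=(-3, 3), method="bounded").x
print(round(b_cond, 4), round(b_unc, 4))           # identical point estimates
```

The two objective functions differ only by a constant in beta, which is why the point estimates coincide; the practical gain is not having to carry one intercept per stratum when the number of strata is large.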
Analyzing Historical Count Data: Poisson and Negative Binomial Regression Models.
ERIC Educational Resources Information Center
Beck, E. M.; Tolnay, Stewart E.
1995-01-01
Asserts that traditional approaches to multivariate analysis, including standard linear regression techniques, ignore the special character of count data. Explicates three suitable alternatives to standard regression techniques, a simple Poisson regression, a modified Poisson regression, and a negative binomial model. (MJP)
Poisson Regression Analysis of Illness and Injury Surveillance Data
Frome E.L., Watkins J.P., Ellis E.D.
2012-12-12
The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. 
In the second example the score test indicates considerable over-dispersion, and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.
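A minimal sketch of the two diagnostics mentioned, using an intercept-only model on simulated gamma-mixed (hence over-dispersed) counts; the statistics below are standard method-of-moments devices and are not taken from the report's code:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 4.0
# extra-Poisson variation: gamma-mixed Poisson counts (variance ~12 vs mean ~4)
rates = rng.gamma(shape=2.0, scale=mu / 2.0, size=400)
y = rng.poisson(rates)

n, p = len(y), 1                       # one estimated parameter (intercept)
muhat = y.mean()                       # intercept-only Poisson MLE
phi = np.sum((y - muhat) ** 2 / muhat) / (n - p)   # Pearson dispersion estimate

# score-type statistic for over-dispersion (compare with N(0,1))
score = np.sum((y - muhat) ** 2 - y) / np.sqrt(2 * n * muhat ** 2)

se_naive = np.sqrt(muhat / n)          # model-based SE of the event rate
se_adj = se_naive * np.sqrt(phi)       # quasi-likelihood moment adjustment

print(phi > 1.0, score > 1.96)         # both flag over-dispersion here
```

When phi is near 1 the adjustment is harmless; when it is well above 1, the naive Poisson standard errors are too small, which is exactly the failure mode the report guards against.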
A new form of bivariate generalized Poisson regression model
NASA Astrophysics Data System (ADS)
Faroughi, Pouya; Ismail, Noriszura
2014-09-01
This paper introduces a new form of bivariate generalized Poisson (BGP) regression which can be fitted to bivariate and correlated count data with covariates. The BGP regression suggested in this study can be fitted not only to bivariate count data with positive, zero or negative correlations, but also to underdispersed or overdispersed bivariate count data. Applications of bivariate Poisson (BP) regression and the new BGP regression are illustrated on Malaysian motor insurance data.
Collision prediction models using multivariate Poisson-lognormal regression.
El-Basyouny, Karim; Sayed, Tarek
2009-07-01
This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit compared with the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models. PMID:19540972
Mixed-effects Poisson regression analysis of adverse event reports
Gibbons, Robert D.; Segawa, Eisuke; Karabatsos, George; Amatya, Anup K.; Bhaumik, Dulal K.; Brown, C. Hendricks; Kapur, Kush; Marcus, Sue M.; Hur, Kwan; Mann, J. John
2008-01-01
A new statistical methodology is developed for the analysis of spontaneous adverse event (AE) reports from post-marketing drug surveillance data. The method involves both empirical Bayes (EB) and fully Bayes estimation of rate multipliers for each drug within a class of drugs, for a particular AE, based on a mixed-effects Poisson regression model. Both parametric and semiparametric models for the random-effect distribution are examined. The method is applied to data from Food and Drug Administration (FDA)'s Adverse Event Reporting System (AERS) on the relationship between antidepressants and suicide. We obtain point estimates and 95 per cent confidence (posterior) intervals for the rate multiplier for each drug (e.g. antidepressants), which can be used to determine whether a particular drug has an increased risk of association with a particular AE (e.g. suicide). Confidence (posterior) intervals that do not include 1.0 provide evidence for either significant protective or harmful associations of the drug and the adverse effect. We also examine EB, parametric Bayes, and semiparametric Bayes estimators of the rate multipliers and associated confidence (posterior) intervals. Results of our analysis of the FDA AERS data revealed that newer antidepressants are associated with lower rates of suicide adverse event reports compared with older antidepressants. We recommend improvements to the existing AERS system, which are likely to improve its public health value as an early warning system. PMID:18404622
Estimation of adjusted rate differences using additive negative binomial regression.
Donoghoe, Mark W; Marschner, Ian C
2016-08-15
Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27073156
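The constrained-fitting problem described here can be illustrated with a bounded optimizer (this is a simplified stand-in for the authors' expectation-conditional maximisation approach; the data, starting values and bounds are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 400
x = rng.integers(0, 2, size=n).astype(float)   # binary exposure
t = rng.uniform(0.5, 2.0, size=n)              # observation (exposure) time
b_true = np.array([1.0, 0.5])                  # baseline rate and rate difference
y = rng.poisson(t * (b_true[0] + b_true[1] * x))

def negll(b):
    mu = t * (b[0] + b[1] * x)                 # identity link on the rate scale
    return -np.sum(y * np.log(mu) - mu)        # Poisson log-likelihood (no constant)

# non-negativity bounds keep every fitted rate valid during optimization
fit = minimize(negll, x0=np.array([0.5, 0.1]),
               bounds=[(1e-8, None), (0.0, None)])
b0_hat, b1_hat = fit.x
print(round(b1_hat, 2))                        # adjusted rate difference; truth is 0.5
```

Unlike the log link, the identity link does not guarantee positive fitted rates, which is why the constraints (and, in the paper, the ECME variant) are essential.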
Fuzzy classifier based support vector regression framework for Poisson ratio determination
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa
2013-09-01
Poisson ratio is considered as one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation at low Poisson ratios and underestimation at high Poisson ratios. These errors were eliminated through implementation of a fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved the accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian Oil Field. Results indicated that SVR-predicted Poisson ratio values are in good agreement with measured values.
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.
Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai
2011-01-01
Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of the bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model can not only model paired data with correlation but also handle under- or over-dispersed data sets. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers with a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs.
Effect of air pollution on lung cancer: a Poisson regression model based on vital statistics.
Tango, T
1994-01-01
This article describes a Poisson regression model for time trends of mortality to detect the long-term effects of common levels of air pollution on lung cancer, in which the adjustment for cigarette smoking is not always necessary. The main hypothesis to be tested in the model is that if the long-term and common-level air pollution had an effect on lung cancer, the death rate from lung cancer could be expected to increase gradually at a higher rate in the region with relatively high levels of air pollution than in the region with low levels, and that this trend would not be expected for other control diseases in which cigarette smoking is a risk factor. Using this approach, we analyzed the trend of mortality in females aged 40 to 79, from lung cancer and two control diseases, ischemic heart disease and cerebrovascular disease, based on vital statistics in 23 wards of the Tokyo metropolitan area for 1972 to 1988. Ward-specific mean levels per day of SO2 and NO2 from 1974 through 1976 estimated by Makino (1978) were used as the ward-specific exposure measure of air pollution. No data on tobacco consumption in each ward is available. Our analysis supported the existence of long-term effects of air pollution on lung cancer. PMID:7851329
A marginalized zero-inflated Poisson regression model with overall exposure effects.
Long, D Leann; Preisser, John S; Herring, Amy H; Golin, Carol E
2014-12-20
The zero-inflated Poisson (ZIP) regression model is often employed in public health research to examine the relationships between exposures of interest and a count outcome exhibiting many zeros, in excess of the amount expected under sampling from a Poisson distribution. The regression coefficients of the ZIP model have latent class interpretations, which correspond to a susceptible subpopulation at risk for the condition with counts generated from a Poisson distribution and a non-susceptible subpopulation that provides the extra or excess zeros. The ZIP model parameters, however, are not well suited for inference targeted at marginal means, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. We develop a marginalized ZIP model approach for independent responses to model the population mean count directly, allowing straightforward inference for overall exposure effects and empirical robust variance estimation for overall log-incidence density ratios. Through simulation studies, the performance of maximum likelihood estimation of the marginalized ZIP model is assessed and compared with other methods of estimating overall exposure effects. The marginalized ZIP model is applied to a recent study of a motivational interviewing-based safer sex counseling intervention, designed to reduce unprotected sexual act counts. PMID:25220537
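The quantity the marginalized model targets can be made concrete with the ZIP probability mass function (parameters below are illustrative, not fitted): the overall (marginal) mean of the mixture is (1 - pi) * lam, not the latent-class Poisson mean lam.

```python
import math

def zip_pmf(y, pi, lam):
    """Zero-inflated Poisson: extra zeros with probability pi, else Poisson(lam)."""
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi * (y == 0) + (1 - pi) * pois

pi, lam = 0.25, 3.0
probs = [zip_pmf(y, pi, lam) for y in range(100)]
mean = sum(y * p for y, p in enumerate(probs))

print(round(sum(probs), 6))    # 1.0
print(round(mean, 4))          # marginal mean (1 - pi) * lam = 2.25
print(round(probs[0], 4))      # pi + (1 - pi) * exp(-lam): inflated zero mass
```

Regression coefficients placed on lam therefore describe only the susceptible latent class; the marginalized ZIP instead parameterizes the 2.25-type quantity directly, which is what yields overall exposure effects.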
Poisson regression analysis of mortality among male workers at a thorium-processing plant
Liu, Zhiyuan; Lee, Tze-San; Kotek, T.J.
1991-12-31
Analyses of mortality among a cohort of 3119 male workers employed between 1915 and 1973 at a thorium-processing plant were updated to the end of 1982. Of the whole group, 761 men were deceased and 2161 men were still alive, while 197 men were lost to follow-up. A total of 250 deaths was added to the 511 deaths observed in the previous study. The standardized mortality ratio (SMR) for all causes of death was 1.12 with 95% confidence interval (CI) of 1.05-1.21. The SMRs were also significantly increased for all malignant neoplasms (SMR = 1.23, 95% CI = 1.04-1.43) and lung cancer (SMR = 1.36, 95% CI = 1.02-1.78). Poisson regression analysis was employed to evaluate the joint effects of job classification, duration of employment, time since first employment, age and year at first employment on mortality of all malignant neoplasms and lung cancer. A comparison of internal and external analyses with the Poisson regression model was also conducted and showed no obvious difference in fitting the data on lung cancer mortality of the thorium workers. The results of the multivariate analysis showed that there was no significant effect of all the study factors on mortality due to all malignant neoplasms and lung cancer. Therefore, further study is needed for the former thorium workers.
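The SMRs and confidence intervals quoted here follow the usual observed-over-expected construction; the sketch below uses an exact Poisson interval via the chi-square inversion, with an expected count chosen only to echo the all-causes result (it is not the study's actual expected value):

```python
from scipy.stats import chi2

def smr_ci(observed, expected, alpha=0.05):
    """SMR with an exact Poisson confidence interval (chi-square inversion)."""
    smr = observed / expected
    lo = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lo, hi

# illustrative: 761 observed deaths against ~679 expected gives an SMR near 1.12
smr, lo, hi = smr_ci(761, 679.0)
print(round(smr, 2), round(lo, 2), round(hi, 2))
```

An interval whose lower bound exceeds 1 corresponds to the "significantly increased" SMRs reported in the abstract.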
Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel
2008-01-01
Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072
Erhardt, Vinzenz; Bogdan, Malgorzata; Czado, Claudia
2010-01-01
We consider the problem of locating multiple interacting quantitative trait loci (QTL) influencing traits measured in counts. In many applications the distribution of the count variable has a spike at zero. Zero-inflated generalized Poisson regression (ZIGPR) allows for an additional probability mass at zero and hence an improvement in the detection of significant loci. Classical model selection criteria often overestimate the QTL number. Therefore, modified versions of the Bayesian Information Criterion (mBIC and EBIC) were successfully used for QTL mapping. We apply these criteria based on ZIGPR as well as simpler models. An extensive simulation study shows their good power detecting QTL while controlling the false discovery rate. We illustrate how the inability of the Poisson distribution to account for over-dispersion leads to an overestimation of the QTL number and hence strongly discourages its application for identifying factors influencing count data. The proposed method is used to analyze the mice gallstone data of Lyons et al. (2003). Our results suggest the existence of a novel QTL on chromosome 4 interacting with another QTL previously identified on chromosome 5. We provide the corresponding code in R.
Ribeiro, Manuel Castro; Sousa, António Jorge; Pereira, Maria João
2016-05-01
The geographical distribution of health outcomes is influenced by socio-economic and environmental factors operating on different spatial scales. Geographical variations in relationships can be revealed with semi-parametric Geographically Weighted Poisson Regression (sGWPR), a model that can combine both geographically varying and geographically constant parameters. To decide whether a parameter should vary geographically, two models are compared: one in which all parameters are allowed to vary geographically and one in which all except the parameter being evaluated are allowed to vary geographically. The model with the lower corrected Akaike Information Criterion (AICc) is selected. Delivering model selection exclusively according to the AICc might hide important details in spatial variations of associations. We propose assisting the decision by using a Linear Model of Coregionalization (LMC). Here we show how LMC can refine sGWPR on ecological associations between socio-economic and environmental variables and low birth weight outcomes in the west-north-central region of Portugal.
Non-Poisson processes: regression to equilibrium versus equilibrium correlation functions
NASA Astrophysics Data System (ADS)
Allegrini, Paolo; Grigolini, Paolo; Palatella, Luigi; Rosa, Angelo; West, Bruce J.
2005-03-01
We study the response to perturbation of non-Poisson dichotomous fluctuations that generate super-diffusion. We adopt the Liouville perspective and with it a quantum-like approach based on splitting the density distribution into a symmetric and an anti-symmetric component. To accommodate the equilibrium condition behind the stationary correlation function, we study the time evolution of the anti-symmetric component, while keeping the symmetric component at equilibrium. For any realistic form of the perturbed distribution density we expect a breakdown of the Onsager principle, namely, of the property that the subsequent regression of the perturbation to equilibrium is identical to the corresponding equilibrium correlation function. We find the directions to follow for the calculation of higher-order correlation functions, an unsettled problem, which has been addressed in the past by means of approximations yielding quite different physical effects.
Frost, G; Harding, A-H; Darnton, A; McElvenny, D; Morgan, D
2008-09-01
The asbestos industry has shifted from manufacture to stripping/removal work. The aim of this study was to investigate early indications of mortality among removal workers. The study population consisted of 31 302 stripping/removal workers in the Great Britain Asbestos Survey, followed up to December 2005. Relative risks (RR) for causes of death with elevated standardised mortality ratios (SMR) and sufficient deaths were obtained from Poisson regression. Risk factors considered included dust suppression technique, type of respirator used, hours spent stripping, smoking status and exposure length. Deaths were elevated for all causes (SMR 123, 95% CI 119-127, n=985), all cancers including lung cancer, mesothelioma, and circulatory disease. There were no significant differences between suppression techniques and respirator types. Spending more than 40 h per week stripping, rather than fewer than 10, increased the mortality risk from all causes (RR 1.4, 95% CI 1.2-1.7), circulatory disease and ischaemic heart disease. Elevated mesothelioma risks were observed for those first exposed at young ages or exposed for more than 30 years. This study is a first step in assessing the long-term mortality of asbestos removal workers in relation to working practices and asbestos exposure. Further follow-up will allow the impact of recent regulations to be assessed.
NASA Astrophysics Data System (ADS)
Winahju, W. S.; Mukarromah, A.; Putri, S.
2015-03-01
Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 people (8.7% of the world total). This automatically places Indonesia as the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of both multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The experimental units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the result indicates that all predictors have a significant influence.
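The bivariate Poisson structure underlying such models is commonly built by trivariate reduction (a generic illustration, not the paper's fitted model): Y1 = Z1 + Z0 and Y2 = Z2 + Z0 with independent Poisson components, so Cov(Y1, Y2) = lam0.

```python
import numpy as np

rng = np.random.default_rng(3)
lam1, lam2, lam0 = 2.0, 1.5, 1.0     # lam0 is the shared component
n = 200_000
z0 = rng.poisson(lam0, n)            # common shock shared by both counts
y1 = rng.poisson(lam1, n) + z0
y2 = rng.poisson(lam2, n) + z0

cov = float(np.cov(y1, y2)[0, 1])
print(round(cov, 2))                 # close to lam0
```

In the regression version, covariates enter through log-linear models for the lambda parameters, which is how environment, demography and poverty would drive both counts jointly.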
Longevity Is Linked to Mitochondrial Mutation Rates in Rockfish: A Test Using Poisson Regression.
Hua, Xia; Cowman, Peter; Warren, Dan; Bromham, Lindell
2015-10-01
The mitochondrial theory of ageing proposes that the cumulative effect of biochemical damage in mitochondria causes mitochondrial mutations and plays a key role in ageing. Numerous studies have applied comparative approaches to test one of the predictions of the theory: That the rate of mitochondrial mutations is negatively correlated with longevity. Comparative studies face three challenges in detecting correlates of mutation rate: Covariation of mutation rates between species due to ancestry, covariation between life-history traits, and difficulty obtaining accurate estimates of mutation rate. We address these challenges using a novel Poisson regression method to examine the link between mutation rate and lifespan in rockfish (Sebastes). This method has better performance than traditional sister-species comparisons when sister species are too recently diverged to give reliable estimates of mutation rate. Rockfish are an ideal model system: They have long life spans with indeterminate growth and little evidence of senescence, which minimizes the confounding tradeoffs between lifespan and fecundity. We show that lifespan in rockfish is negatively correlated to rate of mitochondrial mutation, but not the rate of nuclear mutation. The life history of rockfish allows us to conclude that this relationship is unlikely to be driven by the tradeoffs between longevity and fecundity, or by the frequency of DNA replications in the germline. Instead, the relationship is compatible with the hypothesis that mutation rates are reduced by selection in long-lived taxa to reduce the chance of mitochondrial damage over its lifespan, consistent with the mitochondrial theory of ageing.
Kim, Sungduk; Chen, Zhen; Zhang, Zhiwei; Simons-Morton, Bruce G.; Albert, Paul S.
2013-01-01
Although there is evidence that teenagers are at a high risk of crashes in the early months after licensure, the driving behavior of these teenagers is not well understood. The Naturalistic Teenage Driving Study (NTDS) is the first U.S. study to document continuous driving performance of newly-licensed teenagers during their first 18 months of licensure. Counts of kinematic events such as the number of rapid accelerations are available for each trip, and their incidence rates represent different aspects of driving behavior. We propose a hierarchical Poisson regression model incorporating over-dispersion, heterogeneity, and serial correlation as well as a semiparametric mean structure. Analysis of the NTDS data is carried out with a hierarchical Bayesian framework using reversible jump Markov chain Monte Carlo algorithms to accommodate the flexible mean structure. We show that driving with a passenger and night driving decrease kinematic events, while having risky friends increases these events. Further, the within-subject variation in these events is comparable to the between-subject variation. This methodology will be useful for other intensively collected longitudinal count data, where event rates are low and interest focuses on estimating the mean and variance structure of the process. This article has online supplementary materials. PMID:24076760
Association between large strongyle genera in larval cultures--using rare-event Poisson regression.
Cao, X; Vidyashankar, A N; Nielsen, M K
2013-09-01
Decades of intensive anthelmintic treatment have caused equine large strongyles to become quite rare, while the cyathostomins have developed resistance to several drug classes. The larval culture has been associated with low to moderate negative predictive values for detecting Strongylus vulgaris infection. It is unknown whether detection of other large strongyle species can be statistically associated with the presence of S. vulgaris. This remains a statistical challenge because of the rare occurrence of large strongyle species. This study used a modified Poisson regression to analyse a dataset for associations between S. vulgaris infection and the simultaneous occurrence of Strongylus edentatus and Triodontophorus spp. In 663 horses on 42 Danish farms, the individual prevalences of S. vulgaris, S. edentatus and Triodontophorus spp. were 12%, 3% and 12%, respectively. Both S. edentatus and Triodontophorus spp. were significantly associated with S. vulgaris infection, with relative risks above 1. Further, S. edentatus was associated with the use of selective therapy on the farms, and negatively associated with anthelmintic treatment carried out within 6 months prior to the study. The findings illustrate that occurrence of S. vulgaris in larval cultures can be interpreted as indicative of other large strongyles being likely to be present.
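The "modified Poisson" approach referenced here fits a Poisson model with a log link (and robust standard errors) to a binary outcome, so that exponentiated coefficients are relative risks. A toy sketch of the quantity being estimated, using invented 2x2 counts rather than the Danish farm data:

```python
import math

# Hypothetical 2x2 counts (illustrative only, not the study data):
# exposed = S. edentatus present in culture; outcome = S. vulgaris present
a, b = 12, 28   # exposed:   with / without S. vulgaris
c, d = 30, 270  # unexposed: with / without S. vulgaris

risk_exposed   = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# In a modified Poisson regression with a log link and a single binary
# covariate, the fitted coefficient equals log(RR):
beta = math.log(rr)
print(rr, math.exp(beta))  # RR = 3.0
```

With rare-event counts like these, the Poisson likelihood is a convenient working model for the binary outcome, and the robust variance corrects its misspecification.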
Poisson regression analysis of the mortality among a cohort of World War II nuclear industry workers
Frome, E.L.; Cragle, D.L.; McLain, R.W.
1990-08-01
A historical cohort mortality study was conducted among 28,008 white male employees who had worked for at least 1 month in Oak Ridge, Tennessee, during World War II. The workers were employed at two plants that were producing enriched uranium and a research and development laboratory. Vital status was ascertained through 1980 for 98.1% of the cohort members and death certificates were obtained for 96.8% of the 11,671 decedents. A modified version of the traditional standardized mortality ratio (SMR) analysis was used to compare the cause-specific mortality experience of the World War II workers with the U.S. white male population. An SMR and a trend statistic were computed for each cause-of-death category for the 30-year interval from 1950 to 1980. The SMR for all causes was 1.11, and there was a significant upward trend of 0.74% per year. The excess mortality was primarily due to lung cancer and diseases of the respiratory system. Poisson regression methods were used to evaluate the influence of duration of employment, facility of employment, socioeconomic status, birth year, period of follow-up, and radiation exposure on cause-specific mortality. Maximum likelihood estimates of the parameters in a main-effects model were obtained to describe the joint effects of these six factors on cause-specific mortality of the World War II workers. We show that these multivariate regression techniques provide a useful extension of conventional SMR analysis and illustrate their effective use in a large occupational cohort study.
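An SMR of the kind reported above is the ratio of observed to expected deaths, with expectations built by applying reference-population rates to the cohort's person-years stratum by stratum. A minimal sketch with invented strata (not the Oak Ridge data):

```python
# Hypothetical age-band strata: reference rates per 1000 person-years,
# cohort person-years, and observed deaths -- illustrative values only.
strata = [
    # (reference_rate_per_1000, person_years, observed_deaths)
    (2.0,  50000, 115),
    (8.0,  30000, 260),
    (25.0, 10000, 290),
]

# Expected deaths: sum over strata of (reference rate) x (person-years)
expected = sum(rate / 1000 * py for rate, py, _ in strata)
observed = sum(obs for _, _, obs in strata)
smr = observed / expected
print(expected, observed, round(smr, 2))  # 590.0 665 1.13
```

Poisson regression generalizes this: the stratum-specific observed counts are modeled as Poisson with the log of expected deaths as an offset, so covariate effects on the SMR can be estimated jointly.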
Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.
2012-01-01
A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public. PMID:23393408
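The offset discussed above enters the log-linear predictor so that the model describes rates rather than raw counts. A small sketch of that mechanic, with invented coefficients (not estimates from the Cali analysis):

```python
import math

# Rate model with an offset: E[count] = exposure * exp(beta0 + beta1 * x).
# Illustrative coefficients only:
beta0, beta1 = math.log(0.04), -0.30   # baseline rate 0.04 events per unit exposure

def expected_count(exposure, x):
    # log E[count] = log(exposure) + beta0 + beta1 * x
    return math.exp(math.log(exposure) + beta0 + beta1 * x)

# Doubling exposure doubles the expected count; x acts multiplicatively on the rate.
print(expected_count(100, 0))   # 4.0
print(expected_count(200, 0))   # 8.0
print(expected_count(100, 1))   # ~2.96, i.e. 4 * exp(-0.3)
```

A change-point extension as in the paper would let beta0 take different values on different time segments, instead of assuming one constant baseline rate.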
2013-01-01
Background Malnutrition is one of the principal causes of child mortality in developing countries, including Bangladesh. To our knowledge, most available studies that address malnutrition among under-five children consider categorical (dichotomous/polychotomous) outcome variables and apply logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition outcome is defined as the number of under-five malnourished children in a family, a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find predictors of this outcome variable. Methods The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results The GPR model (as compared to standard Poisson regression and negative binomial regression) is found to be justified for this outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions The consistency of our findings with many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
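The under-dispersion property that justifies the GPR model here can be seen directly from Consul's generalized Poisson distribution, whose extra dispersion parameter moves the variance below or above the mean. A numerical sketch, with parameter values chosen purely for illustration:

```python
import math

def gp_pmf(y, theta, delta):
    # Consul's generalized Poisson:
    # P(Y=y) = theta * (theta + delta*y)^(y-1) * exp(-theta - delta*y) / y!
    t = theta + delta * y
    if t <= 0:
        return 0.0  # support is truncated where theta + delta*y <= 0
    return theta * t ** (y - 1) * math.exp(-t) / math.factorial(y)

def moments(theta, delta, ymax=100):
    # Numerical mean and variance from the (truncated) pmf
    probs = [gp_pmf(y, theta, delta) for y in range(ymax)]
    mean = sum(y * p for y, p in enumerate(probs))
    var = sum(y * y * p for y, p in enumerate(probs)) - mean ** 2
    return mean, var

# delta < 0 gives under-dispersion (variance < mean), delta > 0 over-dispersion;
# theory: mean = theta/(1-delta), variance = theta/(1-delta)^3
m_under, v_under = moments(theta=4.0, delta=-0.2)
m_over,  v_over  = moments(theta=4.0, delta=0.2)
print(m_under, v_under)  # ~3.33, ~2.31  (variance < mean)
print(m_over,  v_over)   # ~5.00, ~7.81  (variance > mean)
```

Setting delta = 0 recovers the ordinary Poisson with mean equal to variance, which is why the GPR model nests standard Poisson regression.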
Khan, Asaduzzaman; Western, Mark
2011-01-01
The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. This study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using zero-inflated Poisson (ZIP) regression modelling. About 17% of GPs were not using a computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief in the usefulness of computers was positively associated with it. Being a female GP or working in a partnership or group practice increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.
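The ZIP model used above mixes a point mass at zero (the GPs who never use a computer for clinical work) with an ordinary Poisson count. A minimal sketch of the resulting zero inflation, with invented parameters rather than the fitted ones:

```python
import math

def zip_pmf(y, pi_zero, lam):
    # Zero-inflated Poisson: a structural zero with probability pi_zero,
    # otherwise an ordinary Poisson(lam) count
    base = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi_zero * (y == 0) + (1 - pi_zero) * base

pi_zero, lam = 0.17, 3.0   # e.g. ~17% structural "never users" (illustrative)
p0 = zip_pmf(0, pi_zero, lam)
mean = (1 - pi_zero) * lam  # marginal mean of the mixture
print(p0)    # ~0.211, far above the plain Poisson P(0) = exp(-3) ~ 0.050
print(mean)  # 2.49
```

In the regression version, pi_zero is modelled with a logit link and lam with a log link, so a covariate such as computer anxiety can act on the two parts separately.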
Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath
2016-06-01
Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study on a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data by a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson (ZICMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. We introduce two estimation methods for fitting a ZICMP marginal regression model. Finite sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling incorporating zero inflation, clustering, and overdispersion sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. PMID:26575079
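The Conway-Maxwell-Poisson family underlying the ZICMP model adds a dispersion exponent to the Poisson weights, covering both the over- and under-dispersion patterns mentioned above. A numerical sketch with illustrative parameters only:

```python
def cmp_moments(lam, nu, ymax=50):
    # Conway-Maxwell-Poisson: P(Y=y) proportional to lam^y / (y!)^nu.
    # Build unnormalized weights by the recurrence w_y = w_{y-1} * lam / y^nu.
    weights = [1.0]
    for y in range(1, ymax):
        weights.append(weights[-1] * lam / y ** nu)
    z = sum(weights)  # normalizing constant (truncated series)
    mean = sum(y * w for y, w in enumerate(weights)) / z
    var = sum(y * y * w for y, w in enumerate(weights)) / z - mean ** 2
    return mean, var

m_poisson, v_poisson = cmp_moments(3.0, 1.0)   # nu = 1 recovers the Poisson
m_under,   v_under   = cmp_moments(3.0, 2.0)   # nu > 1: under-dispersed
print(m_poisson, v_poisson)  # both ~3.0
print(m_under, v_under)      # variance < mean
```

The zero-inflated version then mixes this distribution with a point mass at zero, exactly as in the ZIP model but with the extra dispersion exponent nu.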
Assessing Longitudinal Change: Adjustment for Regression to the Mean Effects
ERIC Educational Resources Information Center
Rocconi, Louis M.; Ethington, Corinna A.
2009-01-01
Pascarella (J Coll Stud Dev 47:508-520, 2006) has called for an increase in use of longitudinal data with pretest-posttest design when studying effects on college students. However, such designs that use multiple measures to document change are vulnerable to an important threat to internal validity, regression to the mean. Herein, we discuss a…
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, Anne B.; Lizarraga, Joy S.
1996-01-01
Statistical operations termed model-adjustment procedures can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each procedure is a form of regression analysis in which the local data base is used as a calibration data set; the resulting adjusted regression models can then be used to predict storm-runoff quality at unmonitored sites. Statistical tests of the calibration data set guide selection among proposed procedures.
Hogan, Jennifer N; Daniels, Miles E; Watson, Fred G; Conrad, Patricia A; Oates, Stori C; Miller, Melissa A; Hardin, Dane; Byrne, Barbara A; Dominik, Clare; Melli, Ann; Jessup, David A; Miller, Woutrina A
2012-05-01
Fecal pathogen contamination of watersheds worldwide is increasingly recognized, and natural wetlands may have an important role in mitigating fecal pathogen pollution flowing downstream. Given that waterborne protozoa, such as Cryptosporidium and Giardia, are transported within surface waters, this study evaluated associations between fecal protozoa and various wetland-specific and environmental risk factors. This study focused on three distinct coastal California wetlands: (i) a tidally influenced slough bordered by urban and agricultural areas, (ii) a seasonal wetland adjacent to a dairy, and (iii) a constructed wetland that receives agricultural runoff. Wetland type, seasonality, rainfall, and various water quality parameters were evaluated using longitudinal Poisson regression to model effects on concentrations of protozoa and indicator bacteria (Escherichia coli and total coliform). Among wetland types, the dairy wetland exhibited the highest protozoal and bacterial concentrations, and despite significant reductions in microbe concentrations, the wetland could still be seen to influence water quality in the downstream tidal wetland. Additionally, recent rainfall events were associated with higher protozoal and bacterial counts in wetland water samples across all wetland types. Notably, detection of E. coli concentrations greater than a 400 most probable number (MPN) per 100 ml was associated with higher Cryptosporidium oocyst and Giardia cyst concentrations. These findings show that natural wetlands draining agricultural and livestock operation runoff into human-utilized waterways should be considered potential sources of pathogens and that wetlands can be instrumental in reducing pathogen loads to downstream waters.
Adjustment of regional regression equations for urban storm-runoff quality using at-site data
Barks, C.S.
1996-01-01
Regional regression equations have been developed to estimate urban storm-runoff loads and mean concentrations using a national data base. Four statistical methods using at-site data to adjust the regional equation predictions were developed to provide better local estimates. The four adjustment procedures are a single-factor adjustment, a regression of the observed data against the predicted values, a regression of the observed values against the predicted values and additional local independent variables, and a weighted combination of a local regression with the regional prediction. Data collected at five representative storm-runoff sites during 22 storms in Little Rock, Arkansas, were used to verify, and, when appropriate, adjust the regional regression equation predictions. Comparison of observed values of storm-runoff loads and mean concentrations to the predicted values from the regional regression equations for nine constituents (chemical oxygen demand, suspended solids, total nitrogen as N, total ammonia plus organic nitrogen as N, total phosphorus as P, dissolved phosphorus as P, total recoverable copper, total recoverable lead, and total recoverable zinc) showed large prediction errors ranging from 63 percent to more than several thousand percent. Prediction errors for 6 of the 18 regional regression equations were less than 100 percent and could be considered reasonable for water-quality prediction equations. The regression adjustment procedure was used to adjust five of the regional equation predictions to improve the predictive accuracy. For seven of the regional equations the observed and the predicted values are not significantly correlated. Thus neither the unadjusted regional equations nor any of the adjustments were appropriate. The mean of the observed values was used as a simple estimator when the regional equation predictions and adjusted predictions were not appropriate.
Marsh, G M; Stone, R A; Henderson, V L
1992-11-01
The Formaldehyde Institute (FI) sponsored additional Poisson regression analysis of lung cancer mortality data from the joint National Cancer Institute (NCI)/FI cohort study of workers exposed to formaldehyde to investigate the previously reported effects of plant and latency period and to assess the impact of short-term workers (under 1 yr employment) on the results. There were 242 lung cancer deaths in this cohort of 20,067 white male workers. With OCMAP software, lung cancer death rates for the white males in this cohort were computed by plant, age, calendar time, and job type for several time-dependent formaldehyde exposures, including formaldehyde exposure in the presence of 12 selected co-exposures: ammonia (AM), antioxidants (AN), asbestos (AS), carbon black (CB), dyes/inks/pigments (DY), hexamethylenetetramine (HX), melamine (ME), particulates (PT), phenol (PH), plasticizers (PL), urea/urea compounds (UR), wood dust (WD), and a composite co-exposure (X5) involving AN, HX, ME, PH, and UR. A 1.6-fold increase in lung cancer risk was found, beginning approximately 16-20 yr after first employment in the study plants with no evidence of a differential effect of latency between hourly and salaried workers or among the various categories of formaldehyde exposure as measured by cumulative average intensity or length of exposure. The statistically significant heterogeneity in lung cancer risk among the 10 plants could not be explained by interplant differences in cumulative or average intensity of exposure to formaldehyde, either without regard to co-exposures or in the presence of any of the 12 co-exposures considered individually. Plant was not a statistically significant predictor of lung cancer risk when cumulative exposure to the composite X5 was included in the model, suggesting that some component of X5, or a correlate, could at least partly account for the overall heterogeneity. No significant associations were found for cumulative, average, or length of
ERIC Educational Resources Information Center
Olejnik, Stephen; Mills, Jamie; Keselman, Harvey
2000-01-01
Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…
Ma, Jianming; Kockelman, Kara M; Damien, Paul
2008-05-01
Numerous efforts have been devoted to investigating crash occurrence as related to roadway design features, environmental factors and traffic conditions. However, most of the research has relied on univariate count models; that is, traffic crash counts at different levels of severity are estimated separately, which may neglect shared information in unobserved error terms, reduce efficiency in parameter estimates, and lead to potential biases in sample databases. This paper offers a multivariate Poisson-lognormal (MVPLN) specification that simultaneously models crash counts by injury severity. The MVPLN specification allows for a more general correlation structure as well as overdispersion. This approach addresses several questions that are difficult to answer when estimating crash counts separately. Thanks to recent advances in crash modeling and Bayesian statistics, parameter estimation is done within the Bayesian paradigm, using a Gibbs Sampler and the Metropolis-Hastings (M-H) algorithms for crashes on Washington State rural two-lane highways. Estimation results from the MVPLN approach show statistically significant correlations between crash counts at different levels of injury severity. The non-zero diagonal elements suggest overdispersion in crash counts at all levels of severity. The results lend themselves to several recommendations for highway safety treatments and design policies. For example, wide lanes and shoulders are key for reducing crash frequencies, as are longer vertical curves. PMID:18460364
Wang, Chenggang; Jiang, Baofa; Fan, Jingchun; Wang, Furong; Liu, Qiyong
2014-01-01
The aim of this study is to develop a model that correctly identifies and quantifies the relationship between dengue and meteorological factors in Guangzhou, China. By cross-correlation analysis, meteorological variables and their lag effects were determined. According to the epidemic characteristics of dengue in Guangzhou, those statistically significant variables were modeled by a zero-inflated Poisson regression model. The number of dengue cases and minimum temperature at 1-month lag, along with average relative humidity at 0- to 1-month lag were all positively correlated with the prevalence of dengue fever, whereas wind velocity and temperature in the same month along with rainfall at 2 months' lag showed negative association with dengue incidence. Minimum temperature at 1-month lag and wind velocity in the same month had a greater impact on the dengue epidemic than other variables in Guangzhou.
On causal interpretation of race in regressions adjusting for confounding and mediating variables
VanderWeele, Tyler J.; Robinson, Whitney R.
2014-01-01
We consider several possible interpretations of the “effect of race” when regressions are run with race as an exposure variable, controlling also for various confounding and mediating variables. When adjustment is made for socioeconomic status early in a person’s life, we discuss under what contexts the regression coefficients for race can be interpreted as corresponding to the extent to which a racial inequality would remain if various socioeconomic distributions early in life across racial groups could be equalized. When adjustment is also made for adult socioeconomic status, we note how the overall racial inequality can be decomposed into the portion that would be eliminated by equalizing adult socioeconomic status across racial groups and the portion of the inequality that would remain even if adult socioeconomic status across racial groups were equalized. We also discuss a stronger interpretation of the “effect of race” (stronger in terms of assumptions) involving the joint effects of race-associated physical phenotype (e.g. skin color), parental physical phenotype, genetic background and cultural context when such variables are thought to be hypothetically manipulable and if adequate control for confounding were possible. We discuss some of the challenges with such an interpretation. Further discussion is given as to how the use of selected populations in examining racial disparities can additionally complicate the interpretation of the effects. PMID:24887159
Algamal, Zakariya Yahya; Lee, Muhammad Hisyam
2015-12-01
Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to estimate the gene coefficients and perform gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable for two reasons: first, the elastic net estimator is biased in selecting genes, and second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real-data results indicate that AAElastic is significantly more consistent in selecting genes than the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method for high-dimensional cancer classification.
Lopez, Michael J; Gutman, Roee
2014-11-28
Propensity score methods are common for estimating a binary treatment effect when treatment assignment is not randomized. When exposure is measured on an ordinal scale (i.e. low-medium-high), however, propensity score inference requires extensions which have received limited attention. Estimands of possible interest with an ordinal exposure are the average treatment effects between each pair of exposure levels. Using these estimands, it is possible to determine an optimal exposure level. Traditional methods, including dichotomization of the exposure or a series of binary propensity score comparisons across exposure pairs, are generally inadequate for identification of optimal levels. We combine subclassification with regression adjustment to estimate transitive, unbiased average causal effects across an ordered exposure, and apply our method on the 2005-2006 National Health and Nutrition Examination Survey to estimate the effects of nutritional label use on body mass index.
Lidauer, M H; Emmerling, R; Mäntysaari, E A
2008-06-01
A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region x year x month x parity effect and a random herd x test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and reduced total computing time significantly. Maximum likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on the loss in degrees of freedom due to estimation of location parameters. This improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but only a moderate effect on bull ranking.
Kauhl, Boris; Heil, Jeanne; Hoebe, Christian J. P. A.; Schweikart, Jürgen; Krafft, Thomas; Dukers-Muijrers, Nicole H. T. M.
2015-01-01
Background Hepatitis C Virus (HCV) infections are a major cause of liver disease. A large proportion of these infections remain hidden to care due to their mostly asymptomatic nature. Population-based screening and screening targeted at behavioural risk groups have not proven effective in revealing these hidden infections. Therefore, more practically applicable approaches to targeting screening are necessary. Geographic Information Systems (GIS) and spatial epidemiological methods may provide a more feasible basis for screening interventions through the identification of hotspots as well as demographic and socio-economic determinants. Methods The analysed data included all HCV tests (n = 23,800) performed in the southern area of the Netherlands between 2002 and 2008. HCV positivity was defined as a positive immunoblot or polymerase chain reaction test. Population data were matched to the geocoded HCV test data. The spatial scan statistic was applied to detect areas with elevated HCV risk. We applied global regression models to determine associations between population-based determinants and HCV risk. Geographically weighted Poisson regression models were then constructed to determine local differences in the association between HCV risk and population-based determinants. Results HCV prevalence varied geographically and clustered in urban areas. The main populations at risk were middle-aged males, non-western immigrants, and divorced persons. Socio-economic determinants consisted of one-person households, persons with low income, and mean property value. However, the association between HCV risk and demographic as well as socio-economic determinants displayed strong regional and intra-urban differences. Discussion The detection of local hotspots in our study may serve as a basis for prioritization of areas for future targeted interventions. Demographic and socio-economic determinants associated with HCV risk show regional differences underlining that a one
Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying
2016-01-01
Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period 2008–2013 using a polynomial distributed lag model. An extra-Poisson multilevel spatial polynomial model was used to model the relationship between weekly HFMD incidence and climatic variables after accounting for cluster effects, the provincial correlation structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous across provinces, and the scale measure of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity, and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-V-shaped and V-shaped relationships with HFMD incidence, with lag effects of 3 weeks and 2 weeks, respectively. Highly spatially correlated HFMD incidence was detected in northern, central, and southern provinces. Temperature explained most of the variation in HFMD incidence in southern and northeastern provinces; after adjustment for temperature, eastern and northern provinces still had highly variable HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic
ALMASI, Afshin; RAHIMIFOROUSHANI, Abbas; ESHRAGHIAN, Mohammad Reza; MOHAMMAD, Kazem; PASDAR, Yahya; TARRAHI, Mohammad Javad; MOGHIMBEIGI, Abbas; AHMADI JOUYBARI, Touraj
2016-01-01
Background: The aim of this study was to assess the associations between nutrition and dental caries in permanent dentition among schoolchildren. Methods: A cross-sectional survey was undertaken on 698 schoolchildren aged 10 to 12 yr from a random sample of primary schools in Kermanshah, western Iran, in 2014. The study was based on data obtained from a questionnaire containing information on nutritional habits and the outcome of the decayed/missing/filled teeth (DMFT) index. The association between predictors and dental caries was modeled using the Zero-Inflated Generalized Poisson (ZIGP) regression model. Results: Fourteen percent of the children were caries free. The model showed that in female children, the odds of being in the caries-susceptible subgroup were 1.23 (95% CI: 1.08–1.51) times higher than in boys (P=0.041). Additionally, the mean caries count in children who consumed fizzy soft beverages and sweet biscuits more than once daily was 1.41 (95% CI: 1.19–1.63) and 1.27 (95% CI: 1.18–1.37) times that of children in the category of less than three times a week or never, respectively. Conclusions: Girls were at a higher risk of caries than boys. Since our study showed that nutritional status may have a significant effect on caries in permanent teeth, we recommend that health promotion activities in schools emphasize healthful eating practices, especially limiting beverages containing sugar to only occasionally between meals. PMID:27141498
Poisson's ratio and crustal seismology
Christensen, N.I.
1996-02-10
This report discusses the use of Poisson's ratio to place constraints on continental crustal composition. A summary of Poisson's ratios for many common rock formations is also included, with emphasis on igneous and metamorphic rock properties.
Li, Xian-Ying; Hu, Shi-Min
2013-02-01
Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
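The Poisson integral formula that these coordinates build on can be evaluated numerically. Below is a minimal sketch (not the authors' discrete Poisson coordinates; function names are illustrative) that approximates a harmonic function inside the unit disk by integrating the classical Poisson kernel against its boundary values:

```python
import math

def poisson_integral(boundary_f, r, theta, n=4096):
    # u(r, theta) = (1/2pi) * integral of P_r(theta - phi) * f(phi) over [0, 2pi),
    # with the classical Poisson kernel P_r(t) = (1 - r^2) / (1 - 2 r cos t + r^2).
    total = 0.0
    for i in range(n):
        phi = 2.0 * math.pi * i / n
        kernel = (1.0 - r * r) / (1.0 - 2.0 * r * math.cos(theta - phi) + r * r)
        total += kernel * boundary_f(phi)
    return total / n  # rectangle rule; spectrally accurate for smooth periodic f
```

With boundary values f(phi) = cos(phi), the harmonic extension is u(r, theta) = r cos(theta), which gives a quick correctness check.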
Barks, C.S.
1995-01-01
Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor the MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of
Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar
2016-08-15
Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892025
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
Stratton, Kelly G; Cook, Andrea J; Jackson, Lisa A; Nelson, Jennifer C
2015-03-30
Sequential methods are well established for randomized clinical trials (RCTs), and their use in observational settings has increased with the development of national vaccine and drug safety surveillance systems that monitor large healthcare databases. Observational safety monitoring requires that sequential testing methods be better equipped to incorporate confounder adjustment and accommodate rare adverse events. New methods designed specifically for observational surveillance include a group sequential likelihood ratio test that uses exposure matching and generalized estimating equations approach that involves regression adjustment. However, little is known about the statistical performance of these methods or how they compare to RCT methods in both observational and rare outcome settings. We conducted a simulation study to determine the type I error, power and time-to-surveillance-end of group sequential likelihood ratio test, generalized estimating equations and RCT methods that construct group sequential Lan-DeMets boundaries using data from a matched (group sequential Lan-DeMets-matching) or unmatched regression (group sequential Lan-DeMets-regression) setting. We also compared the methods using data from a multisite vaccine safety study. All methods had acceptable type I error, but regression methods were more powerful, faster at detecting true safety signals and less prone to implementation difficulties with rare events than exposure matching methods. Method performance also depended on the distribution of information and extent of confounding by site. Our results suggest that choice of sequential method, especially the confounder control strategy, is critical in rare event observational settings. These findings provide guidance for choosing methods in this context and, in particular, suggest caution when conducting exposure matching.
Methods for Adjusting U.S. Geological Survey Rural Regression Peak Discharges in an Urban Setting
Moglen, Glenn E.; Shivers, Dorianne E.
2006-01-01
A study was conducted of 78 U.S. Geological Survey gaged streams that have been subjected to varying degrees of urbanization over the last three decades. Flood-frequency analysis coupled with nonlinear regression techniques were used to generate a set of equations for converting peak discharge estimates determined from rural regression equations to a set of peak discharge estimates that represent known urbanization. Specifically, urban regression equations for the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year return periods were calibrated as a function of the corresponding rural peak discharge and the percentage of impervious area in a watershed. The results of this study indicate that two sets of equations, one set based on imperviousness and one set based on population density, performed well. Both sets of equations are dependent on rural peak discharges, a measure of development (average percentage of imperviousness or average population density), and a measure of homogeneity of development within a watershed. Average imperviousness was readily determined by using geographic information system methods and commonly available land-cover data. Similarly, average population density was easily determined from census data. Thus, a key advantage to the equations developed in this study is that they do not require field measurements of watershed characteristics as did the U.S. Geological Survey urban equations developed in an earlier investigation. During this study, the U.S. Geological Survey PeakFQ program was used as an integral tool in the calibration of all equations. The scarcity of historical land-use data, however, made exclusive use of flow records necessary for the 30-year period from 1970 to 2000. Such relatively short-duration streamflow time series required a nonstandard treatment of the historical data function of the PeakFQ program in comparison to published guidelines. Thus, the approach used during this investigation does not fully comply with the
Nie, Lei; Wu, Gang; Brockman, Fred J.; Zhang, Weiwen
2006-05-04
Advances in DNA microarray and proteomics technologies have enabled high-throughput measurement of mRNA expression and protein abundance. Parallel profiling of mRNA and protein on a global scale and integrative analysis of these two data types could provide additional insight into the metabolic mechanisms underlying complex biological systems. However, because protein abundance and mRNA expression are affected by many cellular and physical processes, there have been conflicting results on the correlation of these two measurements. In addition, as current proteomic methods can detect only a small fraction of proteins present in cells, no correlation study of these two data types has been done thus far at the whole-genome level. In this study, we describe a novel data-driven statistical model to integrate whole-genome microarray and proteomic data collected from Desulfovibrio vulgaris grown under three different conditions. Based on the Poisson distribution pattern of proteomic data and the fact that a large number of proteins were undetected (excess zeros), Zero-inflated Poisson models were used to define the correlation pattern of mRNA and protein abundance. The models assumed that there is a probability mass at zero representing some of the undetected proteins because of technical limitations. The models thus use abundance measurements of transcripts and proteins experimentally detected as input to generate predictions of protein abundances as output for all genes in the genome. We demonstrated the statistical models by comparatively analyzing D. vulgaris grown on lactate-based versus formate-based media. The increased expressions of Ech hydrogenase and alcohol dehydrogenase (Adh)-periplasmic Fe-only hydrogenase (Hyd) pathway for ATP synthesis were predicted for D. vulgaris grown on formate.
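A zero-inflated Poisson mixes a point mass at zero with an ordinary Poisson count, which is what lets it absorb the excess of undetected proteins. A minimal sketch of the pmf and its moments (illustrative parameter values, not fitted to the D. vulgaris data):

```python
import math

def zip_pmf(k, pi0, lam):
    # Zero-inflated Poisson: a structural zero with probability pi0 (e.g. an
    # undetected protein), otherwise an ordinary Poisson(lam) count.
    poisson_term = math.exp(-lam) * lam ** k / math.factorial(k)
    return (pi0 if k == 0 else 0.0) + (1 - pi0) * poisson_term

pi0, lam = 0.3, 4.0  # illustrative values only
total = sum(zip_pmf(k, pi0, lam) for k in range(100))
mean = sum(k * zip_pmf(k, pi0, lam) for k in range(100))
# total is ~1; the mean is (1 - pi0) * lam = 2.8, pulled down by the extra zeros
```

The deflated mean (1 - pi0) * lam relative to a plain Poisson(lam) is exactly the signature that excess zeros leave in count data.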
Ho Hoang, Khai-Long; Mombaur, Katja
2015-10-15
Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body in terms of a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter sets exist for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used for realistically simulating the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments as they are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis representing the most proximal segment.
ERIC Educational Resources Information Center
Thatcher, Greg W.; Henson, Robin K.
This study examined research in training and development to determine effect size reporting practices. It focused on the reporting of corrected effect sizes in research articles using multiple regression analyses. When possible, researchers calculated corrected effect sizes and determined whether the associated shrinkage could have impacted researcher…
ERIC Educational Resources Information Center
Tipton, Elizabeth; Pustejovsky, James E.
2015-01-01
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Kjelstrom, L.C.
1995-01-01
Previously developed U.S. Geological Survey regional regression models of runoff and 11 chemical constituents were evaluated to assess their suitability for use in urban areas in Boise and Garden City. Data collected in the study area were used to develop adjusted regional models of storm-runoff volumes and mean concentrations and loads of chemical oxygen demand, dissolved and suspended solids, total nitrogen and total ammonia plus organic nitrogen as nitrogen, total and dissolved phosphorus, and total recoverable cadmium, copper, lead, and zinc. Explanatory variables used in these models were drainage area, impervious area, land-use information, and precipitation data. Mean annual runoff volume and loads at the five outfalls were estimated from 904 individual storms during 1976 through 1993. Two methods were used to compute individual storm loads. The first method used adjusted regional models of storm loads and the second used adjusted regional models for mean concentration and runoff volume. For large storms, the first method seemed to produce excessively high loads for some constituents and the second method provided more reliable results for all constituents except suspended solids. The first method provided more reliable results for large storms for suspended solids.
Scaling the Poisson Distribution
ERIC Educational Resources Information Center
Farnsworth, David L.
2014-01-01
We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
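The additive property (if X ~ Poisson(lam1) and Y ~ Poisson(lam2) are independent, then X + Y ~ Poisson(lam1 + lam2)) can be checked numerically by convolving the two pmfs; a quick sketch, not the article's derivation:

```python
import math

def pois_pmf(k, lam):
    # P(X = k) for X ~ Poisson(lam)
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sum_pmf(k, lam1, lam2):
    # P(X + Y = k) for independent Poissons, by direct convolution of the pmfs
    return sum(pois_pmf(j, lam1) * pois_pmf(k - j, lam2) for j in range(k + 1))
```

For every k, sum_pmf(k, lam1, lam2) agrees with pois_pmf(k, lam1 + lam2) to machine precision, which is the discrete form of the pmf argument in the article.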
Asquith, William H.; Roussel, Meghan C.
2009-01-01
Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station specific, peak-streamflow frequency. As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed
Robertson, D.M.; Saad, D.A.; Heisey, D.M.
2006-01-01
Various approaches are used to subdivide large areas into regions containing streams that have similar reference or background water quality and that respond similarly to different factors. For many applications, such as establishing reference conditions, it is preferable to use physical characteristics that are not affected by human activities to delineate these regions. However, most approaches, such as ecoregion classifications, rely on land use to delineate regions or have difficulties compensating for the effects of land use. Land use not only directly affects water quality, but it is often correlated with the factors used to define the regions. In this article, we describe modifications to SPARTA (spatial regression-tree analysis), a relatively new approach applied to water-quality and environmental characteristic data to delineate zones with similar factors affecting water quality. In this modified approach, land-use-adjusted (residualized) water quality and environmental characteristics are computed for each site. Regression-tree analysis is applied to the residualized data to determine the most statistically important environmental characteristics describing the distribution of a specific water-quality constituent. Geographic information for small basins throughout the study area is then used to subdivide the area into relatively homogeneous environmental water-quality zones. For each zone, commonly used approaches are subsequently used to define its reference water quality and how its water quality responds to changes in land use. SPARTA is used to delineate zones of similar reference concentrations of total phosphorus and suspended sediment throughout the upper Midwestern part of the United States. ?? 2006 Springer Science+Business Media, Inc.
NASA Astrophysics Data System (ADS)
Matsuo, Kuniaki; Saleh, Bahaa E. A.; Teich, Malvin Carl
1982-12-01
We investigate the counting statistics for stationary and nonstationary cascaded Poisson processes. A simple equation is obtained for the variance-to-mean ratio in the limit of long counting times. Explicit expressions for the forward-recurrence and inter-event-time probability density functions are also obtained. The results are expected to be of use in a number of areas of physics.
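For the simplest two-stage cascade (a Poisson(mu) number of primary events, each producing a Poisson(nu) number of secondary counts), the law of total variance gives a long-time variance-to-mean ratio of 1 + nu. A seeded Monte Carlo sketch of that fact (illustrative parameters, not the paper's model):

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's multiplication method; adequate for small lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def cascaded_sample(mu, nu):
    # stage 1: Poisson(mu) primary events; stage 2: each spawns Poisson(nu) counts
    return sum(poisson(nu) for _ in range(poisson(mu)))

mu, nu = 4.0, 2.0
n = 100_000
samples = [cascaded_sample(mu, nu) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
fano = var / mean  # theory: 1 + nu = 3, versus 1 for a plain Poisson
```

The excess over 1 in the variance-to-mean ratio is what distinguishes a cascaded process from a simple Poisson process in counting experiments.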
Demonstrating Poisson Statistics.
ERIC Educational Resources Information Center
Vetterling, William T.
1980-01-01
Describes an apparatus that offers a very lucid demonstration of Poisson statistics as applied to electrical currents, and the manner in which such statistics account for shot noise when applied to macroscopic currents. The experiment described is intended for undergraduate physics students. (HM)
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-09-01
The Central Limit Theorem (CLT) and Extreme Value Theory (EVT) study, respectively, the stochastic limit-laws of sums and maxima of sequences of independent and identically distributed (i.i.d.) random variables via an affine scaling scheme. In this research we study the stochastic limit-laws of populations of i.i.d. random variables via nonlinear scaling schemes. The stochastic population-limits obtained are fractal Poisson processes which are statistically self-similar with respect to the scaling scheme applied, and which are characterized by two elemental structures: (i) a universal power-law structure common to all limits, and independent of the scaling scheme applied; (ii) a specific structure contingent on the scaling scheme applied. The sum-projection and the maximum-projection of the population-limits obtained are generalizations of the classic CLT and EVT results - extending them from affine to general nonlinear scaling schemes.
NASA Astrophysics Data System (ADS)
Zhang, Ying; Bi, Peng; Hiller, Janet
2008-01-01
This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggest that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
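A standard Poisson regression of case counts on a covariate can be fit in a few lines of Newton scoring (equivalently, iteratively reweighted least squares). The sketch below uses one synthetic covariate and illustrative coefficients, not the Adelaide data:

```python
import math

def fit_poisson_regression(xs, ys, iters=50):
    # Newton scoring for log mu = b0 + b1 * x with a single covariate.
    b0 = math.log(sum(ys) / len(ys))  # start from the intercept-only fit
    b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(b0 + b1 * x)
            g0 += y - mu               # score for b0
            g1 += (y - mu) * x         # score for b1
            h00 += mu                  # Fisher information entries
            h01 += mu * x
            h11 += mu * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # solve the 2x2 Newton step by hand
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Smoke test: set each y to its model mean exp(0.5 + 0.3 x); the score
# equations then vanish exactly at (0.5, 0.3), so the fit recovers them.
xs = [0, 1, 2, 3, 4]
ys = [math.exp(0.5 + 0.3 * x) for x in xs]
b0, b1 = fit_poisson_regression(xs, ys)
```

In practice the covariate would be, e.g., temperature lagged by two weeks, and exp(b1) would be interpreted as the rate ratio per unit of temperature.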
Algorithm Calculates Cumulative Poisson Distribution
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
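The same underflow/overflow problem can be sidestepped by accumulating the pmf terms in log space with a log-sum-exp step; this is a sketch of the idea, not the CUMPOIS implementation:

```python
import math

def poisson_cdf(k, lam):
    # P(X <= k) for X ~ Poisson(lam), computed in log space.
    # log P(X = j) = -lam + j*log(lam) - log(j!), with log(j!) via lgamma.
    log_terms = [-lam + j * math.log(lam) - math.lgamma(j + 1)
                 for j in range(k + 1)]
    m = max(log_terms)
    # log-sum-exp: factor out the largest term so the exps never underflow
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)
```

A naive sum of exp(-lam) * lam**j / j! fails already at lam = 1000 (exp(-1000) underflows to 0.0 in double precision), while the log-space version returns the expected value near 0.5 for poisson_cdf(1000, 1000.0).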
Poisson Spot with Magnetic Levitation
ERIC Educational Resources Information Center
Hoover, Matthew; Everhart, Michael; D'Arruda, Jose
2010-01-01
In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.
The Poisson and Exponential Models
ERIC Educational Resources Information Center
Richards, Winston A.
1978-01-01
The students in a basic course on probability and statistics in Trinidad demonstrated that the number of fatal highway accidents appeared to follow a Poisson distribution, while the length of time between deaths followed an exponential distribution. (MN)
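The two observations are consistent: exponential inter-event times and Poisson counts per fixed window are two views of the same process. A seeded simulation sketch (illustrative rate, not the Trinidad data):

```python
import random

random.seed(7)

lam = 2.5                    # events per unit time (illustrative rate)
horizon = 10_000
counts = [0] * horizon       # events falling in each unit-length window
t = 0.0
while True:
    t += random.expovariate(lam)  # exponential inter-event times
    if t >= horizon:
        break
    counts[int(t)] += 1

mean = sum(counts) / horizon
var = sum((c - mean) ** 2 for c in counts) / horizon
# For a Poisson process both mean and variance per window are close to lam
```

The equality of the window-count mean and variance is the equidispersion property the students' accident data would exhibit.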
NASA Astrophysics Data System (ADS)
Liberman, Neomi; Ben-David Kolikant, Yifat; Beeri, Catriel
2012-09-01
Due to a program reform in Israel, experienced CS high-school teachers faced the need to master and teach a new programming paradigm. This situation served as an opportunity to explore the relationship between teachers' content knowledge (CK) and their pedagogical content knowledge (PCK). This article focuses on three case studies, with emphasis on one of them. Using observations and interviews, we examine how the teachers we observed taught and what development of their teaching occurred as a result of their teaching experience, if at all. Our findings suggest that this situation creates a new hybrid state of teachers, which we term "regressed experts." These teachers incorporate in their professional practice some elements typical of novices and some typical of experts. We also found that these teachers' experience, although established when teaching a different CK, serves as leverage to improve their knowledge and understanding of aspects of the new content.
Berezin integrals and Poisson processes
NASA Astrophysics Data System (ADS)
DeAngelis, G. F.; Jona-Lasinio, G.; Sidoravicius, V.
1998-01-01
We show that the calculation of Berezin integrals over anticommuting variables can be reduced to the evaluation of expectations of functionals of Poisson processes via an appropriate Feynman-Kac formula. In this way the tools of ordinary analysis can be applied to Berezin integrals and, as an example, we prove a simple upper bound. Possible applications of our results are briefly mentioned.
Flexible regression models for rate differences, risk differences and relative risks.
Donoghoe, Mark W; Marschner, Ian C
2015-05-01
Generalized additive models (GAMs) based on the binomial and Poisson distributions can be used to provide flexible semi-parametric modelling of binary and count outcomes. When used with the canonical link function, these GAMs provide semi-parametrically adjusted odds ratios and rate ratios. For adjustment of other effect measures, including rate differences, risk differences and relative risks, non-canonical link functions must be used together with a constrained parameter space. However, the algorithms used to fit these models typically rely on a form of the iteratively reweighted least squares algorithm, which can be numerically unstable when a constrained non-canonical model is used. We describe an application of a combinatorial EM algorithm to fit identity link Poisson, identity link binomial and log link binomial GAMs in order to estimate semi-parametrically adjusted rate differences, risk differences and relative risks. Using smooth regression functions based on B-splines, the method provides stable convergence to the maximum likelihood estimates, and it ensures that the estimates always remain within the parameter space. It is also straightforward to apply a monotonicity constraint to the smooth regression functions. We illustrate the method using data from a clinical trial in heart attack patients. PMID:25781711
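For an identity-link Poisson model with nonnegative covariates and coefficients, a simple EM-style multiplicative update keeps the estimates inside the parameter space at every step, which is the property that makes such algorithms stable where IRLS is not. This is a sketch of that general idea only, not the authors' combinatorial EM with B-splines:

```python
def fit_identity_link_poisson(X, y, iters=5000):
    # EM-style multiplicative updates for mu_i = sum_j b_j * X[i][j], with
    # X[i][j] >= 0 and b_j > 0; each update multiplies b_j by a positive
    # factor, so the iterates never leave the parameter space.
    p = len(X[0])
    b = [1.0] * p
    col_sums = [sum(row[j] for row in X) for j in range(p)]
    for _ in range(iters):
        mu = [sum(bj * xj for bj, xj in zip(b, row)) for row in X]
        for j in range(p):
            num = sum(yi * row[j] / mi for yi, row, mi in zip(y, X, mu))
            b[j] *= num / col_sums[j]
    return b

# Smoke test: y set to the exact means 2.0 + 0.5 * x, so the MLE is (2.0, 0.5).
X = [[1.0, float(x)] for x in range(5)]
y = [2.0 + 0.5 * x for x in range(5)]
b = fit_identity_link_poisson(X, y)
```

Here b[1] is directly a rate difference per unit of x, which is the effect measure the canonical log link cannot provide.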
Calculation of the Poisson cumulative distribution function
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.
1990-01-01
A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
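The report's exact recurrence is not reproduced here, but the numerical difficulty it addresses is easy to illustrate: naive term-by-term evaluation of the Poisson cdf under/overflows for large parameters. Below is a minimal log-space sketch (function names are our own, not the report's), together with a bisection search for the inverse problem described in the abstract:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated in log space
    so that neither exp(-lam) nor lam**x under/overflows."""
    if k < 0:
        return 0.0
    log_terms = [x * math.log(lam) - lam - math.lgamma(x + 1)
                 for x in range(k + 1)]
    m = max(log_terms)  # log-sum-exp trick
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

def poisson_param_for_cdf(k, p, lo=1e-9, hi=1e6):
    """Find lam with P(X <= k) = p by bisection; the cdf is
    strictly decreasing in lam for fixed k."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(k, mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, `poisson_cdf(1000, 1000.0)` evaluates cleanly, whereas a direct sum of `lam**x * exp(-lam) / factorial(x)` would overflow in the intermediate `lam**x`.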
From Loss of Memory to Poisson.
ERIC Educational Resources Information Center
Johnson, Bruce R.
1983-01-01
A way of presenting the Poisson process and deriving the Poisson distribution for upper-division courses in probability or mathematical statistics is presented. The main feature of the approach lies in the formulation of Poisson postulates with immediate intuitive appeal. (MNS)
Tunable negative Poisson's ratio in hydrogenated graphene.
Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming
2016-09-21
We perform molecular dynamics simulations to investigate the effect of hydrogenation on the Poisson's ratio of graphene. It is found that the value of the Poisson's ratio of graphene can be effectively tuned from positive to negative by varying the percentage of hydrogenation. Specifically, the Poisson's ratio decreases with an increase in the percentage of hydrogenation, and reaches a minimum value of -0.04 when the percentage of hydrogenation is about 50%. The Poisson's ratio starts to increase upon a further increase of the percentage of hydrogenation. The appearance of a minimum negative Poisson's ratio in the hydrogenated graphene is attributed to the suppression of the hydrogenation-induced ripples during the stretching of graphene. Our results demonstrate that hydrogenation is a valuable approach for tuning the Poisson's ratio from positive to negative in graphene. PMID:27536878
ERIC Educational Resources Information Center
Matson, Johnny L.; Kozlowski, Alison M.
2010-01-01
Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…
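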
Rigid body dynamics on the Poisson torus
NASA Astrophysics Data System (ADS)
Richter, Peter H.
2008-11-01
The theory of rigid body motion with emphasis on the modifications introduced by a Cardan suspension is outlined. The configuration space is no longer SO(3) but a 3-torus; the equivalent of the Poisson sphere, after separation of an angular variable, is a Poisson torus. Iso-energy surfaces and their bifurcations are discussed. A universal Poincaré section method is proposed.
On Generalizing the Two-Poisson Model.
ERIC Educational Resources Information Center
Srinivasan, Padmini
1990-01-01
After reviewing the literature on automatic indexing research, an experiment is described which examined term distribution and the effectiveness of the Two-Poisson and Three-Poisson models in identifying good index terms. The conclusion reached is that these models should be applied with caution in document retrieval. (25 references) (EAM)
Alternative Derivations for the Poisson Integral Formula
ERIC Educational Resources Information Center
Chen, J. T.; Wu, C. S.
2006-01-01
Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…
Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach
Mohammadi, Tayeb; Sedehi, Morteza
2016-01-01
Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables “number of blood donation” and “number of blood deferral”: as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zeros, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using a Bayesian approach, applying noninformative priors in the presence and absence of covariates. The parameters of the model, that is, the correlation, the zero-inflation parameter, and the regression coefficients, were estimated through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493
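The bivariate Bayesian model itself is beyond a short sketch, but the zero-inflation mechanism it builds on can be illustrated with a hypothetical univariate version (the paper's model is bivariate, with correlated counts estimated by MCMC; nothing below is from the paper):

```python
import math, random

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: structural zero with probability pi,
    otherwise an ordinary Poisson(lam) count."""
    p = (1 - pi) * math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
    return (pi + p) if k == 0 else p

def zip_sample(lam, pi, rng=random):
    """Draw one ZIP variate by inverse-cdf sampling of the Poisson part."""
    if rng.random() < pi:
        return 0
    u, k = rng.random(), 0
    p = math.exp(-lam)
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k
```

The mean is (1 - pi) * lam, so the structural zeros both inflate P(0) and deflate the mean relative to a plain Poisson(lam).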
Hirozawa, Anne M; Montez-Rath, Maria E; Johnson, Elizabeth C; Solnit, Stephen A; Drennan, Michael J; Katz, Mitchell H; Marx, Rani
2016-01-01
We compared prospective risk adjustment models for adjusting patient panels at the San Francisco Department of Public Health. We used 4 statistical models (linear regression, two-part model, zero-inflated Poisson, and zero-inflated negative binomial) and 4 subsets of predictor variables (age/gender categories, chronic diagnoses, homelessness, and a loss to follow-up indicator) to predict primary care visit frequency. Predicted visit frequency was then used to calculate patient weights and adjusted panel sizes. The two-part model using all predictor variables performed best (R = 0.20). This model, designed specifically for safety net patients, may prove useful for panel adjustment in other public health settings. PMID:27576054
Adaptive Mesh Enrichment for the Poisson-Boltzmann Equation
NASA Astrophysics Data System (ADS)
Dyshlovenko, Pavel
2001-09-01
An adaptive mesh enrichment procedure for a finite-element solution of the two-dimensional Poisson-Boltzmann equation is described. The mesh adaptation is performed by subdividing the cells using information obtained in the previous step of the solution and then rearranging the mesh to be a Delaunay triangulation. The procedure allows the gradual improvement of the quality of the solution and adjustment of the geometry of the problem. The performance of the proposed approach is illustrated by applying it to the problem of two identical colloidal particles in a symmetric electrolyte.
Poisson-process electrical stimulation: circuit and axonal responses.
Moradmand, K; Goldfinger, M D
1995-12-01
This work describes a simple circuit which generated a highly Poisson-like sequence of pulses. Resistor noise was amplified in three series stages followed by rectification through a relatively large shunt resistance. This yielded a sequence of variable-amplitude transients, which were inverted, amplified with DC adjustment, and fed into a Schmitt trigger/multivibrator chip for pulse generation. The pulse generation frequency was modulated by the amplification of the rectified transients. The stochastic characteristics of the output pulse train were Poisson-like over a wide frequency range, as assessed using the interevent interval distribution and expectation density as steady-state and real-time estimators, respectively. In separate tests, the output pulse train was applied to forelimb cutaneous axons of the anesthetized cat; trains of elicited propagating action potentials were recorded extracellularly from individual G1 axons in the cuneate fasciculus. The stochastic properties of the action potential train differed from those of the stimulus, with longer deadtime, lower mean rate, and an early expectation density peak. These physiological responses to circuit output were similar to those elicited by other generators of Poisson-like stimulation. PMID:8788055
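A software analogue of the circuit's behavior can be sketched with ideal exponential inter-pulse intervals, modeling the axon's refractoriness as a simple absolute dead time (a deliberate simplification of the physiology, not the paper's hardware):

```python
import random

def poisson_pulse_times(rate_hz, duration_s, dead_time_s=0.0, seed=0):
    """Pulse times with exponential inter-pulse intervals; pulses
    arriving within dead_time_s of the last accepted pulse are
    dropped, mimicking axonal refractoriness."""
    rng = random.Random(seed)
    t, last, times = 0.0, float("-inf"), []
    while True:
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            return times
        if t - last >= dead_time_s:
            times.append(t)
            last = t
```

The dead-time thinning lowers the mean rate and imposes a minimum interevent interval, qualitatively matching the longer deadtime and lower mean rate of the recorded axonal responses.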
Huang, Dong; Cabral, Ricardo; De la Torre, Fernando
2016-02-01
Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context.
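The quasi-Poisson adjustment mentioned at the end of the abstract can be sketched with a plain IRLS fit; this is a generic illustration, not the authors' analysis pipeline. The Pearson dispersion phi rescales the variance, so quasi-Poisson standard errors are the Poisson ones multiplied by sqrt(phi):

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit a log-link Poisson GLM by iteratively reweighted least
    squares; return the coefficients and the Pearson dispersion phi."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                # IRLS weights for log link
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    mu = np.exp(X @ beta)
    phi = np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1])
    return beta, phi
```

A phi near 1 indicates equidispersion; phi substantially above 1 is the overdispersion that motivates quasi-Poisson or negative binomial models for disease counts.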
Speech parts as Poisson processes.
Badalamenti, A F
2001-09-01
This paper presents evidence that six of the seven parts of speech occur in written text as Poisson processes, simple or recurring. The six major parts are nouns, verbs, adjectives, adverbs, prepositions, and conjunctions, with the interjection occurring too infrequently to support a model. The data consist of more than the first 5000 words of works by four major authors coded to label the parts of speech, as well as periods (sentence terminators). Sentence length is measured via the period and found to be normally distributed with no stochastic model identified for its occurrence. The models for all six speech parts but the noun significantly distinguish some pairs of authors and likewise for the joint use of all word types. Any one author is significantly distinguished from any other by at least one word type, and sentence length very significantly distinguishes each from all others. The variety of word type use, measured by Shannon entropy, builds to about 90% of its maximum possible value. The rate constants for nouns are close to the fractions of maximum entropy achieved. This finding together with the stochastic models and the relations among them suggest that the noun may be a primitive organizer of written text.
Supervised Gamma Process Poisson Factorization
Anderson, Dylan Zachary
2015-05-01
This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.
Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training
ERIC Educational Resources Information Center
Baschera, Gian-Marco; Gross, Markus
2010-01-01
We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…
RNA gels with negative Poisson ratio
NASA Astrophysics Data System (ADS)
Ahsan, Amir
2005-03-01
We present a simple model for the elastic properties of very large single-stranded RNA molecules linked by partial complementary pairing, such as a viral RNA genome in solution. It is shown that the sign of the Poisson ratio is determined by the convexity of the force-extension curve of single-stranded RNA. The implications of negative Poisson ratios for viral genome encapsidation will be discussed.
Poisson brackets for densities of functionals
NASA Astrophysics Data System (ADS)
Dickey, Leonid A.
In the theory of integrable systems and in other field theories one usually deals with Poisson brackets between functionals. The latter are integrals of densities. Densities are defined up to divergence (boundary) terms. A question arises: is it possible to define a reasonable Poisson bracket for the densities themselves? A general theory was suggested by Barnich, Fulp, Lada, Markl and Stasheff, which has led them to the notion of a strong homotopy Lie algebra (sh Lie algebra). We give a few concrete examples.
Prediction of forest fires occurrences with area-level Poisson mixed models.
Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo
2015-05-01
The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest area. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of the fire predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia.
Poisson-Fermi model of single ion activities in aqueous solutions
NASA Astrophysics Data System (ADS)
Liu, Jinn-Liang; Eisenberg, Bob
2015-09-01
A Poisson-Fermi model is proposed for calculating activity coefficients of single ions in strong electrolyte solutions based on the experimental Born radii and hydration shells of ions in aqueous solutions. The steric effect of water molecules and interstitial voids in the first and second hydration shells play an important role in our model. The screening and polarization effects of water are also included in the model that can thus describe spatial variations of dielectric permittivity, water density, void volume, and ionic concentration. The activity coefficients obtained by the Poisson-Fermi model with only one adjustable parameter are shown to agree with experimental data, which vary nonmonotonically with salt concentrations.
Joe, Harry; Zhu, Rong
2005-04-01
We prove that the generalized Poisson distribution GP(θ, η) (η ≥ 0) is a mixture of Poisson distributions; this is a new property for a distribution which is the topic of the book by Consul (1989). Because we find that the fits to count data of the generalized Poisson and negative binomial distributions are often similar, to understand their differences we compare the probability mass functions and skewnesses of the generalized Poisson and negative binomial distributions with the first two moments fixed. They have slight differences in many situations, but their zero-inflated distributions, with masses at zero, means and variances fixed, can differ more. These probabilistic comparisons are helpful in selecting a better-fitting distribution for modelling count data with long right tails. Through a real example of count data with a large zero fraction, we illustrate how the generalized Poisson and negative binomial distributions, as well as their zero-inflated distributions, can be discriminated. PMID:16389919
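For readers wanting to reproduce the moment-matched comparison, here is a sketch of the two probability mass functions with the first two moments fixed (parameter names follow the abstract; the function names and the log-space evaluation are our own):

```python
import math

def gp_pmf(x, theta, eta):
    """Generalized Poisson GP(theta, eta), 0 <= eta < 1:
    mean theta/(1-eta), variance theta/(1-eta)**3."""
    lp = (math.log(theta) + (x - 1) * math.log(theta + eta * x)
          - theta - eta * x - math.lgamma(x + 1))
    return math.exp(lp)

def nb_pmf_matched(x, theta, eta):
    """Negative binomial with the same first two moments as
    GP(theta, eta), for a moment-matched comparison."""
    m = theta / (1 - eta)          # common mean
    v = theta / (1 - eta) ** 3     # common variance
    p = m / v                      # NB success probability
    r = m * m / (v - m)            # NB size parameter
    lp = (math.lgamma(x + r) - math.lgamma(r) - math.lgamma(x + 1)
          + r * math.log(p) + x * math.log(1 - p))
    return math.exp(lp)
```

Plotting both for a fixed (θ, η) shows the "slight differences" the abstract mentions, concentrated mostly in the right tail.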
[Structural adjustment, cultural adjustment?].
Dujardin, B; Dujardin, M; Hermans, I
2003-12-01
Over the last two decades, multiple studies have been conducted and many articles published about Structural Adjustment Programmes (SAPs). These studies mainly describe the characteristics of SAPs and analyse their economic consequences as well as their effects upon a variety of sectors: health, education, agriculture and environment. However, very few focus on the sociological and cultural effects of SAPs. Following a summary of SAPs' content and characteristics, the paper briefly discusses the historical course of SAPs and the different critiques which have been made. The cultural consequences of SAPs are introduced and are described on four different levels: political, community, familial, and individual. These levels are analysed through examples from the literature and individual testimonies from people in the Southern Hemisphere. The paper concludes that SAPs, alongside economic globalisation processes, are responsible for an acute breakdown of social and cultural structures in societies in the South. It should be a priority, not only to better understand the situation and its determining factors, but also to intervene and act with strategies that support and reinvest in the social and cultural sectors, which is vital in order to allow individuals and communities in the South to strengthen their autonomy and identity.
The oligarchic structure of Paretian Poisson processes
NASA Astrophysics Data System (ADS)
Eliazar, I.; Klafter, J.
2008-08-01
Paretian Poisson processes are a mathematical model of random fractal populations governed by Paretian power law tail statistics, and connect together and underlie elemental issues in statistical physics. Considering Paretian Poisson processes to represent the wealth of individuals in human populations, we explore their oligarchic structure via the analysis of the following random ratios: the aggregate wealth of the oligarchs ranked from m+1 to n, measured relative to the wealth of the m-th oligarch (n > m). A mean analysis and a stochastic-limit analysis (as n→∞) of these ratios are conducted. We obtain closed-form results which turn out to be highly contingent on the fractal exponent of the Paretian Poisson process considered.
Loop coproducts, Gaudin models and Poisson coalgebras
NASA Astrophysics Data System (ADS)
Musso, F.
2010-10-01
In this paper we show that if A is a Poisson algebra equipped with a set of maps Δ(i)λ: A → A⊗N satisfying suitable conditions, then the images of the Casimir functions of A under the maps Δ(i)λ (that we call 'loop coproducts') are in involution. Rational, trigonometric and elliptic Gaudin models can be recovered as particular cases of this construction, and we show that the same happens for the integrable (or partially integrable) models that can be obtained through the so-called coproduct method. On the other hand, we show that the loop coproduct approach provides a natural generalization of the Gaudin algebras from the Lie-Poisson to the generic Poisson algebra context and, hopefully, can lead to the definition of new integrable models.
Evolutionary inference via the Poisson Indel Process.
Bouchard-Côté, Alexandre; Jordan, Michael I
2013-01-22
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.
Computation of confidence intervals for Poisson processes
NASA Astrophysics Data System (ADS)
Aguilar-Saavedra, J. A.
2000-07-01
We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.
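The paper's fast algorithm is not shown here, but the underlying Feldman-Cousins construction can be sketched by brute force: scan candidate signal strengths mu, build each acceptance region by likelihood-ratio ordering, and keep the mu whose region contains the observation. (The grid step and cutoffs below are arbitrary choices; a production implementation would, as the paper does, handle the discreteness-induced singularities more carefully.)

```python
import math

def pois(n, lam):
    """Poisson pmf, evaluated in log space; lam == 0 is a point mass at 0."""
    if lam <= 0:
        return 1.0 if n == 0 else 0.0
    return math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))

def fc_interval(n_obs, b, cl=0.90, mu_max=15.0, step=0.01, n_max=100):
    """Feldman-Cousins interval for a Poisson signal mu over a known
    background b: rank counts n by P(n|mu+b)/P(n|mu_best+b) with
    mu_best = max(0, n-b), add counts until the region covers cl,
    and collect the grid values of mu that accept n_obs."""
    accepted = []
    for i in range(int(round(mu_max / step)) + 1):
        mu = i * step
        ranks = sorted(range(n_max), reverse=True,
                       key=lambda n: pois(n, mu + b)
                                     / pois(n, max(0.0, n - b) + b))
        prob, region = 0.0, set()
        for n in ranks:
            region.add(n)
            prob += pois(n, mu + b)
            if prob >= cl:
                break
        if n_obs in region:
            accepted.append(mu)
    return min(accepted), max(accepted)
```

The likelihood-ratio ordering is what lets the construction return either an upper limit or a two-sided interval automatically, without the flip-flopping problem of choosing the interval type after seeing the data.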
Easy Demonstration of the Poisson Spot
ERIC Educational Resources Information Center
Gluck, Paul
2010-01-01
Many physics teachers have a set of slides of single, double and multiple slits to show their students the phenomena of interference and diffraction. Thomas Young's historic experiments with double slits were indeed a milestone in proving the wave nature of light. But another experiment, namely the Poisson spot, was also important historically and…
On the Burgers-Poisson equation
NASA Astrophysics Data System (ADS)
Grunert, K.; Nguyen, Khai T.
2016-09-01
In this paper, we prove the existence and uniqueness of weak entropy solutions to the Burgers-Poisson equation for initial data in L1 (R). In addition an Oleinik type estimate is established and some criteria on local smoothness and wave breaking for weak entropy solutions are provided.
Generalized poisson 3-D scatterer distributions.
Laporte, Catherine; Clark, James J; Arbel, Tal
2009-02-01
This paper describes a simple, yet powerful ultrasound scatterer distribution model. The model extends a 1-D generalized Poisson process to multiple dimensions using a Hilbert curve. The model is intuitively tuned by spatial density and regularity parameters which reliably predict the first and second-order statistics of varied synthetic imagery. PMID:19251530
Modelling Documents with Multiple Poisson Distributions.
ERIC Educational Resources Information Center
Margulis, Eugene L.
1993-01-01
Reports on the validity of the Multiple Poisson (nP) model of word distribution in full-text document collections. A practical algorithm for determining whether a certain word is distributed according to an nP distribution and the results of a test of this algorithm in three different document collections are described. (14 references) (KRN)
Rasch's Multiplicative Poisson Model with Covariates.
ERIC Educational Resources Information Center
Ogasawara, Haruhiko
1996-01-01
Rasch's multiplicative Poisson model is extended so that parameters for individuals in the prior gamma distribution have continuous covariates. Parameters for individuals are integrated out, and hyperparameters in the prior distribution are estimated by a numerical method separately from difficulty parameters that are treated as fixed parameters…
Extensions of Rasch's Multiplicative Poisson Model.
ERIC Educational Resources Information Center
Jansen, Margo G. H.; van Duijn, Marijtje A. J.
1992-01-01
A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)
ERIC Educational Resources Information Center
Pedrini, D. T.; Pedrini, Bonnie C.
Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments
NASA Astrophysics Data System (ADS)
Fisicaro, G.; Genovese, L.; Andreussi, O.; Marzari, N.; Goedecker, S.
2016-01-01
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
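The solvers described are far more general (generalized Poisson, Poisson-Boltzmann, several boundary conditions), but the inner building block, an iterative solve of the ordinary Poisson equation, can be sketched as a matrix-free conjugate-gradient solver on a 2D grid with homogeneous Dirichlet boundaries. This is an unpreconditioned toy, unlike the preconditioned solver of the paper:

```python
import numpy as np

def apply_laplacian(u, h):
    """Negative 5-point Laplacian on an n x n interior grid with
    homogeneous Dirichlet boundaries (zero outside the domain)."""
    au = 4.0 * u.copy()
    au[1:, :] -= u[:-1, :]
    au[:-1, :] -= u[1:, :]
    au[:, 1:] -= u[:, :-1]
    au[:, :-1] -= u[:, 1:]
    return au / h**2

def cg_poisson(f, h, tol=1e-10, max_iter=500):
    """Conjugate gradients for -lap(u) = f; the operator is applied
    matrix-free, as large-scale Poisson solvers do."""
    u = np.zeros_like(f)
    r = f - apply_laplacian(u, h)
    p, rs = r.copy(), np.sum(r * r)
    for _ in range(max_iter):
        ap = apply_laplacian(p, h)
        alpha = rs / np.sum(p * ap)
        u += alpha * p
        r -= alpha * ap
        rs_new = np.sum(r * r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u
```

Against the manufactured solution u = sin(pi x) sin(pi y), the solver recovers u to within the O(h^2) discretization error of the 5-point stencil.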
The solution of large multi-dimensional Poisson problems
NASA Technical Reports Server (NTRS)
Stone, H. S.
1974-01-01
The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.
Application and Interpretation of Hierarchical Multiple Regression.
Jeong, Younhee; Jung, Mi Jung
2016-01-01
The authors reported the association between motivation and self-management behavior of individuals with chronic low back pain after adjusting for control variables using hierarchical multiple regression. This article describes the details of that hierarchical regression, applying the actual data used in the original article, including how to test assumptions, run the statistical tests, and report the results. PMID:27648796
Irreversible thermodynamics of Poisson processes with reaction.
Méndez, V; Fort, J
1999-11-01
A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamical quantities computed exactly and up to second-order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics.
Poisson filtering of laser ranging data
NASA Technical Reports Server (NTRS)
Ricklefs, Randall L.; Shelus, Peter J.
1993-01-01
The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described.
Stabilities for nonisentropic Euler-Poisson equations.
Cheung, Ka Luen; Wong, Sen
2015-01-01
We establish stability and blowup results for the nonisentropic Euler-Poisson equations by the energy method. By analysing the second inertia functional, we show that the classical solutions of the system with attractive forces blow up in finite time in some special dimensions when the energy is negative. Moreover, we obtain stability results for the system in the cases of attractive and repulsive forces.
Modelling of nonlinear filtering Poisson time series
NASA Astrophysics Data System (ADS)
Bochkarev, Vladimir V.; Belashova, Inna A.
2016-08-01
In this article, algorithms for non-linear filtering of Poisson time series are tested using statistical modelling. The objective is to find a representation of a time series as a wavelet series with a small number of non-zero coefficients, which allows statistically significant details to be distinguished. There are well-known efficient algorithms of non-linear wavelet filtering for the case when the values of a time series have a normal distribution. However, if the distribution is not normal, good results can be expected using maximum likelihood estimation. The filtering is studied according to the criterion of maximum likelihood using Poisson time series as an example. For direct optimisation of the likelihood function, different stochastic (genetic algorithms, simulated annealing) and deterministic optimisation algorithms are used. Testing of the algorithm using both simulated series and empirical data (series of rare-word frequencies from the Google Books Ngram data) showed that filtering based on the criterion of maximum likelihood has a clear advantage over the well-known algorithms in the case of Poisson series. The most promising optimisation methods for this problem were also identified.
Computation of solar perturbations with Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1974-01-01
Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
First- and second-order Poisson spots
NASA Astrophysics Data System (ADS)
Kelly, William R.; Shirley, Eric L.; Migdall, Alan L.; Polyakov, Sergey V.; Hendrix, Kurt
2009-08-01
Although Thomas Young is generally given credit for being the first to provide evidence against Newton's corpuscular theory of light, it was Augustin Fresnel who first stated the modern theory of diffraction. We review the history surrounding Fresnel's 1818 paper and the role of the Poisson spot in the associated controversy. We next discuss the boundary-diffraction-wave approach to calculating diffraction effects and show how it can reduce the complexity of calculating diffraction patterns. We briefly discuss a generalization of this approach that reduces the dimensionality of integrals needed to calculate the complete diffraction pattern of any order diffraction effect. We repeat earlier demonstrations of the conventional Poisson spot and discuss an experimental setup for demonstrating an analogous phenomenon that we call a "second-order Poisson spot." Several features of the diffraction pattern can be explained simply by considering the path lengths of singly and doubly bent paths and distinguishing between first- and second-order diffraction effects related to such paths, respectively.
Poisson's ratio over two centuries: challenging hypotheses
Greaves, G. Neville
2013-01-01
This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094
On the singularity of the Vlasov-Poisson system
Zheng, Jian; Qin, Hong
2013-09-15
The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.
Nonlocal Poisson-Fermi model for ionic solvent.
Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob
2016-07-01
We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution. PMID:27575084
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_\alpha(N_\beta(t)) \stackrel{d}{=} \sum_{j=1}^{N_\beta(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\alpha(\tau_k^\nu)$, $\nu\in(0,1]$, where $\tau_k^\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
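The distributional identity for the composed process can be checked by simulation; the parameter values below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
lam_a, lam_b, t, n = 2.0, 3.0, 1.0, 50_000

# Composition: evaluate an outer Poisson process at an inner Poisson time,
# N_alpha(N_beta(t)) with N_beta(t) ~ Poisson(lam_b * t)
inner = rng.poisson(lam_b * t, size=n)
composed = rng.poisson(lam_a * inner)

# Random-sum representation: sum of N_beta(t) iid Poisson(lam_a) variables
random_sum = np.array([rng.poisson(lam_a, size=k).sum()
                       for k in rng.poisson(lam_b * t, size=n)])

# Both should share mean lam_a*lam_b*t = 6 and
# variance lam_b*t*(lam_a + lam_a**2) = 18
```

The matching first two moments (and, with more work, the full distributions) illustrate the random-sum representation stated in the abstract.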
On the fractal characterization of Paretian Poisson processes
NASA Astrophysics Data System (ADS)
Eliazar, Iddo I.; Sokolov, Igor M.
2012-06-01
Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that, amongst the realm of Poisson processes which are defined on the positive half-line and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes, with respect to physical randomness-based measures of statistical heterogeneity, is characterized by exponential Poissonian intensities.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution, and demonstrates that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
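A gamma mixture of Poisson variables (the negative binomial case mentioned above) is easy to simulate and exhibits the variance inflation Var = μ + μ²/k that motivates the larger mixture family; the parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, shape = 100_000, 5.0, 2.0

# Gamma-mixed Poisson (= negative binomial): each subject gets its own
# gamma-distributed rate (mean mu, shape k), then a Poisson count
# conditional on that rate.
rates = rng.gamma(shape, mu / shape, size=n)
counts = rng.poisson(rates)

# Overdispersion: Var = mu + mu**2 / shape = 17.5, versus mean 5
```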
Destructive weighted Poisson cure rate models.
Rodrigues, Josemar; de Castro, Mário; Balakrishnan, N; Cancho, Vicente G
2011-07-01
In this paper, we develop a flexible cure rate survival model by assuming the number of competing causes of the event of interest to follow a compound weighted Poisson distribution. This model is more flexible in terms of dispersion than the promotion time cure model. Moreover, it gives an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest, as it includes a destructive process of the initial risk factors in a competitive scenario. In other words, what is recorded is only from the undamaged portion of the original number of risk factors.
Ductile Titanium Alloy with Low Poisson's Ratio
Hao, Y. L.; Li, S. J.; Sun, B. B.; Sui, M. L.; Yang, R.
2007-05-25
We report a ductile β-type titanium alloy with body-centered cubic (bcc) crystal structure having a low Poisson's ratio of 0.14. The almost identical ultralow bulk and shear moduli of ≈24 GPa combined with an ultrahigh strength of ≈0.9 GPa contribute to easy crystal distortion due to much-weakened chemical bonding of atoms in the crystal, leading to significant elastic softening in tension and elastic hardening in compression. The peculiar elastic and plastic deformation behaviors of the alloy are interpreted as a result of approaching the elastic limit of the bcc crystal under applied stress.
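For an isotropic solid, ν follows from the bulk and shear moduli via ν = (3K − 2G) / (2(3K + G)); plugging in the reported ≈24 GPa for both moduli reproduces roughly the quoted low value:

```python
def poisson_ratio(K, G):
    """Isotropic elasticity: nu = (3K - 2G) / (2 (3K + G))."""
    return (3 * K - 2 * G) / (2 * (3 * K + G))

# Bulk and shear moduli both ~24 GPa, as reported for the alloy
nu = poisson_ratio(24.0, 24.0)   # -> 0.125, near the measured 0.14
```

With K = G the formula gives exactly 1/8; the small gap to the measured 0.14 reflects the moduli being only approximately equal.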
A technique for determining the Poisson's ratio of thin films
Krulevitch, P.
1996-04-18
The theory and experimental approach for a new technique used to determine the Poisson's ratio of thin films are presented. The method involves taking the ratio of curvatures of cantilever beams and plates micromachined out of the film of interest. Curvature is induced by a through-thickness variation in residual stress, or by depositing a thin film under residual stress onto the beams and plates. This approach is made practical by the fact that the two curvatures are the only required experimental parameters, and small calibration errors cancel when the ratio is taken. To confirm the accuracy of the technique, it was tested on a 2.5 µm thick film of single-crystal silicon. Micromachined beams 1 mm long by 100 µm wide and plates 700 µm by 700 µm were coated with 35 nm of gold and the curvatures were measured with a scanning optical profilometer. For the orientation tested ([100] film normal, [011] beam axis, [01̄1] contraction direction) silicon's Poisson's ratio is 0.064, and the measured result was 0.066 ± 0.043. The uncertainty in this technique is due primarily to variation in the measured curvatures, and should range from ± 0.02 to 0.04 with proper measurement technique.
Lattice sums arising from the Poisson equation
NASA Astrophysics Data System (ADS)
Bailey, D. H.; Borwein, J. M.; Crandall, R. E.; Zucker, I. J.
2013-03-01
In recent times, attention has been directed to the problem of solving the Poisson equation, either in engineering scenarios (computational) or in regard to crystal structure (theoretical). Herein we study a class of lattice sums that amount to Poisson solutions, namely the n-dimensional forms $$\phi_n(r_1,\dots,r_n) = \frac{1}{\pi^2} \sum_{m_1,\dots,m_n\ \mathrm{odd}} \frac{e^{i\pi(m_1 r_1 + \cdots + m_n r_n)}}{m_1^2 + \cdots + m_n^2}.$$ By virtue of striking connections with Jacobi ϑ-function values, we are able to develop new closed forms for certain values of the coordinates $r_k$, and extend such analysis to similar lattice sums. A primary result is that for rational x, y, the natural potential $\phi_2(x,y)$ is $\frac{1}{\pi}\log A$ where A is an algebraic number. Various extensions and explicit evaluations are given. Such work is made possible by number-theoretical analysis, symbolic computation and experimental mathematics, including extensive numerical computations using up to 20 000-digit arithmetic.
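A brute-force numerical check of the two-dimensional sum is straightforward: folding the sum over positive and negative odd integers pairs the complex exponentials into cosines, giving a real double series that can be truncated (convergence is slow and conditional; this is only a sanity check, not how the paper's 20 000-digit evaluations are done):

```python
import numpy as np

def phi2(x, y, M=401):
    """Truncated lattice sum phi_2(x, y).

    Summing over positive and negative odd m1, m2 pairs the exponentials
    into cosines:
    phi_2 = (4/pi^2) * sum_{odd m1,m2 >= 1}
            cos(pi m1 x) cos(pi m2 y) / (m1^2 + m2^2)
    """
    m = np.arange(1, M + 1, 2, dtype=float)
    cx = np.cos(np.pi * m * x)
    cy = np.cos(np.pi * m * y)
    denom = m[:, None] ** 2 + m[None, :] ** 2
    return (4.0 / np.pi ** 2) * (cx[:, None] * cy[None, :] / denom).sum()
```

For instance, phi2(0.5, 0.5) vanishes term by term, since cos(πm/2) = 0 for every odd m.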
A Poisson model for random multigraphs
Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth
2010-01-01
Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature-curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
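A minimal sketch of the estimation problem described above, assuming the model E[i, j] ~ Poisson(p_i p_j): the score equations reduce to a fixed point in the propensities, which damped iteration solves. The function names and the damping choice are ours, not the authors' published algorithm:

```python
import numpy as np

def fit_propensities(E, iters=2000):
    """Maximum-likelihood propensities for a Poisson multigraph.

    E[i, j] holds the edge count between nodes i and j (symmetric, zero
    diagonal), modelled as Poisson with mean p[i] * p[j].  Setting the
    score to zero gives the fixed point p[i] = d[i] / (sum(p) - p[i]),
    with d the vector of node degrees; we solve it by damped iteration.
    """
    d = E.sum(axis=1).astype(float)
    p = np.full(len(d), max(np.sqrt(d.mean()), 1e-6))
    for _ in range(iters):
        p = 0.5 * (p + d / (p.sum() - p))
    return p

# Simulate from known propensities, then refit (a consistency check)
rng = np.random.default_rng(7)
true_p = np.array([1.0, 2.0, 3.0, 4.0])
lam = np.outer(true_p, true_p)
np.fill_diagonal(lam, 0.0)
E = np.triu(rng.poisson(lam), 1)
E = E + E.T
p_hat = fit_propensities(E)
```

At convergence each node's observed degree equals its expected degree under the fitted model, which is exactly the score equation.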
An empirical Bayesian and Buhlmann approach with non-homogeneous Poisson process
NASA Astrophysics Data System (ADS)
Noviyanti, Lienda
2015-12-01
All general insurance companies in Indonesia have to adjust their current premium rates according to the maximum and minimum limit rates in the new regulation established by the Financial Services Authority (Otoritas Jasa Keuangan / OJK). In this research, we estimated premium rates by means of the Bayesian and the Buhlmann approach using historical claim frequency and claim severity in five risk groups. We assumed a Poisson-distributed claim frequency and a Normally distributed claim severity. In particular, we used a non-homogeneous Poisson process for estimating the parameters of claim frequency. We found that the estimated premium rates are higher than the actual current rate. With regard to the OJK upper and lower limit rates, the estimates among the five risk groups are varied; some are in the interval and some are out of the interval.
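The Buhlmann piece of such an approach reduces to the classical credibility-weighted premium; the numbers below are purely hypothetical:

```python
def buhlmann_premium(x_bar, n, mu, epv, vhm):
    """Buhlmann credibility premium: Z * x_bar + (1 - Z) * mu, with
    Z = n / (n + k) and k = EPV / VHM (expected process variance over
    the variance of hypothetical means)."""
    k = epv / vhm
    z = n / (n + k)
    return z * x_bar + (1 - z) * mu

# Hypothetical group: 5 years of experience averaging 120 against a
# portfolio mean of 100, EPV = 500, VHM = 100  ->  k = 5, Z = 0.5
premium = buhlmann_premium(120.0, 5, 100.0, 500.0, 100.0)   # -> 110.0
```

The credibility factor Z interpolates between the group's own experience and the portfolio mean, exactly the blend the abstract's premium estimates rely on.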
Method of Poisson's ratio imaging within a material part
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1996-01-01
The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.
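With longitudinal and shear wave data in hand, the per-point computation is the standard isotropic relation between ν and the two wave speeds (the imaging step then maps these values over the part); the aluminum-like velocities below are illustrative:

```python
def poisson_from_velocities(vl, vs):
    """nu from longitudinal (vl) and shear (vs) ultrasonic wave speeds
    in an isotropic solid: nu = (vl^2 - 2 vs^2) / (2 (vl^2 - vs^2))."""
    return (vl**2 - 2 * vs**2) / (2 * (vl**2 - vs**2))

nu = poisson_from_velocities(6320.0, 3130.0)   # aluminum-like speeds, m/s
```

These speeds give ν ≈ 0.34, close to the textbook value for aluminum.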
A Method of Poisson's Ratio Imaging Within a Material Part
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1994-01-01
The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.
Surface reconstruction through Poisson disk sampling.
Hou, Wenguang; Xu, Zekai; Qin, Nannan; Xiong, Dongping; Ding, Mingyue
2015-01-01
This paper aims to generate the approximate Voronoi diagram in the geodesic metric for some unbiased samples selected from the original points. The mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy circumvents the difficulty that the geodesic distances among neighboring points are sensitive to the definition of nearest neighbors. The reconstructed model is a level-of-detail representation of the original points. Hence, our main motivation is to deal with redundant scattered points. In implementation, Poisson disk sampling is used to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy in selecting seeds. The behavior of this method is investigated and accuracy evaluations are performed. Experimental results show the proposed method is reliable and effective. PMID:25915744
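Poisson disk selection of seeds can be sketched with naive dart throwing (quadratic time; real implementations use spatial grids), which is enough to show the minimum-separation property:

```python
import numpy as np

def poisson_disk(points, r):
    """Naive dart-throwing Poisson disk selection: accept each candidate
    in turn only if it lies at least r from every point accepted so far."""
    accepted = []
    for p in points:
        if all(np.linalg.norm(p - q) >= r for q in accepted):
            accepted.append(p)
    return np.array(accepted)

rng = np.random.default_rng(3)
cloud = rng.random((500, 2))      # stand-in for a dense scanned point set
seeds = poisson_disk(cloud, 0.1)
```

The accepted seeds are a strict subset of the input with guaranteed minimum spacing r, which is what makes them suitable as Voronoi sites for a level-of-detail mesh.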
Stochastic search with Poisson and deterministic resetting
NASA Astrophysics Data System (ADS)
Bhat, Uttam; De Bacco, Caterina; Redner, S.
2016-08-01
We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers that all start at x_0 seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate r, or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T as 1/T → 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than Poisson resetting.
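The existence of a non-zero optimal reset rate can be seen even in a crude one-dimensional discrete-time analogue (a single searcher, reset with probability r per step); the parameters are illustrative, not taken from the paper:

```python
import random

def mfpt_with_resetting(x0, r, trials=400, seed=11, max_steps=100_000):
    """Mean first-passage time of a 1D unbiased random walk from x0 to 0,
    resetting to x0 with probability r at each step (a discrete-time
    analogue of Poisson resetting with rate r)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, t = x0, 0
        while x != 0 and t < max_steps:
            if rng.random() < r:
                x = x0                       # reset to the start
            else:
                x += 1 if rng.random() < 0.5 else -1
            t += 1
        total += t
    return total / trials

slow = mfpt_with_resetting(5, 0.0005)    # resets too rare
near_opt = mfpt_with_resetting(5, 0.05)  # near the optimal rate
too_fast = mfpt_with_resetting(5, 0.5)   # resets before reaching target
```

Both very rare and very frequent resetting lengthen the search, so the mean first-passage time is minimized at an intermediate rate, mirroring the optimal reset rate described above.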
Efficient information transfer by Poisson neurons.
Kostal, Lubomir; Shinomoto, Shigeru
2016-06-01
Recently, it has been suggested that certain neurons with Poissonian spiking statistics may communicate by discontinuously switching between two levels of firing intensity. Such a situation resembles in many ways the optimal information transmission protocol for the continuous-time Poisson channel known from information theory. In this contribution we employ the classical information-theoretic results to analyze the efficiency of such a transmission from different perspectives, emphasising the neurobiological viewpoint. We address both the ultimate limits, in terms of the information capacity under metabolic cost constraints, and the achievable bounds on performance at rates below capacity with fixed decoding error probability. In doing so we discuss optimal values of experimentally measurable quantities that can be compared with the actual neuronal recordings in a future effort. PMID:27106184
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
Tharrington, Arnold N.
2015-09-09
The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.
A Cartesian grid embedded boundary method for Poisson's equation on irregular domains
Johansen, H.; Colella, P.
1997-01-31
The authors present a numerical method for solving Poisson's equation, with variable coefficients and Dirichlet boundary conditions, on two-dimensional regions. The approach uses a finite-volume discretization, which embeds the domain in a regular Cartesian grid. They treat the solution as a cell-centered quantity, even when those centers are outside the domain. Cells that contain a portion of the domain boundary use conservation differencing of second-order accurate fluxes, on each cell volume. The calculation of the boundary flux ensures that the conditioning of the matrix is relatively unaffected by small cell volumes. This allows them to use multi-grid iterations with a simple point relaxation strategy. They have combined this with an adaptive mesh refinement (AMR) procedure. They provide evidence that the algorithm is second-order accurate on various exact solutions, and compare the adaptive and non-adaptive calculations.
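For contrast with the embedded-boundary scheme, here is the simplest possible relaxation solver for Poisson's equation on a regular square grid with Dirichlet conditions (Jacobi point relaxation, no cut cells, no multigrid), verified against a smooth exact solution:

```python
import numpy as np

n = 21                                   # grid points per side
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi**2 * exact              # so that laplacian(exact) = f

u = np.zeros((n, n))                     # zero Dirichlet boundary values
for _ in range(4000):                    # Jacobi point relaxation
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2]
                            - h * h * f[1:-1, 1:-1])

err = np.abs(u - exact).max()            # O(h^2) discretization error
```

Point relaxation alone converges slowly (which is why the paper wraps it in multigrid), but on this small grid 4000 sweeps reach the discretization-error floor.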
Harry, H.H.
1988-03-11
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus. 3 figs.
Harry, Herbert H.
1989-01-01
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.
Solves Poisson's Equation in Axisymmetric Geometry on a Rectangular Mesh
1996-09-10
DATHETA4.0 computes the magnetostatic field produced by multiple point current sources in the presence of perfect conductors in axisymmetric geometry. DATHETA4.0 has an interactive user interface and solves Poisson's equation using the ADI method on a rectangular finite-difference mesh. DATHETA4.0 includes models specific to applied-B ion diodes.
Ehrsam, Eric; Kallini, Joseph R.; Lebas, Damien; Modiano, Philippe; Cotten, Hervé
2016-01-01
Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of the host immune response. Although 10 to 35 percent of cases of cutaneous melanomas may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis on a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418
Classification and Casimir Invariants of Lie--Poisson Brackets
NASA Astrophysics Data System (ADS)
Thiffeault, Jean-Luc; Morrison, P. J.
1997-11-01
Several types of fluid and plasma systems admit a Hamiltonian formulation using Lie-Poisson brackets, including Euler's equation for fluids, reduced MHD for plasmas, and others. Lie-Poisson brackets, which are examples of noncanonical Poisson brackets, consist of an inner product, ⟨ , ⟩, and the bracket, [ , ], of a Lie algebra, which we call the inner bracket. The Lie-Poisson bracket is then {F, G} = ⟨Ψ, [F_Ψ, G_Ψ]⟩. Here Ψ is a vector of field variables, and subscripts denote functional differentiation. The algebras corresponding to the inner brackets are algebras by extension: they are defined for multiple field variables from the bracket for a single variable. We derive a classification scheme for all such brackets using cohomology theory for Lie algebras. We then derive the Casimir invariants for the classes of Lie-Poisson brackets where the inner bracket is of canonical type.
Quasi-likelihood estimation for relative risk regression models.
Carter, Rickey E; Lipsitz, Stuart R; Tilley, Barbara C
2005-01-01
For a prospective randomized clinical trial with two groups, the relative risk can be used as a measure of treatment effect and is directly interpretable as the ratio of success probabilities in the new treatment group versus the placebo group. For a prospective study with many covariates and a binary outcome (success or failure), relative risk regression may be of interest. If we model the log of the success probability as a linear function of covariates, the regression coefficients are log-relative risks. However, using such a log-linear model with a Bernoulli likelihood can lead to convergence problems in the Newton-Raphson algorithm. This is likely to occur when the success probabilities are close to one. A constrained likelihood method proposed by Wacholder (1986, American Journal of Epidemiology 123, 174-184) also has convergence problems. We propose a quasi-likelihood method of moments technique in which we naively assume the Bernoulli outcome is Poisson, with the mean (success probability) following a log-linear model. We use the Poisson maximum likelihood equations to estimate the regression coefficients without constraints. Using method of moment ideas, one can show that the estimates using the Poisson likelihood will be consistent and asymptotically normal. We apply these methods to a double-blinded randomized trial in primary biliary cirrhosis of the liver (Markus et al., 1989, New England Journal of Medicine 320, 1709-1713). PMID:15618526
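The proposed trick (fitting the Bernoulli outcome with a Poisson likelihood and a log link) can be sketched with a small Newton/IRLS routine; the simulated trial below, with true relative risk 2, is our own illustration, not the cirrhosis data:

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Newton/IRLS for a Poisson GLM with log link.  Applied to a binary
    outcome, the fitted coefficients are consistent estimates of log
    relative risks (the quasi-likelihood idea in the abstract)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        # Score X'(y - mu), Fisher information X' diag(mu) X
        beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(5)
n = 5000
treat = rng.integers(0, 2, size=n)
p = 0.2 * np.where(treat == 1, 2.0, 1.0)     # true relative risk = 2
y = rng.binomial(1, p)
X = np.column_stack([np.ones(n), treat])
beta = poisson_irls(X, y.astype(float))
rr = np.exp(beta[1])                         # estimated relative risk
```

The point estimates are consistent; in practice the model-based Poisson standard errors should be replaced by robust (sandwich) ones, since the outcome is Bernoulli rather than Poisson.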
Poisson-Boltzmann-Nernst-Planck model
NASA Astrophysics Data System (ADS)
Zheng, Qiong; Wei, Guo-Wei
2011-05-01
The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external
Poisson-Boltzmann-Nernst-Planck model
Zheng Qiong; Wei Guowei
2011-05-21
The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanations and increasingly quantitative predictions of experimental measurements for ion transport problems in many areas, such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation depend directly on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP model can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model that reduces the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including Dirichlet-to-Neumann mapping, the matched interface and boundary method, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external
Generalized HPC method for the Poisson equation
NASA Astrophysics Data System (ADS)
Bardazzi, A.; Lugni, C.; Antuono, M.; Graziani, G.; Faltinsen, O. M.
2015-10-01
An efficient and innovative numerical algorithm based on the use of Harmonic Polynomials on each Cell of the computational domain (HPC method) has recently been proposed by Shao and Faltinsen (2014) [1] to solve boundary value problems governed by the Laplace equation. Here, we extend the HPC method to the solution of non-homogeneous elliptic boundary value problems. The homogeneous solution, i.e. of the Laplace equation, is represented through harmonic polynomials, while the particular solution of the Poisson equation is provided by a bi-quadratic function. This scheme has been called the generalized HPC method. The present algorithm, accurate up to fourth order, proved to be efficient, i.e. easy to implement and of low computational cost, for the solution of two-dimensional elliptic boundary value problems. Furthermore, it provides an analytical representation of the solution within each computational stencil, which allows its coupling with existing numerical algorithms within an efficient domain-decomposition strategy or within an adaptive mesh refinement algorithm.
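The core representational idea, approximating the harmonic (Laplace) part of the solution by an expansion in harmonic polynomials fitted to boundary data, can be illustrated in a minimal Python sketch. This is not the authors' algorithm; the basis, sample points and test function below are all chosen for illustration only:

```python
import numpy as np

# 2D harmonic polynomial basis (each entry satisfies Laplace's equation):
# 1, x, y, xy, x^2 - y^2, x^3 - 3xy^2, 3x^2y - y^3
def harmonic_basis(x, y):
    return np.column_stack([
        np.ones_like(x), x, y, x * y, x**2 - y**2,
        x**3 - 3 * x * y**2, 3 * x**2 * y - y**3,
    ])

# Boundary samples (unit circle) of a known harmonic function
# u(x, y) = x^2 - y^2 + 2y, which lies in the span of the basis.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
xb, yb = np.cos(theta), np.sin(theta)
ub = xb**2 - yb**2 + 2 * yb

# Least-squares fit of the harmonic expansion to the boundary data
coef, *_ = np.linalg.lstsq(harmonic_basis(xb, yb), ub, rcond=None)

# The fitted expansion reproduces u at interior points as well
ui = float((harmonic_basis(np.array([0.3]), np.array([-0.2])) @ coef)[0])
print(ui)   # -0.35 (= 0.09 - 0.04 - 0.40)
```

In the actual HPC method such local harmonic representations are built per cell and coupled through the grid; the sketch only shows why a harmonic-polynomial expansion determined from boundary values is an analytical representation of the interior solution.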
Causal Poisson bracket via deformation quantization
NASA Astrophysics Data System (ADS)
Berra-Montiel, Jasel; Molgado, Alberto; Palacios-García, César D.
2016-06-01
Starting with the well-defined product of quantum fields at two spacetime points, we explore an associated Poisson structure for classical field theories within the deformation quantization formalism. We realize that the induced star-product is naturally related to the standard Moyal product through an appropriate causal Green’s functions connecting points in the space of classical solutions to the equations of motion. Our results resemble the Peierls-DeWitt bracket that has been analyzed in the multisymplectic context. Once our star-product is defined, we are able to apply the Wigner-Weyl map in order to introduce a generalized version of Wick’s theorem. Finally, we include some examples to explicitly test our method: the real scalar field, the bosonic string and a physically motivated nonlinear particle model. For the field theoretic models, we have encountered causal generalizations of the creation/annihilation relations, and also a causal generalization of the Virasoro algebra for the bosonic string. For the nonlinear particle case, we use the approximate solution in terms of the Green’s function, in order to construct a well-behaved causal bracket.
Rethinking Poisson-based statistics for ground water quality monitoring
Loftis, J.C.; Iyer, H.K.; Baker, H.J.
1999-03-01
Both the US Environmental Protection Agency (EPA) and the American Society for Testing and Materials (ASTM) provide guidance for selecting statistical procedures for ground water detection monitoring at Resource Conservation and Recovery Act (RCRA) solid and hazardous waste facilities. The procedures recommended for dealing with large numbers of nondetects, as may often be found in data for volatile organic compounds (VOCs), include, but are not limited to, Poisson prediction limits and Poisson tolerance limits. However, many of the proposed applications of the Poisson model are inappropriate. The development and application of the Poisson-based methods are explored for two types of data, counts of analytical hits and actual concentration measurements. Each of these two applications is explored along two lines of reasoning, a first-principles argument and a simple empirical fit. The application of Poisson-based methods to counts of analytical hits, including simultaneous consideration of multiple VOCs, appears to have merit from both a first principles and an empirical standpoint. On the other hand, the Poisson distribution is not appropriate for modeling concentration data, primarily because the variance of the distribution does not scale appropriately with changing units of measurement. Tolerance and prediction limits based on the Poisson distribution are not scale invariant. By changing the units of observation in example problems drawn from EPA guidance, use of the Poisson-based tolerance and prediction limits can result in significant errors. In short, neither the Poisson distribution nor associated tolerance or prediction limits should be used with concentration data. EPA guidance does present, however, other, more appropriate, methods for dealing with concentration data in which the number of nondetects is large. These include nonparametric tolerance and prediction limits and a test of proportions based on the binomial distribution.
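The scale-invariance objection is easy to verify numerically: a Poisson variable has variance equal to its mean, but re-expressing the same measurements in different units multiplies the mean by c and the variance by c², so the rescaled data can no longer be Poisson. A minimal sketch, with an arbitrary rate and unit conversion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Concentration-like data, nominally in mg/L, simulated as true Poisson
x_mg = rng.poisson(lam=4.0, size=100_000).astype(float)

# For a Poisson variable, mean == variance (in these units)
print(x_mg.mean(), x_mg.var())   # both ≈ 4.0

# Re-express the same measurements in µg/L (multiply by 1000)
x_ug = x_mg * 1000.0

# The mean scales by 1000 but the variance by 1000^2, so the rescaled
# data cannot be Poisson: mean != variance in the new units.
print(x_ug.mean())   # ≈ 4000
print(x_ug.var())    # ≈ 4,000,000, far from 4000
```

Any tolerance or prediction limit built on the Poisson assumption therefore changes its meaning with the choice of units, which is the paper's argument against applying it to concentration data.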
Intertime jump statistics of state-dependent Poisson processes.
Daly, Edoardo; Porporato, Amilcare
2007-01-01
A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
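A hedged sketch of the kind of process described, with a toy state-dependent rate chosen purely for illustration (exponential decay of the state between jumps, unit jumps, and Ogata-style thinning for the simulation); it is not one of the paper's neuron or hydrological models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: state x decays as dx/dt = -k*x between jumps, and jumps of
# size 1 occur with state-dependent rate lam(x) = lam0 + a*x.
k, lam0, a = 1.0, 0.5, 0.5        # a < k keeps the process stable
lam_max = lam0 + a * 20.0         # generous rate bound for thinning

def simulate_interarrivals(n_jumps):
    x, t, t_last, gaps = 0.0, 0.0, 0.0, []
    while len(gaps) < n_jumps:
        dt = rng.exponential(1.0 / lam_max)           # candidate wait
        x *= np.exp(-k * dt)                          # deterministic decay
        t += dt
        if rng.uniform() < (lam0 + a * x) / lam_max:  # accept w.p. lam/lam_max
            gaps.append(t - t_last)
            t_last, x = t, x + 1.0
    return np.array(gaps)

gaps = simulate_interarrivals(5000)

# A constant-rate Poisson process has exponential interarrival times, so
# its coefficient of variation std/mean is exactly 1; here the rate rises
# after each jump, jumps cluster, and the cv exceeds 1.
cv = gaps.std() / gaps.mean()
print(round(cv, 2))
```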
Poly-symplectic Groupoids and Poly-Poisson Structures
NASA Astrophysics Data System (ADS)
Martinez, Nicolas
2015-05-01
We introduce poly-symplectic groupoids, which are natural extensions of symplectic groupoids to the context of poly-symplectic geometry, and define poly-Poisson structures as their infinitesimal counterparts. We present equivalent descriptions of poly-Poisson structures, including one related with AV-Dirac structures. We also discuss symmetries and reduction in the setting of poly-symplectic groupoids and poly-Poisson structures, and use our viewpoint to revisit results and develop new aspects of the theory initiated in Iglesias et al. (Lett Math Phys 103:1103-1133, 2013).
On classification of discrete, scalar-valued Poisson brackets
NASA Astrophysics Data System (ADS)
Parodi, E.
2012-10-01
We address the problem of classifying discrete differential-geometric Poisson brackets (dDGPBs) of any fixed order on a target space of dimension 1. We prove that these Poisson brackets (PBs) are in one-to-one correspondence with the intersection points of certain projective hypersurfaces. In addition, they can be reduced to a cubic PB of the standard Volterra lattice by discrete Miura-type transformations. Finally, by improving a lattice consolidation procedure, we obtain new families of non-degenerate, vector-valued and first-order dDGPBs that can be considered in the framework of admissible Lie-Poisson group theory.
Improved Regression Calibration
ERIC Educational Resources Information Center
Skrondal, Anders; Kuha, Jouni
2012-01-01
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…
Prediction in Multiple Regression.
ERIC Educational Resources Information Center
Osborne, Jason W.
2000-01-01
Presents the concept of prediction via multiple regression (MR) and discusses the assumptions underlying multiple regression analyses. Also discusses shrinkage, cross-validation, and double cross-validation of prediction equations and describes how to calculate confidence intervals around individual predictions. (SLD)
Gerber, Samuel; Rubel, Oliver; Bremer, Peer-Timo; Pascucci, Valerio; Whitaker, Ross T.
2012-01-19
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.
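A one-dimensional caricature conveys the main idea of partition-based regression: split the domain at an extremum of the function so that each piece is monotone, then fit one linear model per piece. The sketch below hard-codes the split that the Morse-Smale machinery would find automatically, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data with a single minimum at x = 0
x = np.linspace(-1, 1, 400)
y = np.abs(x) + rng.normal(scale=0.05, size=x.size)

def fit_line(xs, ys):
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef  # (slope, intercept)

# Partition at the (here, known) minimum; one linear model per piece
left, right = x < 0, x >= 0
sl, il = fit_line(x[left], y[left])
sr, ir = fit_line(x[right], y[right])
print(round(sl, 1), round(sr, 1))   # ≈ -1.0 and 1.0

# Piecewise-linear fit vs a single global line
pred_pw = np.where(left, sl * x + il, sr * x + ir)
mse_pw = np.mean((y - pred_pw) ** 2)
sg, ig = fit_line(x, y)
mse_global = np.mean((y - (sg * x + ig)) ** 2)
print(mse_pw < mse_global)   # True
```

The per-piece slopes are directly interpretable, which is the property the paper emphasizes for Morse-Smale segmentations in higher dimensions.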
Vlasov-Poisson in 1D: waterbags
NASA Astrophysics Data System (ADS)
Colombi, Stéphane; Touma, Jihad
2014-07-01
We revisit, in one dimension, the waterbag method for solving the Vlasov-Poisson equations numerically. In this approach, the phase-space distribution function f(x, v) is initially sampled by an ensemble of patches, the waterbags, where f is assumed to be constant. As a consequence of Liouville's theorem, one needs only to follow the evolution of the borders of these waterbags, which can be done by employing an oriented, self-adaptive polygon tracing the isocontours of f. This method, which is entropy conserving in essence, is very accurate and can trace non-linear instabilities very well, as illustrated by specific examples. As an application of the method, we generate an ensemble of single-waterbag simulations with decreasing thickness to perform a convergence study to the cold case. Our measurements show that the system relaxes to a steady state where the gravitational potential profile is a power law of slowly varying index β, with β close to 3/2 as found in the literature. However, detailed analysis of the properties of the gravitational potential shows that at the centre, β > 1.54. Moreover, our measurements are consistent with the value β = 8/5 = 1.6 that can be derived analytically by assuming that the average of the phase-space density per energy level obtained at crossing times is conserved during the mixing phase. These results are incompatible with the logarithmic slope of the projected density profile β - 2 ≃ -0.47 obtained recently by Schulz et al. using an N-body technique. This again casts strong doubt on the capability of N-body techniques to converge to the correct steady state expected in the continuous limit.
Doubly stochastic Poisson processes in artificial neural learning.
Card, H C
1998-01-01
This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.
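The defining feature of a doubly stochastic (Cox) Poisson process, a rate that is itself random, can be demonstrated in a few lines: mixing over the rate inflates the count variance above the mean. The gamma mixing distribution below is an arbitrary illustrative choice (it happens to yield a negative binomial marginal), not a model from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Plain Poisson: fixed rate 5, so mean == variance
plain = rng.poisson(5.0, size=n)

# Doubly stochastic (Cox): draw a random rate per trial, then a Poisson
# count at that rate. Gamma(shape=5, scale=1) keeps the mean rate at 5.
rates = rng.gamma(shape=5.0, scale=1.0, size=n)
cox = rng.poisson(rates)

print(plain.mean(), plain.var())  # ≈ 5, 5
# Law of total variance: Var = E[rate] + Var[rate] = 5 + 5 = 10
print(cox.mean(), cox.var())      # ≈ 5, 10
```

This mean-variance gap is exactly the overdispersion signature that distinguishes doubly stochastic spiking statistics from plain Poisson firing.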
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
Evolution of Fermionic Systems as an Expectation over Poisson Processes
NASA Astrophysics Data System (ADS)
Beccaria, M.; Presilla, C.; de Angelis, G. F.; Jona-Lasinio, G.
We derive an exact probabilistic representation for the evolution of a Hubbard model with site- and spin-dependent hopping coefficients and site-dependent interactions in terms of an associated stochastic dynamics of a collection of Poisson processes.
Negative Poisson's ratios for extreme states of matter
Baughman; Dantas; Stafstrom; Zakhidov; Mitchell; Dubin
2000-06-16
Negative Poisson's ratios are predicted for body-centered-cubic phases that likely exist in white dwarf cores and neutron star outer crusts, as well as those found for vacuumlike ion crystals, plasma dust crystals, and colloidal crystals (including certain virus crystals). The existence of this counterintuitive property, which means that a material laterally expands when stretched, is experimentally demonstrated for very low density crystals of trapped ions. At very high densities, the large predicted negative and positive Poisson's ratios might be important for understanding the asteroseismology of neutron stars and white dwarfs and the effect of stellar stresses on nuclear reaction rates. Giant Poisson's ratios are both predicted and observed for highly strained coulombic photonic crystals, suggesting possible applications of large, tunable Poisson's ratios for photonic crystal devices. PMID:10856209
Simulating the Effect of Poisson Ratio on Metallic Glass Properties
Morris, James R; Aga, Rachel S; Egami, Takeshi; Levashov, Valentin A.
2009-01-01
Recent work has shown that many metallic glass properties correlate with the Poisson ratio of the glass. We have developed a new model for simulating the atomistic behavior of liquids and glasses that allows us to change the Poisson ratio, while keeping the crystalline phase cohesive energy, lattice constant, and bulk modulus fixed. A number of liquid and glass properties are shown to be directly affected by the Poisson ratio. An increasing Poisson ratio stabilizes the liquid structure relative to the crystal phase, as indicated by a significantly lower melting temperature and by a lower enthalpy of the liquid phase. The liquids clearly exhibit two changes in behavior: one at low temperatures, associated with the conventional glass transition Tg, and a second, higher-temperature change associated with the shear properties of the liquids. This second crossover is accompanied by a characteristic, measurable change in the liquid structure.
Almost Poisson brackets for nonholonomic systems on Lie groups
NASA Astrophysics Data System (ADS)
Garcia-Naranjo, Luis Constantino
We present a geometric construction of almost Poisson brackets for nonholonomic mechanical systems whose configuration space is a Lie group G. We study the so-called LL and LR systems where the kinetic energy defines a left invariant metric on G and the constraints are invariant with respect to left (respectively right) translation on G. For LL systems, the equations on the momentum phase space, T*G , can be left translated onto g *, the dual space of the Lie algebra g . We show that the reduced equations on g * can be cast in Poisson form with respect to an almost Poisson bracket that is obtained by projecting the standard Lie-Poisson bracket onto the constraint space. For LR systems we use ideas of semidirect product reduction to transfer the equations on T*G into the dual Lie algebra, s *, of a semidirect product. This provides a natural Lie algebraic setting for the equations of motion commonly found in the literature. We show that these equations can also be cast in Poisson form with respect to an almost Poisson bracket that is obtained by projecting the Lie-Poisson structure on s * onto a constraint submanifold. In both cases the constraint functions are Casimirs of the bracket and are satisfied automatically. Our construction is a natural generalization of the classical ideas of Lie-Poisson and semidirect product reduction to the nonholonomic case. It also sets a convenient stage for the study of Hamiltonization of certain nonholonomic systems. Our examples include the Suslov and the Veselova problems of constrained motion of a rigid body, and the Chaplygin sleigh. In addition we study the almost Poisson reduction of the Chaplygin sphere. We show that the bracket given by Borisov and Mamaev in [7] is obtained by reducing a nonstandard almost Poisson bracket that is obtained by projecting a non-canonical bivector onto the constraint submanifold using the Lagrange-D'Alembert principle. The examples that we treat show that it is possible to cast the reduced
Nonlocal quadratic Poisson algebras, monodromy map, and Bogoyavlensky lattices
NASA Astrophysics Data System (ADS)
Suris, Yuri B.
1997-08-01
A new Lax representation for the Bogoyavlensky lattice is found and its r-matrix interpretation is elaborated. The r-matrix structure turns out to be related to a highly nonlocal quadratic Poisson structure on a direct sum of associative algebras. The theory of such nonlocal structures is developed and the Poisson property of the monodromy map is worked out in the most general situation. Some problems concerning the duality of Lax representations are raised.
Lamb wave propagation in negative Poisson's ratio composites
NASA Astrophysics Data System (ADS)
Remillat, Chrystel; Wilcox, Paul; Scarpa, Fabrizio
2008-03-01
Lamb wave propagation is evaluated for cross-ply laminate composites exhibiting a through-the-thickness negative Poisson's ratio. The laminates are mechanically modeled using the Classical Laminate Theory, while the propagation of Lamb waves is investigated using a combination of semi-analytical models and Finite Element time-stepping techniques. The auxetic laminates exhibit well-spaced bending, shear and symmetric fundamental modes, while featuring normal stresses for the A0 mode three times lower than in composite laminates with a positive Poisson's ratio.
Bicrossed products induced by Poisson vector fields and their integrability
NASA Astrophysics Data System (ADS)
Djiba, Samson Apourewagne; Wade, Aïssa
2016-01-01
First we show that, associated to any Poisson vector field E on a Poisson manifold (M,π), there is a canonical Lie algebroid structure on the first jet bundle J1M which depends only on the cohomology class of E. We then introduce the notion of a cosymplectic groupoid and discuss the integrability of the first jet bundle into a cosymplectic groupoid. Finally, we give applications to Atiyah classes and L∞-algebras.
Regression problems for magnitudes
NASA Astrophysics Data System (ADS)
Castellaro, S.; Mulargia, F.; Kagan, Y. Y.
2006-06-01
Least-squares linear regression is so popular that it is sometimes applied without checking whether its basic requirements are satisfied. In particular, in studying earthquake phenomena, the conditions (a) that the uncertainty on the independent variable is at least one order of magnitude smaller than that on the dependent variable, (b) that both data and uncertainties are normally distributed and (c) that the residual variance is constant are at times disregarded. This may easily lead to wrong results. As an alternative to least squares, when the ratio between errors on the independent and the dependent variable can be estimated, orthogonal regression can be applied. We test the performance of orthogonal regression in its general form against Gaussian and non-Gaussian data and error distributions and compare it with standard least-squares regression. General orthogonal regression is found to be superior or equal to standard least squares in all the cases investigated and its use is recommended. We also compare the performance of orthogonal regression versus standard regression when, as often happens in the literature, the ratio between errors on the independent and the dependent variables cannot be estimated and is arbitrarily set to 1. We apply these results to magnitude scale conversion, which is a common problem in seismology, with important implications in seismic hazard evaluation, and analyse it through specific tests. Our analysis concludes that the commonly used standard regression may induce systematic errors in magnitude conversion as high as 0.3-0.4 and, even more importantly, can introduce apparent catalogue incompleteness, as well as a heavy bias in estimates of the slope of the frequency-magnitude distributions. All this can be avoided by using general orthogonal regression in magnitude conversions.
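The attenuation effect that this abstract warns about, and the orthogonal-regression remedy when the error-variance ratio is known (here set to 1), can be reproduced with simulated data; the true slope, noise levels and sample size below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)

# True relation y = 1.0 * x, with comparable noise on BOTH variables,
# which is exactly the situation where ordinary least squares is biased.
n, sigma = 50_000, 0.5
x_true = rng.normal(size=n)
x = x_true + rng.normal(scale=sigma, size=n)
y = x_true + rng.normal(scale=sigma, size=n)

# Ordinary least-squares slope: attenuated toward zero by the x-error
slope_ols = np.cov(x, y)[0, 1] / np.var(x)

# Orthogonal regression with error ratio 1: the slope of the leading
# principal direction of the centred data cloud
X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
slope_orth = vt[0, 1] / vt[0, 0]

print(slope_ols)    # ≈ 1/(1 + sigma^2) = 0.8, biased low
print(slope_orth)   # ≈ 1.0, the true slope
```

With a 20% slope bias of this kind, a magnitude conversion law fitted by standard regression would systematically distort the converted magnitudes, which is the mechanism behind the 0.3-0.4 errors quoted above.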
Multivariate Regression with Calibration*
Liu, Han; Wang, Lie; Zhao, Tuo
2014-01-01
We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861
Metamorphic geodesic regression.
Hong, Yi; Joshi, Sarang; Sanchez, Mar; Styner, Martin; Niethammer, Marc
2012-01-01
We propose a metamorphic geodesic regression approach approximating spatial transformations for image time-series while simultaneously accounting for intensity changes. Such changes occur for example in magnetic resonance imaging (MRI) studies of the developing brain due to myelination. To simplify computations we propose an approximate metamorphic geodesic regression formulation that only requires pairwise computations of image metamorphoses. The approximated solution is an appropriately weighted average of initial momenta. To obtain initial momenta reliably, we develop a shooting method for image metamorphosis.
Comparing regression methods for the two-stage clonal expansion model of carcinogenesis.
Kaiser, J C; Heidenreich, W F
2004-11-15
In the statistical analysis of cohort data with risk estimation models, both Poisson and individual likelihood regressions are widely used methods of parameter estimation. In this paper, their performance has been tested with the biologically motivated two-stage clonal expansion (TSCE) model of carcinogenesis. To exclude inevitable uncertainties of existing data, cohorts with simple individual exposure history have been created by Monte Carlo simulation. To generate some similar properties of atomic bomb survivors and radon-exposed mine workers, both acute and protracted exposure patterns have been generated. Then the capacity of the two regression methods has been compared to retrieve a priori known model parameters from the simulated cohort data. For simple models with smooth hazard functions, the parameter estimates from both methods come close to their true values. However, for models with strongly discontinuous functions which are generated by the cell mutation process of transformation, the Poisson regression method fails to produce reliable estimates. This behaviour is explained by the construction of class averages during data stratification. Thereby, some indispensable information on the individual exposure history was destroyed. It could not be repaired by countermeasures such as the refinement of Poisson classes or a more adequate choice of Poisson groups. Although this choice might still exist we were unable to discover it. In contrast to this, the individual likelihood regression technique was found to work reliably for all considered versions of the TSCE model. PMID:15490436
An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.
ERIC Educational Resources Information Center
Bockenholt, Ulf
1999-01-01
Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…
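A minimal simulation of an INAR(1) Poisson process, using the standard binomial-thinning representation (the parameter values are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(5)

# INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, where "∘" is binomial thinning
# (each of the X_{t-1} counts survives with probability alpha) and
# eps_t ~ Poisson(lam). The stationary marginal is Poisson(lam/(1-alpha)).
alpha, lam, T = 0.6, 2.0, 100_000
x = np.empty(T, dtype=np.int64)
x[0] = rng.poisson(lam / (1 - alpha))
for t in range(1, T):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

print(x.mean())   # ≈ lam/(1-alpha) = 5.0

# The lag-1 autocorrelation of an INAR(1) process equals alpha, which is
# how the model captures time-dependent correlation between counts.
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(r1)         # ≈ 0.6
```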
Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra
NASA Astrophysics Data System (ADS)
Cho, Eun-Hee; Oh, Sei-Qwon
2016-07-01
We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition for A1 to be Poisson simple is obtained, and it is established that the Poisson endomorphisms of A1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.
Tarpey, Thaddeus; Petkova, Eva
2010-07-01
Finite mixture models have come to play a very prominent role in modelling data. The finite mixture model is predicated on the assumption that distinct latent groups exist in the population. The finite mixture model therefore is based on a categorical latent variable that distinguishes the different groups. Often in practice distinct sub-populations do not actually exist. For example, disease severity (e.g. depression) may vary continuously and therefore, a distinction of diseased and not-diseased may not be based on the existence of distinct sub-populations. Thus, what is needed is a generalization of the finite mixture's discrete latent predictor to a continuous latent predictor. We cast the finite mixture model as a regression model with a latent Bernoulli predictor. A latent regression model is proposed by replacing the discrete Bernoulli predictor by a continuous latent predictor with a beta distribution. Motivation for the latent regression model arises from applications where distinct latent classes do not exist, but instead individuals vary according to a continuous latent variable. The shapes of the beta density are very flexible and can approximate the discrete Bernoulli distribution. Examples and a simulation are provided to illustrate the latent regression model. In particular, the latent regression model is used to model placebo effect among drug treated subjects in a depression study. PMID:20625443
Semiparametric Regression Pursuit.
Huang, Jian; Wei, Fengrong; Ma, Shuangge
2012-10-01
The semiparametric partially linear model allows flexible modeling of covariate effects on the response variable in regression. It combines the flexibility of nonparametric regression and the parsimony of linear regression. The most important assumption in the existing methods for estimation in this model is that it is known a priori which covariates have a linear effect and which do not. However, in applied work, this is rarely known in advance. We consider the problem of estimation in the partially linear model without assuming a priori which covariates have linear effects. We propose a semiparametric regression pursuit method for identifying the covariates with a linear effect. Our proposed method is a penalized regression approach using a group minimax concave penalty. Under suitable conditions we show that the proposed approach is model-pursuit consistent, meaning that it can correctly determine which covariates have a linear effect and which do not with high probability. The performance of the proposed method is evaluated using simulation studies, which support our theoretical results. A real data example is used to illustrate the application of the proposed method. PMID:23559831
[Understanding logistic regression].
El Sanharawi, M; Naudet, F
2013-10-01
Logistic regression is one of the most common multivariate analysis models utilized in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors susceptible to influence it (explicative variables). The choice of explicative variables that should be included in the logistic regression model is based on prior knowledge of the disease physiopathology and the statistical association between the variable and the event, as measured by the odds ratio. The main steps for the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
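As the abstract notes, the association between each explicative variable and the event is measured by the odds ratio; for a binary exposure, the univariate logistic coefficient is exactly the log of the odds ratio from the 2x2 table. A minimal sketch with invented counts:

```python
# Hypothetical 2x2 table of exposure vs. disease occurrence (illustrative counts).
# In a univariate logistic regression with a binary exposure, exp(beta)
# equals the odds ratio computed from this table.
import math

exposed_cases, exposed_controls = 30, 70
unexposed_cases, unexposed_controls = 10, 90

odds_exposed = exposed_cases / exposed_controls
odds_unexposed = unexposed_cases / unexposed_controls
odds_ratio = odds_exposed / odds_unexposed      # (30*90)/(70*10) ~ 3.86
log_odds_ratio = math.log(odds_ratio)           # the logistic coefficient beta
```

In a multivariate model the same interpretation holds for each coefficient, adjusted for the other variables in the model.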
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
Practical Session: Logistic Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
An exercise is proposed to illustrate logistic regression. One investigates the different risk factors in the occurrence of coronary heart disease. It has been proposed in Chapter 5 of the book by D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science+Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
Electrostatic forces in the Poisson-Boltzmann systems.
Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray
2013-09-01
Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models. PMID:24028101
A spectral Poisson solver for kinetic plasma simulation
NASA Astrophysics Data System (ADS)
Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf
2011-10-01
Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.
Blocked Shape Memory Effect in Negative Poisson's Ratio Polymer Metamaterials.
Boba, Katarzyna; Bianchi, Matteo; McCombe, Greg; Gatt, Ruben; Griffin, Anselm C; Richardson, Robert M; Scarpa, Fabrizio; Hamerton, Ian; Grima, Joseph N
2016-08-10
We describe a new class of negative Poisson's ratio (NPR) open cell PU-PE foams produced by blocking the shape memory effect in the polymer. Contrary to classical NPR open cell thermoset and thermoplastic foams that return to their auxetic phase after reheating (and therefore limit their use in technological applications), this new class of cellular solids has a permanent negative Poisson's ratio behavior, generated through multiple shape memory (mSM) treatments that lead to a fixity of the topology of the cell foam. The mSM-NPR foams have Poisson's ratio values similar to the auxetic foams prior their return to the conventional phase, but compressive stress-strain curves similar to the ones of conventional foams. The results show that by manipulating the shape memory effect in polymer microstructures it is possible to obtain new classes of materials with unusual deformation mechanisms. PMID:27377708
Detection of Gaussian signals in Poisson-modulated interference.
Streit, R L
2000-10-01
Passive broadband detection of target signals by an array of hydrophones in the presence of multiple discrete interferers is analyzed under Gaussian statistics and low signal-to-noise ratio conditions. A nonhomogeneous Poisson-modulated interference process is used to model the ensemble of possible arrival directions of the discrete interferers. Closed-form expressions are derived for the recognition differential of the passive-sonar equation in the presence of Poisson-modulated interference. The interference-compensated recognition differential differs from the classical recognition differential by an additive positive term that depends on the interference-to-noise ratio, the directionality of the Poisson-modulated interference, and the array beam pattern.
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…
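The relationship estimation the installment explores can be sketched as ordinary least squares on a toy dataset (values invented for illustration):

```python
# Minimal least-squares fit of y = a + b*x: the slope is the covariance of
# x and y divided by the variance of x, and the line passes through the means.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx   # slope ~ 1.99, intercept ~ 0.09 for these toy data
```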
Modern Regression Discontinuity Analysis
ERIC Educational Resources Information Center
Bloom, Howard S.
2012-01-01
This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…
Multiple linear regression analysis
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
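The stepwise idea can be sketched (in Python rather than the program's FORTRAN IV, with a simplified greedy criterion): at each step, keep the candidate predictor whose one-variable fit most reduces the residual sum of squares. A full stepwise procedure refits the multivariate model and applies significance tests at the user's confidence level:

```python
# Greedy first step of forward selection: score each candidate predictor by
# the residual sum of squares of a simple linear fit, keep the best.
def fit_simple(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b      # intercept, slope

def rss(x, y, a, b):
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

y = [1.0, 2.0, 3.0, 4.0, 5.0]
candidates = {
    "x1": [1.1, 1.9, 3.2, 3.8, 5.0],   # strongly related to y
    "x2": [0.5, 0.4, 0.6, 0.5, 0.4],   # essentially noise
}
scores = {name: rss(x, y, *fit_simple(x, y)) for name, x in candidates.items()}
best = min(scores, key=scores.get)     # "x1" is selected first
```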
Mechanisms of neuroblastoma regression
Brodeur, Garrett M.; Bagatell, Rochelle
2014-01-01
Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179
Validation of the Poisson Stochastic Radiative Transfer Model
NASA Technical Reports Server (NTRS)
Zhuravleva, Tatiana; Marshak, Alexander
2004-01-01
A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it was shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.
H∞ filtering for stochastic systems driven by Poisson processes
NASA Astrophysics Data System (ADS)
Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya
2015-01-01
This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, such as the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of the stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.
Acoustic Poisson-like effect in periodic structures.
Titovich, Alexey S; Norris, Andrew N
2016-06-01
Redirection of acoustic energy by 90° is shown to be possible in an otherwise acoustically transparent sonic crystal. An unresponsive "deaf" antisymmetric mode is excited by matching Bragg scattering with a quadrupole scatterer resonance. The dynamic effect causes normal unidirectional wave motion to strongly couple to perpendicular motion, analogous to the quasi-static Poisson effect in solids. The Poisson-like effect is demonstrated using the first flexural resonance in cylindrical shells of elastic solids. Simulations for a finite array of acrylic shells that are impedance and index matched to water show dramatic acoustic energy redirection in an otherwise acoustically transparent medium. PMID:27369161
A Study of Poisson's Ratio in the Yield Region
NASA Technical Reports Server (NTRS)
Gerard, George; Wildhorn, Sorrel
1952-01-01
In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.
Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.
Zhang, Tingting; Kou, S C
2010-01-01
Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.
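A kernel intensity estimate of the kind discussed can be sketched as a Gaussian smoother over event times (illustrative fixed bandwidth, not the paper's data-driven bandwidth selector):

```python
# Gaussian-kernel estimate of a Poisson-process intensity from event times.
import math

def intensity(t, events, h):
    # Sum of Gaussian kernels of bandwidth h centered at the event times.
    c = 1.0 / (h * math.sqrt(2 * math.pi))
    return sum(c * math.exp(-0.5 * ((t - e) / h) ** 2) for e in events)

events = [0.2, 0.25, 0.3, 2.0]   # three clustered events and one stray event
peaked = intensity(0.25, events, h=0.2) > intensity(1.0, events, h=0.2)
```

The estimate is high where events cluster and low in gaps, which is how a time-varying (doubly stochastic) rate reveals itself.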
Ridge Regression Signal Processing
NASA Technical Reports Server (NTRS)
Kuhl, Mark R.
1990-01-01
The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
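The shrinkage at the heart of ridge regression can be sketched for a single centered predictor, where the ridge estimate is Sxy/(Sxx + k): as the ridge parameter k grows, the ordinary least-squares slope is pulled toward zero, which is what stabilizes estimates in the ill-conditioned (poor-geometry) situations described above:

```python
# Ridge vs. ordinary least-squares slope for one predictor (toy data).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
sxx = sum((x - mx) ** 2 for x in xs)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))

ols = sxy / sxx              # 2.0 for these data
ridge = sxy / (sxx + 1.0)    # ridge parameter k = 1 shrinks the slope
```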
Green's function enriched Poisson solver for electrostatics in many-particle systems
NASA Astrophysics Data System (ADS)
Sutmann, Godehard
2016-06-01
A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator-adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, therefore compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of the regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme, but the computational efficiency of higher order methods is found to be superior due to faster convergence to the exact result as a function of the charge support.
Subsonic Flow for the Multidimensional Euler-Poisson System
NASA Astrophysics Data System (ADS)
Bae, Myoungjean; Duan, Ben; Xie, Chunjing
2016-04-01
We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori {C^{1,α}} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain {C^{1,α}} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.
Using the Gamma-Poisson Model to Predict Library Circulations.
ERIC Educational Resources Information Center
Burrell, Quentin L.
1990-01-01
Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
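A sketch of the gamma-Poisson model (parameter values invented, not from the article): mixing Poisson counts over Gamma-distributed circulation rates yields a negative binomial distribution, whose variance exceeds its mean, so heavily and rarely circulating books coexist in one model:

```python
# Gamma-Poisson mixture: rates lambda ~ Gamma(shape=r, rate=s), counts
# K | lambda ~ Poisson(lambda). Marginally K is negative binomial.
import math

r, s = 2.0, 0.5                       # illustrative gamma shape and rate
mean_count = r / s                    # expected circulations per period: 4.0
var_count = (r / s) * (1 + 1 / s)     # overdispersed: variance > mean

def negbin_pmf(k, r, s):
    # P(K = k) for the gamma-Poisson mixture
    p = s / (1 + s)
    return (math.gamma(r + k) / (math.gamma(r) * math.factorial(k))
            * p ** r * (1 - p) ** k)
```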
Poisson processes on groups and Feynman path integrals
NASA Astrophysics Data System (ADS)
Combe, Ph.; Høegh-Krohn, R.; Rodriguez, R.; Sirugue, M.; Sirugue-Collin, M.
1980-10-01
We give an expression for the perturbed evolution of a free evolution by gentle, possibly velocity dependent, potential, in terms of the expectation with respect to a Poisson process on a group. Various applications are given in particular to usual quantum mechanics but also to Fermi and spin systems.
Vectorized multigrid Poisson solver for the CDC CYBER 205
NASA Technical Reports Server (NTRS)
Barkai, D.; Brandt, M. A.
1984-01-01
The full multigrid (FMG) method is applied to the two dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing of the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
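The full multigrid cycle is involved; this sketch shows only the Jacobi smoother that multigrid accelerates, applied to the two-dimensional Poisson equation -(u_xx + u_yy) = f with Dirichlet boundary conditions on a small uniform grid:

```python
# One Jacobi sweep for the 5-point discretization of the 2-D Poisson equation.
def jacobi_sweep(u, f, h):
    n = len(u)
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                + u[i][j - 1] + u[i][j + 1]
                                + h * h * f[i][j])
    return new

n, h = 17, 1.0 / 16
f = [[1.0] * n for _ in range(n)]   # constant source term
u = [[0.0] * n for _ in range(n)]   # zero boundary values and initial guess
for _ in range(200):
    u = jacobi_sweep(u, f, h)
```

Plain Jacobi needs many sweeps because long-wavelength error decays slowly; multigrid removes that error on coarser grids, and each sweep vectorizes naturally, which is what made it attractive on the CYBER 205.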
On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories
Klimčík, Ctirad
2015-12-15
We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.
The Poisson-Lognormal Model for Bibliometric/Scientometric Distributions.
ERIC Educational Resources Information Center
Stewart, John A.
1994-01-01
Illustrates that the Poisson-lognormal model provides good fits to a diverse set of distributions commonly studied in bibliometrics and scientometrics. Topics discussed include applications to the empirical data sets related to the laws of Lotka, Bradford, and Zipf; causal processes that could generate lognormal distributions; and implications for…
Some applications of the fractional Poisson probability distribution
Laskin, Nick
2009-11-15
Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers.
Poisson and Multinomial Mixture Models for Multivariate SIMS Image Segmentation
Willse, Alan R.; Tyler, Bonnie
2002-11-08
Multivariate statistical methods have been advocated for analysis of spectral images, such as those obtained with imaging time-of-flight secondary ion mass spectrometry (TOF-SIMS). TOF-SIMS images using total secondary ion counts or secondary ion counts at individual masses often fail to reveal all salient chemical patterns on the surface. Multivariate methods simultaneously analyze peak intensities at all masses. We propose multivariate methods based on Poisson and multinomial mixture models to segment SIMS images into chemically homogeneous regions. The Poisson mixture model is derived from the assumption that secondary ion counts at any mass in a chemically homogeneous region vary according to the Poisson distribution. The multinomial model is derived as a standardized Poisson mixture model, which is analogous to standardizing the data by dividing by total secondary ion counts. The methods are adapted for contextual image segmentation, allowing for spatial correlation of neighboring pixels. The methods are applied to 52 mass units of a SIMS image with known chemical components. The spectral profile and relative prevalence for each chemical phase are obtained from estimates of model parameters.
The Poisson Distribution: An Experimental Approach to Teaching Statistics
ERIC Educational Resources Information Center
Lafleur, Mimi S.; And Others
1972-01-01
Explains an experimental approach to teaching statistics to students who are essentially either non-science and non-mathematics majors or just beginning study of science. With every day examples, the article illustrates the method of teaching Poisson Distribution. (PS)
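An everyday example in the article's spirit: if typos occur at an average rate of 2 per page, the Poisson pmf gives the probability of exactly k typos on a page:

```python
# Poisson pmf: P(K = k) = lambda^k * exp(-lambda) / k!
import math

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

p_zero = poisson_pmf(0, 2.0)   # chance of a typo-free page: exp(-2)
```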
Wide-area traffic: The failure of Poisson modeling
Paxson, V.; Floyd, S.
1994-08-01
Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib [DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
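The burstiness diagnostic can be sketched via the coefficient of variation of interarrival times, which is 1 for a Poisson process and larger for bursty traffic (synthetic data below, not the paper's traces):

```python
# Compare interarrival variability of a Poisson process against a bursty
# mixture of short gaps (within bursts) and long gaps (between bursts).
import random

random.seed(1)
poisson_gaps = [random.expovariate(1.0) for _ in range(10000)]
bursty_gaps = [random.expovariate(10.0) if random.random() < 0.9
               else random.expovariate(0.05) for _ in range(10000)]

def cv(xs):
    # coefficient of variation: standard deviation over mean
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / m
```

For exponential interarrivals `cv` is close to 1; the bursty mixture produces a much larger value, the signature of the burstiness that Poisson models miss.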
On covariant Poisson brackets in classical field theory
Forger, Michael; Salles, Mário O.
2015-10-15
How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.
Void-containing materials with tailored Poisson's ratio
NASA Astrophysics Data System (ADS)
Goussev, Olga A.; Richner, Peter; Rozman, Michael G.; Gusev, Andrei A.
2000-10-01
Assuming square, hexagonal, and random packed arrays of nonoverlapping identical parallel cylindrical voids dispersed in an aluminum matrix, we have calculated numerically the concentration dependence of the transverse Poisson's ratios. It was shown that the transverse Poisson's ratio of the hexagonal and random packed arrays approached 1 upon increasing the concentration of voids while the ratio of the square packed array along the principal continuation directions approached 0. Experimental measurements were carried out on rectangular aluminum bricks with identical cylindrical holes drilled in square and hexagonal packed arrays. Experimental results were in good agreement with numerical predictions. We then demonstrated, based on the numerical and experimental results, that by varying the spatial arrangement of the holes and their volume fraction, one can design and manufacture voided materials with a tailored Poisson's ratio between 0 and 1. In practice, those with a high Poisson's ratio, i.e., close to 1, can be used to amplify the lateral responses of the structures while those with a low one, i.e., close to 0, can largely attenuate the lateral responses and can therefore be used in situations where stringent lateral stability is needed.
Conceição, Katiane S; Andrade, Marinho G; Louzada, Francisco
2013-09-01
In this paper, a Bayesian method for inference is developed for the zero-modified Poisson (ZMP) regression model. This model is very flexible for analyzing count data without requiring any information about inflation or deflation of zeros in the sample. A general class of prior densities based on an information matrix is considered for the model parameters. A sensitivity study to detect influential cases that can change the results is performed based on the Kullback-Leibler divergence. Simulation studies are presented in order to illustrate the performance of the developed methodology. Two real datasets on leptospirosis notification in Bahia State (Brazil) are analyzed using the proposed methodology for the ZMP model.
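A sketch of the zero-modified Poisson pmf (parameters illustrative, not estimates from the leptospirosis data): a point mass at zero is mixed with a Poisson body, accommodating both inflation and deflation of zeros:

```python
# Zero-modified Poisson pmf: pi0 > 0 inflates zeros, pi0 < 0 (within bounds)
# deflates them; the result remains a proper probability distribution.
import math

def zmp_pmf(k, lam, pi0):
    base = lam ** k * math.exp(-lam) / math.factorial(k)
    if k == 0:
        return pi0 + (1 - pi0) * base
    return (1 - pi0) * base
```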
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
Leffondré, Karen; Jager, Kitty J; Boucquemont, Julie; Stel, Vianda S; Heinze, Georg
2014-10-01
Regression models are being used to quantify the effect of an exposure on an outcome, while adjusting for potential confounders. While the type of regression model to be used is determined by the nature of the outcome variable, e.g. linear regression has to be applied for continuous outcome variables, all regression models can handle any kind of exposure variables. However, some fundamentals of representation of the exposure in a regression model and also some potential pitfalls have to be kept in mind in order to obtain meaningful interpretation of results. The objective of this educational paper was to illustrate these fundamentals and pitfalls, using various multiple regression models applied to data from a hypothetical cohort of 3000 patients with chronic kidney disease. In particular, we illustrate how to represent different types of exposure variables (binary, categorical with two or more categories and continuous), and how to interpret the regression coefficients in linear, logistic and Cox models. We also discuss the linearity assumption in these models, and show how wrongly assuming linearity may produce biased results and how flexible modelling using spline functions may provide better estimates.
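As a small illustration of coefficient interpretation for a binary exposure (the counts below are hypothetical, not the paper's cohort): in a logistic model with a single binary exposure, exp(beta) equals the odds ratio from the corresponding 2x2 table:

```python
import math

# Hypothetical 2x2 table: rows = exposed / unexposed, columns = event / no event.
a, b = 30, 70   # exposed: events, non-events
c, d = 10, 90   # unexposed: events, non-events

odds_ratio = (a * d) / (b * c)   # cross-product ratio
beta = math.log(odds_ratio)      # logistic regression coefficient for exposure
# exp(beta) recovers the odds ratio, which is how the coefficient is read.
```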
Peterson, Leif E; Kovyrshina, Tatiana
2015-12-01
Background. The healthy worker effect (HWE) is a source of bias in occupational studies of mortality among workers caused by use of comparative disease rates based on public data, which include mortality of unhealthy members of the public who are screened out of the workplace. For the US astronaut corps, the HWE is assumed to be strong due to the rigorous medical selection and surveillance. This investigation focused on the effect of correcting for HWE on projected lifetime risk estimates for radiation-induced cancer mortality and incidence. Methods. We performed radiation-induced cancer risk assessment using Poisson regression of cancer mortality and incidence rates among Hiroshima and Nagasaki atomic bomb survivors. Regression coefficients were used for generating risk coefficients for the excess absolute, transfer, and excess relative models. Excess lifetime risks (ELR) for radiation exposure and baseline lifetime risks (BLR) were adjusted for the HWE using standardized mortality ratios (SMR) for aviators and nuclear workers who were occupationally exposed to ionizing radiation. We also adjusted lifetime risks by cancer mortality misclassification among atomic bomb survivors. Results. For all cancers combined ("Nonleukemia"), the effect of adjusting the all-cause hazard rate by the simulated quantiles of the all-cause SMR resulted in a mean difference (not percent difference) in ELR of 0.65% and mean difference of 4% for mortality BLR, and mean change of 6.2% in BLR for incidence. The effect of adjusting the excess (radiation-induced) cancer rate or baseline cancer hazard rate by simulated quantiles of cancer-specific SMRs resulted in a mean difference of [Formula: see text] in the all-cancer mortality ELR and mean difference of [Formula: see text] in the mortality BLR. Whereas for incidence, the effect of adjusting by cancer-specific SMRs resulted in a mean change of [Formula: see text] for the all-cancer BLR. Only cancer mortality risks were adjusted by
Kramer, S.
1996-12-31
In many real-world domains the task of machine learning algorithms is to learn a theory for predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with nondeterminate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems. SRT integrates the statistical method of regression trees into ILP. It constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.
CSWS-related autistic regression versus autistic regression without CSWS.
Tuchman, Roberto
2009-08-01
Continuous spike-waves during slow-wave sleep (CSWS) and Landau-Kleffner syndrome (LKS) are two clinical epileptic syndromes that are associated with the electroencephalography (EEG) pattern of electrical status epilepticus during slow wave sleep (ESES). Autistic regression occurs in approximately 30% of children with autism and is associated with an epileptiform EEG in approximately 20%. The behavioral phenotypes of CSWS, LKS, and autistic regression overlap. However, the differences in age of regression, degree and type of regression, and frequency of epilepsy and EEG abnormalities suggest that these are distinct phenotypes. CSWS with autistic regression is rare, as is autistic regression associated with ESES. The pathophysiology and as such the treatment implications for children with CSWS and autistic regression are distinct from those with autistic regression without CSWS.
New method for blowup of the Euler-Poisson system
NASA Astrophysics Data System (ADS)
Kwong, Man Kam; Yuen, Manwai
2016-08-01
In this paper, we provide a new method for establishing the blowup of C² solutions for the pressureless Euler-Poisson system with attractive forces for R^N (N ≥ 2) with ρ(0, x0) > 0 and Ω_ij(0, x0) = (1/2)[∂_i u_j(0, x0) − ∂_j u_i(0, x0)] = 0 at some point x0 ∈ R^N. By applying the generalized Hubble transformation div u(t, x0(t)) = N ȧ(t)/a(t) to a reduced Riccati differential inequality derived from the system, we simplify the inequality into the Emden equation ä(t) = −λ/a(t)^(N−1), a(0) = 1, ȧ(0) = div u(0, x0)/N. Known results on its blowup set allow us to easily obtain the blowup conditions of the Euler-Poisson system.
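The Emden equation above can be integrated numerically to exhibit the finite-time vanishing of a(t); a sketch using semi-implicit Euler with illustrative parameters (lam, N, and the initial contraction rate are assumptions, not values from the paper):

```python
# Semi-implicit Euler integration of the Emden equation
# a''(t) = -lam / a(t)**(N - 1), with a(0) = 1 and a'(0) < 0.
lam, N = 1.0, 3
a, adot = 1.0, -1.0          # adot(0) = div u(0, x0) / N, chosen negative
t, dt = 0.0, 1e-5
while a > 1e-3:              # a(t) -> 0 marks the blowup time
    adot += -lam / a ** (N - 1) * dt
    a += adot * dt
    t += dt
# t now approximates the finite blowup time.
```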
MODELING PAVEMENT DETERIORATION PROCESSES BY POISSON HIDDEN MARKOV MODELS
NASA Astrophysics Data System (ADS)
Nam, Le Thanh; Kaito, Kiyoyuki; Kobayashi, Kiyoshi; Okizuka, Ryosuke
In pavement management, it is important to estimate lifecycle cost, which is composed of the expenses for repairing local damages, including potholes, and repairing and rehabilitating the surface and base layers of pavements, including overlays. In this study, a model is produced under the assumption that the deterioration process of pavement is a complex one that includes local damages, which occur frequently, and the deterioration of the surface and base layers of pavement, which progresses slowly. The variation in pavement soundness is expressed by the Markov deterioration model and the Poisson hidden Markov deterioration model, in which the frequency of local damage depends on the distribution of pavement soundness, is formulated. In addition, the authors suggest a model estimation method using the Markov Chain Monte Carlo (MCMC) method, and attempt to demonstrate the applicability of the proposed Poisson hidden Markov deterioration model by studying concrete application cases.
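A Poisson hidden Markov model of this kind can be evaluated with the standard scaled forward recursion; a minimal sketch (the transition matrix and state-dependent damage rates are illustrative, and the MCMC estimation step of the paper is omitted):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def poisson_hmm_loglik(counts, trans, lams, init):
    """Scaled forward recursion: hidden soundness states with transition
    matrix `trans`, state-dependent Poisson damage rates `lams`, and
    initial state distribution `init`."""
    m = len(lams)
    alpha = [init[s] * poisson_pmf(counts[0], lams[s]) for s in range(m)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [x / scale for x in alpha]
    for y in counts[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in range(m)) *
                 poisson_pmf(y, lams[s]) for s in range(m)]
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [x / scale for x in alpha]
    return loglik
```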
Histogram bin width selection for time-dependent Poisson processes
NASA Astrophysics Data System (ADS)
Koyama, Shinsuke; Shinomoto, Shigeru
2004-07-01
In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.
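One concrete bin-width selector in this spirit is the cost C(Δ) = (2*mean − var)/Δ² over the bin counts, an unbiased MISE estimator for Poisson-count histograms taken from Shimazaki and Shinomoto's related histogram-optimization work — an assumed transplant here, since the abstract itself gives no closed form:

```python
def histogram_cost(spikes, t_total, delta):
    """MISE-based cost for bin width delta: bin the event times, then
    compute (2*mean - var) / delta**2 over the bin counts."""
    nbins = max(1, int(t_total / delta))
    counts = [0] * nbins
    for s in spikes:
        counts[min(int(s / delta), nbins - 1)] += 1
    mean = sum(counts) / nbins
    var = sum((c - mean) ** 2 for c in counts) / nbins
    return (2.0 * mean - var) / delta ** 2

def best_width(spikes, t_total, candidates):
    """Pick the candidate bin width minimizing the cost."""
    return min(candidates, key=lambda d: histogram_cost(spikes, t_total, d))
```

For a constant-rate sequence the criterion favors the widest bin, as expected when there is no rate modulation to resolve.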
Intrinsic Negative Poisson's Ratio for Single-Layer Graphene.
Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming; Park, Harold S
2016-08-10
Negative Poisson's ratio (NPR) materials have drawn significant interest because the enhanced toughness, shear resistance, and vibration absorption that typically are seen in auxetic materials may enable a range of novel applications. In this work, we report that single-layer graphene exhibits an intrinsic NPR, which is robust and independent of its size and temperature. The NPR arises due to the interplay between two intrinsic deformation pathways (one with positive Poisson's ratio, the other with NPR), which correspond to the bond stretching and angle bending interactions in graphene. We propose an energy-based deformation pathway criterion, which predicts that the pathway with NPR has lower energy and thus becomes the dominant deformation mode when graphene is stretched by a strain above 6%, resulting in the NPR phenomenon.
Piezoelectrically-induced ultrasonic lubrication by way of Poisson effect
NASA Astrophysics Data System (ADS)
Dong, Sheng; Dapino, Marcelo J.
2012-04-01
It has been shown that the coefficient of dynamic friction between two surfaces decreases when ultrasonic vibrations are superimposed on the macroscopic sliding velocity. Instead of longitudinal vibrations, this paper focuses on the lateral contractions and expansions of an object in and around the half wavelength node region. This lateral motion is due to the Poisson effect (ratio of lateral strain to longitudinal strain) present in all materials. We numerically and experimentally investigate the Poisson-effect ultrasonic lubrication. A motor effect region is identified in which the effective friction force becomes negative as the vibratory waves drive the motion of the interface. Outside of the motor region, friction lubrication is observed with between 30% and 60% friction force reduction. A "stick-slip" contact model associated with horn kinematics is presented for simulation and analysis purposes. The model accurately matches the experiments for normal loads under 120 N.
Quantized Nambu-Poisson manifolds and n-Lie algebras
NASA Astrophysics Data System (ADS)
DeBellis, Joshua; Sämann, Christian; Szabo, Richard J.
2010-12-01
We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of {{R}}^n by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.
Tensorial Basis Spline Collocation Method for Poisson's Equation
NASA Astrophysics Data System (ADS)
Plagne, Laurent; Berthou, Jean-Yves
2000-01-01
This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h⁴) and O(h⁶) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: As an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256³ non-uniform 3D Cartesian mesh by using 128 T3E-750 processors. This represents 215 Mflops per processor.
Reference manual for the POISSON/SUPERFISH Group of Codes
Not Available
1987-01-01
The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
Filtering with Marked Point Process Observations via Poisson Chaos Expansion
Sun Wei; Zeng Yong; Zhang Shu
2013-06-15
We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former to off-line processing.
Finite-size effects and percolation properties of Poisson geometries
NASA Astrophysics Data System (ADS)
Larmier, C.; Dumonteil, E.; Malvagi, F.; Mazzolo, A.; Zoia, A.
2016-07-01
Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d -dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d =3 . We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size.
Correlation between supercooled liquid relaxation and glass Poisson's ratio
NASA Astrophysics Data System (ADS)
Sun, Qijing; Hu, Lina; Zhou, Chao; Zheng, Haijiao; Yue, Yuanzheng
2015-10-01
We report on a correlation between the supercooled liquid (SL) relaxation and glass Poisson's ratio (v) by comparing the activation energy ratio (r) of the α and the slow β relaxations and the v values for both metallic and nonmetallic glasses. Poisson's ratio v generally increases with an increase in the ratio r and this relation can be described by the empirical function v = 0.5 - A*exp(-B*r), where A and B are constants. This correlation might imply that glass plasticity is associated with the competition between the α and the slow β relaxations in SLs. The underlying physics of this correlation lies in the heredity of the structural heterogeneity from liquid to glass. This work gives insights into both the microscopic mechanism of glass deformation through the SL dynamics and the complex structural evolution during liquid-glass transition.
Poisson problems for semilinear Brinkman systems on Lipschitz domains in
NASA Astrophysics Data System (ADS)
Kohr, Mirela; Lanza de Cristoforis, Massimo; Wendland, Wolfgang L.
2015-06-01
The purpose of this paper is to combine a layer potential analysis with the Schauder fixed point theorem to show the existence of solutions of the Poisson problem for a semilinear Brinkman system on bounded Lipschitz domains in with Dirichlet or Robin boundary conditions and data in L 2-based Sobolev spaces. We also obtain an existence and uniqueness result for the Dirichlet problem for a special semilinear elliptic system, called the Darcy-Forchheimer-Brinkman system.
Studying Resist Stochastics with the Multivariate Poisson Propagation Model
Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; Bhattarai, Suchit; Neureuther, Andrew
2014-01-01
Progress in the ultimate performance of extreme ultraviolet resist has arguably decelerated in recent years suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resist both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.
A more general system for Poisson series manipulation.
NASA Technical Reports Server (NTRS)
Cherniack, J. R.
1973-01-01
The design of a working Poisson series processor system is described that is more general than those currently in use. This system is the result of a series of compromises among efficiency, generality, ease of programing, and ease of use. The most general form of coefficients that can be multiplied efficiently is pointed out, and the place of general-purpose algebraic systems in celestial mechanics is discussed.
Indentability of conventional and negative Poisson's ratio foams
NASA Technical Reports Server (NTRS)
Lakes, R. S.; Elms, K.
1992-01-01
The indentation resistance of foams, both of conventional structure and of re-entrant structure giving rise to negative Poisson's ratio, is studied using holographic interferometry. In holographic indentation tests, re-entrant foams had higher yield strengths σ_y and lower stiffness E than conventional foams of the same original relative density. Calculated energy absorption for dynamic impact is considerably higher for re-entrant foam than conventional foam.
Experimental dead-time distortions of poisson processes
NASA Astrophysics Data System (ADS)
Faraci, G.; Pennisi, A. R.
1983-07-01
In order to check the distortions introduced by a non-extended dead time on the Poisson statistics, accurate experiments have been made in single channel counting. At a given measuring time, the dependence on the choice of the time origin and on the width of the dead time has been verified. An excellent agreement has been found between the theoretical expressions and the experimental curves.
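For a non-extended (non-paralyzable) dead time τ, the textbook relation between the true rate n and the observed rate m, and its inversion used to correct measured counts, can be sketched as:

```python
def observed_rate(true_rate, tau):
    """Non-extended (non-paralyzable) dead time: each recorded event
    blocks the counter for tau seconds, so m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * tau)

def corrected_rate(measured_rate, tau):
    """Invert the distortion: n = m / (1 - m*tau)."""
    return measured_rate / (1.0 - measured_rate * tau)
```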
Soft elasticity of RNA gels and negative Poisson ratio
NASA Astrophysics Data System (ADS)
Ahsan, Amir; Rudnick, Joseph; Bruinsma, Robijn
2007-12-01
We propose a model for the elastic properties of RNA gels. The model predicts anomalous elastic properties in the form of a negative Poisson ratio and shape instabilities. The anomalous elasticity is generated by the non-Gaussian force-deformation relation of single-stranded RNA. The effect is greatly magnified by broken rotational symmetry produced by double-stranded sequences and the concomitant soft modes of uniaxial elastomers.
On third Poisson structure of KdV equation
Gorsky, A.; Marshakov, A.; Orlov, A.
1995-12-01
The third Poisson structure of the KdV equation in terms of canonical "free fields" and the reduced WZNW model is discussed. We prove that it is "diagonalized" in the Lagrange variables which were used before in the formulation of 2d gravity. We propose a quantum path integral for the KdV equation based on this representation.
Events in time: Basic analysis of Poisson data
Engelhardt, M.E.
1994-09-01
The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the rate of occurrence varies randomly. Examples and SAS programs are given.
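The point estimate and a large-sample interval for the occurrence rate can be sketched as follows (the report's exact intervals use chi-square quantiles; this is only the normal-approximation version):

```python
import math

def poisson_rate_summary(n_events, exposure, z=1.645):
    """Point estimate and 90% normal-approximation confidence interval
    for a Poisson occurrence rate per unit of time."""
    rate = n_events / exposure
    se = math.sqrt(n_events) / exposure   # large-sample standard error
    return rate, (max(0.0, rate - z * se), rate + z * se)
```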
Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code
Bowman, Kimiko o; Shenton, LR
2006-01-01
The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using the Maple code. Questions of existence of solutions and Karl Pearson's study are mentioned, along with the problems of valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.
Blind beam-hardening correction from Poisson measurements
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
A Poisson-Boltzmann dynamics method with nonperiodic boundary condition
NASA Astrophysics Data System (ADS)
Lu, Qiang; Luo, Ray
2003-12-01
We have developed a well-behaved and efficient finite difference Poisson-Boltzmann dynamics method with a nonperiodic boundary condition. This is made possible, in part, by a rather fine grid spacing used for the finite difference treatment of the reaction field interaction. The stability is also made possible by a new dielectric model that is smooth both over time and over space, an important issue in the application of implicit solvents. In addition, the electrostatic focusing technique facilitates the use of an accurate yet efficient nonperiodic boundary condition: boundary grid potentials computed by the sum of potentials from individual grid charges. Finally, the particle-particle particle-mesh technique is adopted in the computation of the Coulombic interaction to balance accuracy and efficiency in simulations of large biomolecules. Preliminary testing shows that the nonperiodic Poisson-Boltzmann dynamics method is numerically stable in trajectories at least 4 ns long. The new model is also fairly efficient: it is comparable to that of the pairwise generalized Born solvent model, making it a strong candidate for dynamics simulations of biomolecules in dilute aqueous solutions. Note that the current treatment of total electrostatic interactions is with no cutoff, which is important for simulations of biomolecules. Rigorous treatment of the Debye-Hückel screening is also possible within the Poisson-Boltzmann framework: its importance is demonstrated by a simulation of a highly charged protein.
Modeling environmental noise exceedances using non-homogeneous Poisson processes.
Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R
2014-10-01
In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
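With a Weibull-type rate, the integrated rate takes a power-law form and exceedance-count probabilities follow directly from the Poisson law; a sketch (the power-law form Λ(t) = (t/σ)^β is an assumption consistent with the paper's "Weibull type" description, with parameters fitted to data in practice):

```python
import math

def cumulative_rate(t, beta, sigma):
    """Integrated Weibull-type rate Lambda(t) = (t / sigma)**beta."""
    return (t / sigma) ** beta

def prob_count(k, t1, t2, beta, sigma):
    """P(N(t1, t2) = k): probability the threshold is exceeded exactly
    k times in [t1, t2] under the non-homogeneous Poisson process."""
    mu = cumulative_rate(t2, beta, sigma) - cumulative_rate(t1, beta, sigma)
    return math.exp(-mu) * mu ** k / math.factorial(k)
```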
Improved central confidence intervals for the ratio of Poisson means
NASA Astrophysics Data System (ADS)
Cousins, R. D.
The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
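The standard construction conditions on the total count, reducing the problem to a Clopper-Pearson binomial interval for p = ρ/(1+ρ); a sketch for equal exposures, which reproduces the quoted standard interval for two counts of 2 to within rounding:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i + 0.0 if False else
               comb(n, i) * p ** i * (1.0 - p) ** (n - i) for i in range(k + 1))

def _solve_increasing(f):
    """Bisection root of an increasing function on (0, 1)."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def standard_ratio_interval(n1, n2, cl=0.90):
    """Central interval for rho = mu1/mu2 (equal exposures): condition
    on n = n1 + n2, take a Clopper-Pearson interval for p = rho/(1+rho),
    then transform back to the ratio scale."""
    alpha = 0.5 * (1.0 - cl)
    n = n1 + n2
    p_lo = 0.0 if n1 == 0 else _solve_increasing(
        lambda p: (1.0 - binom_cdf(n1 - 1, n, p)) - alpha)
    p_hi = _solve_increasing(lambda p: alpha - binom_cdf(n1, n, p))
    return p_lo / (1.0 - p_lo), p_hi / (1.0 - p_hi)
```

With n1 = n2 = 2 at 90% CL this yields approximately (0.108, 9.245), the standard interval the paper improves upon.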
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computational and memory intensive task which makes it not suitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS will take full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combines both the direct and iterative techniques) and has two-level architecture. These enable MDGS to generate identical solutions with those of the common Poisson methods and achieve high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory-taken and extensive applications.
A generalized Poisson-gamma model for spatially overdispersed data.
Neyens, Thomas; Faes, Christel; Molenberghs, Geert
2012-09-01
Modern disease mapping commonly uses hierarchical Bayesian methods to model overdispersion and spatial correlation. Classical random-effects based solutions include the Poisson-gamma model, which uses the conjugacy between the Poisson and gamma distributions, but which does not model spatial correlation, on the one hand, and the more advanced CAR model, which also introduces a spatial autocorrelation term but without a closed-form posterior distribution on the other. In this paper, a combined model is proposed: an alternative convolution model accounting for both overdispersion and spatial correlation in the data by combining the Poisson-gamma model with a spatially-structured normal CAR random effect. The Limburg Cancer Registry data on kidney and prostate cancer in Limburg were used to compare the conventional and new models. A simulation study confirmed results and interpretations coming from the real datasets. Relative risk maps showed that the combined model provides an intermediate between the non-patterned negative binomial and the sometimes oversmoothed CAR convolution model.
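The Poisson-gamma conjugacy mentioned above gives a closed-form posterior; a minimal sketch of the update and the resulting shrinkage estimate (the paper's spatial CAR component is not modeled here):

```python
def poisson_gamma_posterior(counts, a, b):
    """Conjugate update: a Gamma(a, b) prior on the Poisson rate and
    observed counts y_1..y_n give a Gamma(a + sum(y), b + n) posterior."""
    return a + sum(counts), b + len(counts)

def posterior_mean(counts, a, b):
    """Shrinkage estimate of the rate (e.g., a smoothed relative risk)."""
    a_post, b_post = poisson_gamma_posterior(counts, a, b)
    return a_post / b_post
```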
A generalized Poisson solver for first-principles device simulations
NASA Astrophysics Data System (ADS)
Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost
2016-01-01
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated. PMID:26827208
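The Laplace-preconditioned stationary iteration described above can be sketched in one dimension. This is a generic finite-difference illustration under assumed discretization choices (homogeneous Dirichlet ends, arithmetic-mean interface dielectric), not the plane-wave implementation of the paper:

```python
import math

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by forward elimination + back substitution."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_generalized_poisson(eps, rho, h, tol=1e-10, max_iter=200):
    """Solve d/dx(eps(x) dphi/dx) = -rho(x), phi = 0 at both ends, by
    Richardson iteration preconditioned with the constant-coefficient
    Laplacian (inverted exactly via the Thomas algorithm)."""
    n = len(rho)
    inv_h2 = 1.0 / (h * h)
    # Laplace preconditioner: standard three-point second difference
    sub = [inv_h2] * n
    diag = [-2.0 * inv_h2] * n
    sup = [inv_h2] * n

    def apply_A(p):
        # Flux form of the variable-coefficient operator (zero ghost values)
        out = []
        for i in range(n):
            pl = p[i - 1] if i > 0 else 0.0
            pr = p[i + 1] if i < n - 1 else 0.0
            el = 0.5 * (eps[i] + eps[i - 1]) if i > 0 else eps[i]
            er = 0.5 * (eps[i] + eps[i + 1]) if i < n - 1 else eps[i]
            out.append((er * (pr - p[i]) - el * (p[i] - pl)) * inv_h2)
        return out

    rhs = [-r for r in rho]
    phi = [0.0] * n
    err = float("inf")
    for _ in range(max_iter):
        Ap = apply_A(phi)
        res = [rhs[i] - Ap[i] for i in range(n)]
        err = max(abs(r) for r in res)
        if err < tol:
            break
        corr = thomas(sub, diag, sup, res)   # preconditioner solve
        phi = [phi[i] + corr[i] for i in range(n)]
    return phi, err

# Smoothly varying dielectric on (0, 1); uniform source term
n, h = 63, 1.0 / 64
xs = [(i + 1) * h for i in range(n)]
eps = [1.0 + 0.4 * math.sin(2 * math.pi * x) for x in xs]
phi, err = solve_generalized_poisson(eps, [1.0] * n, h)
```

Because the preconditioned operator's spectrum is bounded by the range of eps, the iteration converges geometrically when the dielectric does not vary too strongly; for a constant dielectric the preconditioner is exact and a single sweep suffices.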
Magnetic axis alignment and the Poisson alignment reference system
NASA Astrophysics Data System (ADS)
Griffith, Lee V.; Schenz, Richard F.; Sommargren, Gary E.
1989-01-01
Three distinct metrological operations are necessary to align a free-electron laser (FEL): the magnetic axis must be located, a straight line reference (SLR) must be generated, and the magnetic axis must be related to the SLR. This paper begins with a review of the motivation for developing an alignment system that will assure better than 100 micrometer accuracy in the alignment of the magnetic axis throughout an FEL. The paper describes techniques for identifying the magnetic axis of solenoids, quadrupoles, and wiggler poles. Propagation of a laser beam is described to the extent of revealing sources of nonlinearity in the beam. Development and use of the Poisson line, a diffraction effect, is described in detail. Spheres in a large-diameter laser beam create Poisson lines and thus provide a necessary mechanism for gauging between the magnetic axis and the SLR. Procedures for installing FEL components and calibrating alignment fiducials to the magnetic axes of the components are also described. An error budget shows that the Poisson alignment reference system will make it possible to meet the alignment tolerances for an FEL.
Magnetic alignment and the Poisson alignment reference system
NASA Astrophysics Data System (ADS)
Griffith, L. V.; Schenz, R. F.; Sommargren, G. E.
1990-08-01
Three distinct metrological operations are necessary to align a free-electron laser (FEL): the magnetic axis must be located, a straight line reference (SLR) must be generated, and the magnetic axis must be related to the SLR. This article begins with a review of the motivation for developing an alignment system that will assure better than 100-μm accuracy in the alignment of the magnetic axis throughout an FEL. The 100-μm accuracy is an error circle about an ideal axis for 300 m or more. The article describes techniques for identifying the magnetic axes of solenoids, quadrupoles, and wiggler poles. Propagation of a laser beam is described to the extent of revealing sources of nonlinearity in the beam. Development of a straight-line reference based on the Poisson line, a diffraction effect, is described in detail. Spheres in a large-diameter laser beam create Poisson lines and thus provide a necessary mechanism for gauging between the magnetic axis and the SLR. Procedures for installing FEL components and calibrating alignment fiducials to the magnetic axes of the components are also described. The Poisson alignment reference system should be accurate to 25 μm over 300 m, which is believed to be a factor-of-4 improvement over earlier techniques. An error budget shows that only 25% of the total budgeted tolerance is used for the alignment reference system, so the remaining tolerances should fall within the allowable range for FEL alignment.
Numerical methods for the Poisson-Fermi equation in electrolytes
NASA Astrophysics Data System (ADS)
Liu, Jinn-Liang
2013-08-01
The Poisson-Fermi equation proposed by Bazant, Storey, and Kornyshev [Phys. Rev. Lett. 106 (2011) 046102] for ionic liquids is applied to and numerically studied for electrolytes and biological ion channels in three-dimensional space. This is a fourth-order nonlinear PDE that deals with both steric and correlation effects of all ions and solvent molecules involved in a model system. The Fermi distribution follows from classical lattice models of the configurational entropy of finite-size ions and solvent molecules, and hence avoids the long-standing problem of the unphysical divergence that the Gouy-Chapman model predicts at large potentials, a consequence of the Boltzmann distribution of point charges. The equation reduces to Poisson-Boltzmann if the correlation length vanishes. A simplified matched interface and boundary method exhibiting optimal convergence is first developed for this equation by using a gramicidin A channel model that illustrates challenging issues associated with the geometric singularities of molecular surfaces of channel proteins in realistic 3D simulations. Various numerical methods then follow to tackle a range of numerical problems concerning the fourth-order term, nonlinearity, stability, efficiency, and effectiveness. The most significant feature of the Poisson-Fermi equation, namely its inclusion of steric and correlation effects, is demonstrated by showing good agreement with Monte Carlo simulation data for a charged wall model and an L-type calcium channel model.
Wang, Yiyi; Kockelman, Kara M
2013-11-01
This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. PMID:24036167
Computing measures of explained variation for logistic regression models.
Mittlböck, M; Schemper, M
1999-01-01
The proportion of explained variation (R2) is frequently used in the general linear model but in logistic regression no standard definition of R2 exists. We present a SAS macro which calculates two R2-measures based on Pearson and on deviance residuals for logistic regression. Also, adjusted versions for both measures are given, which should prevent the inflation of R2 in small samples. PMID:10195643
Wild bootstrap for quantile regression.
Feng, Xingdong; He, Xuming; Hu, Jianhua
2011-12-01
The existing theory of the wild bootstrap has focused on linear estimators. In this note, we broaden its validity by providing a class of weight distributions that is asymptotically valid for quantile regression estimators. As most weight distributions in the literature lead to biased variance estimates for nonlinear estimators of linear regression, we propose a modification of the wild bootstrap that admits a broader class of weight distributions for quantile regression. A simulation study on median regression is carried out to compare various bootstrap methods. With a simple finite-sample correction, the wild bootstrap is shown to account for general forms of heteroscedasticity in a regression model with fixed design points.
ADJUSTABLE DOUBLE PULSE GENERATOR
Gratian, J.W.; Gratian, A.C.
1961-08-01
A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross coupled multivibrator having adjustable time constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)
Teaching and hospital production: the use of regression estimates.
Lehner, L A; Burgess, J F
1995-01-01
Medicare's Prospective Payment System pays U.S. teaching hospitals for the indirect costs of medical education based on a regression coefficient in a cost function. In regression studies using health care data, it is common for explanatory variables to be measured imperfectly, yet the potential for measurement error is often ignored. In this paper, U.S. Department of Veterans Affairs data is used to examine issues of health care production estimation and the use of regression estimates like the teaching adjustment factor. The findings show that measurement error and persistent multicollinearity confound attempts to have a large degree of confidence in the precise magnitude of parameter estimates.
Using regression models to determine the poroelastic properties of cartilage.
Chung, Chen-Yuan; Mansour, Joseph M
2013-07-26
The feasibility of determining biphasic material properties using regression models was investigated. A transversely isotropic poroelastic finite element model of stress relaxation was developed and validated against known results. This model was then used to simulate load intensity for a wide range of material properties. Linear regression equations for load intensity as a function of the five independent material properties were then developed for nine time points (131, 205, 304, 390, 500, 619, 700, 800, and 1000s) during relaxation. These equations illustrate the effect of individual material property on the stress in the time history. The equations at the first four time points, as well as one at a later time (five equations) could be solved for the five unknown material properties given computed values of the load intensity. Results showed that four of the five material properties could be estimated from the regression equations to within 9% of the values used in simulation if time points up to 1000s are included in the set of equations. However, reasonable estimates of the out of plane Poisson's ratio could not be found. Although all regression equations depended on permeability, suggesting that true equilibrium was not realized at 1000s of simulation, it was possible to estimate material properties to within 10% of the expected values using equations that included data up to 800s. This suggests that credible estimates of most material properties can be obtained from tests that are not run to equilibrium, which is typically several thousand seconds.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results with those from using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
Brain, music, and non-Poisson renewal processes
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo
2007-06-01
In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials, Ψ(t) ∝ exp[-(γt)^α] with 0.5 < α < 1. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse-power-law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ≤ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.
Polarizable Atomic Multipole Solutes in a Poisson-Boltzmann Continuum
Schnieders, Michael J.; Baker, Nathan A.; Ren, Pengyu; Ponder, Jay W.
2008-01-01
Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar molecule in water. A theoretical approach introduced by Onsager used vacuum properties of small molecules, including polarizability, dipole moment and size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation (PBE). Only recently have the classical force fields used for studying biomolecules begun to include explicit polarization in their functional forms. Here we describe the theory underlying a newly developed Polarizable Multipole Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field. As an application of the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment of each protein increased on average by a factor of 1.27 in explicit water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational analysis, and pKa prediction. Introduction of 150 mM salt lowered the electrostatic solvation energy by 2-13 kcal/mol, depending on the formal charge of the protein, but had only a
On population size estimators in the Poisson mixture model.
Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua
2013-09-01
Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. PMID:23865502
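For reference, the Chao lower bound discussed above has a simple closed form, n_obs + f1²/(2 f2), where f1 and f2 are the numbers of individuals seen exactly once and exactly twice. A minimal sketch (the bias-corrected fallback for f2 = 0 is a common convention, not necessarily the variant examined in the paper):

```python
from collections import Counter

def chao_lower_bound(capture_counts):
    """Chao's nonparametric lower bound for population size from a single
    list; capture_counts holds how many times each *observed* individual
    appeared (all values >= 1)."""
    freq = Counter(capture_counts)
    n_obs = len(capture_counts)
    f1 = freq.get(1, 0)   # individuals seen exactly once (singletons)
    f2 = freq.get(2, 0)   # individuals seen exactly twice (doubletons)
    if f2 > 0:
        return n_obs + f1 * f1 / (2.0 * f2)
    # bias-corrected variant, commonly used when no doubletons occur
    return n_obs + f1 * (f1 - 1) / 2.0

# 6 observed individuals: three singletons, two doubletons, one tripleton
est = chao_lower_bound([1, 1, 1, 2, 2, 3])   # 6 + 3**2 / (2*2) = 8.25
```

As the abstract notes, this targets a lower bound: the estimate says at least roughly two unseen individuals remain in this toy example, never how many at most.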
Fission meter and neutron detection using poisson distribution comparison
Rowland, Mark S; Snyderman, Neal J
2014-11-18
A neutron detector system and method for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The system counts neutrons from the unknown source and detects excess grouped neutrons to identify fission in the unknown source. The observed neutron count distribution is compared with a Poisson distribution to distinguish fissile material from non-fissile material.
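The core idea, counting excess grouped neutrons relative to a Poisson expectation, can be illustrated with a toy variance-to-mean (Feynman-style) test. The burst model below is an invented simplification for illustration, not the patented detection system:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplicative method; adequate for the small means here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def variance_to_mean(counts):
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    return v / m

rng = random.Random(7)
gates = 20000

# Non-fissile source: independent neutrons -> Poisson counts per time gate
bg = [poisson_sample(5.0, rng) for _ in range(gates)]

# Fissile source (toy model): a Poisson number of fission chains per gate,
# each chain contributing a burst of 3 correlated neutrons
fiss = [3 * poisson_sample(2.0, rng) for _ in range(gates)]

# Poisson data gives a ratio near 1; grouped neutrons push it well above 1
r_bg = variance_to_mean(bg)
r_fiss = variance_to_mean(fiss)
```

A variance-to-mean ratio consistent with 1 is the Poisson signature of uncorrelated neutrons; a significantly larger ratio flags the time-correlated multiplets characteristic of fission chains.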
Numerical Poisson-Boltzmann Model for Continuum Membrane Systems.
Botello-Smith, Wesley M; Liu, Xingping; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2013-01-01
Membrane protein systems are important computational research topics due to their roles in rational drug design. In this study, we developed a continuum membrane model utilizing a level set formulation under the numerical Poisson-Boltzmann framework within the AMBER molecular mechanics suite for applications such as protein-ligand binding affinity and docking pose predictions. Two numerical solvers were adapted for periodic systems to alleviate possible edge effects. Validation on systems ranging from organic molecules to membrane proteins up to 200 residues, demonstrated good numerical properties. This lays foundations for sophisticated models with variable dielectric treatments and second-order accurate modeling of solvation interactions. PMID:23439886
Poisson-Boltzmann theory for two parallel uniformly charged plates
NASA Astrophysics Data System (ADS)
Xing, Xiangjun
2011-04-01
We solve the nonlinear Poisson-Boltzmann equation for two parallel and like-charged plates both inside a symmetric electrolyte, and inside a 2:1 asymmetric electrolyte, in terms of Weierstrass elliptic functions. From these solutions we derive the functional relation between the surface charge density, the plate separation, and the pressure between plates. For the one plate problem, we obtain exact expressions for the electrostatic potential and for the renormalized surface charge density, both in symmetric and in asymmetric electrolytes. For the two plate problems, we obtain new exact asymptotic results in various regimes.
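For context, the nonlinear equation being solved between the plates is, in the symmetric-electrolyte case, the standard Poisson-Boltzmann equation; the notation here (ψ potential, n₀ bulk ion density, ε permittivity) is assumed rather than taken from the paper:

```latex
% Poisson-Boltzmann equation for a symmetric z:z electrolyte between plates
\frac{d^2 \psi}{dx^2}
  = \frac{2 z e n_0}{\epsilon}\,
    \sinh\!\left(\frac{z e \psi}{k_B T}\right)
```

The elliptic-function solutions referred to in the abstract arise because, after one integration, this equation reduces to a first-order ODE with a polynomial (in exp of the potential) under the square root.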
Lie-Poisson bifurcations for the Maxwell-Bloch equations
David, D.
1990-01-01
We present a study of the set of Maxwell-Bloch equations on R^3 from the point of view of Hamiltonian dynamics. These equations are shown to be bi-Hamiltonian, on the one hand, and to possess several inequivalent Lie-Poisson structures, on the other hand, parametrized by the group SL(2,R). Each structure is characterized by a particular distinguished function. The level sets of this function provide two-dimensional surfaces onto which the motion takes various symplectic forms. 4 refs.
A Poisson process approximation for generalized K-5 confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
Theoretical Analysis of Radiographic Images by Nonstationary Poisson Processes
NASA Astrophysics Data System (ADS)
Tanaka, Kazuo; Yamada, Isao; Uchida, Suguru
1980-12-01
This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.
Non-porous Elastic Sheets with Negative Poisson's Ratio
NASA Astrophysics Data System (ADS)
Javid, Farhad; Smith-Roberge, Evelyne; Innes, Matthew; Shanian, Ali; Bertoldi, Katia; Harvard University Collaboration; Rolls-Royce Energy Collaboration
2015-03-01
Negative Poisson's ratio (NPR) materials, which contract (expand) in transverse directions when compressed (stretched) uniaxially, have attracted significant interest both because of their unusual properties and because of their many potential applications. However, complex fabrication processes, high porosity, and low structural stiffness have significantly limited the practical application of most proposed NPR materials. In this work, a novel NPR material is designed by coupling the in-plane and out-of-plane (popping) deformations of an elastic sheet with a periodic distribution of dimples. As a result, this NPR material has zero porosity and relatively high structural stiffness, and it can be made from both hard and soft materials using industrial fabrication techniques.
The Poisson equation at second order in relativistic cosmology
Hidalgo, J.C.; Christopherson, Adam J.; Malik, Karim A. E-mail: Adam.Christopherson@nottingham.ac.uk
2013-08-01
We calculate the relativistic constraint equation which relates the curvature perturbation to the matter density contrast at second order in cosmological perturbation theory. This relativistic "second-order Poisson equation" is presented in a gauge where the hydrodynamical inhomogeneities coincide with their Newtonian counterparts exactly for a perfect fluid with constant equation of state. We use this constraint to introduce primordial non-Gaussianity in the density contrast in the framework of General Relativity. We then derive expressions that can be used as the initial conditions of N-body codes for structure formation which probe the observable signature of primordial non-Gaussianity in the statistics of the evolved matter density field.
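For orientation, the linear-order relation that this work extends to second order is the familiar cosmological Poisson equation in comoving coordinates; the notation here (Φ metric potential, a scale factor, δ density contrast) is assumed rather than quoted from the paper:

```latex
% Linear-order cosmological Poisson equation, extended to second order
% in the paper above:
\nabla^2 \Phi = 4\pi G\, a^2\, \bar{\rho}\, \delta ,
\qquad
\delta \equiv \frac{\delta\rho}{\bar{\rho}}
```

At second order, relativistic corrections add terms quadratic in the first-order perturbations to the right-hand side, which is how primordial non-Gaussianity enters the initial density field.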
Poisson's Ratio and the Densification of Glass under High Pressure
Rouxel, T.; Ji, H.; Hammouda, T.; Moreac, A.
2008-06-06
Because of a relatively low atomic packing density (C_g), glasses experience significant densification under high hydrostatic pressure. Poisson's ratio (ν) is correlated to C_g and typically varies from 0.15 for glasses with low C_g, such as amorphous silica, to 0.38 for close-packed atomic networks such as in bulk metallic glasses. Pressure experiments were conducted up to 25 GPa at 293 K on silica, soda-lime-silica, chalcogenide, and bulk metallic glasses. We show from these high-pressure data that there is a direct correlation between ν and the maximum post-decompression density change.
Maslov indices, Poisson brackets, and singular differential forms
NASA Astrophysics Data System (ADS)
Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.
2014-06-01
Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.
Theory of multicolor lattice gas - A cellular automaton Poisson solver
NASA Technical Reports Server (NTRS)
Chen, H.; Matthaeus, W. H.; Klein, L. W.
1990-01-01
The present class of cellular automaton models involves a quiescent hydrodynamic lattice gas with multiple-valued passive labels termed 'colors'; lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is shown to be equivalent to solving a Poisson equation.
Some Poisson structures and Lax equations associated with the Toeplitz lattice and the Schur lattice
NASA Astrophysics Data System (ADS)
Lemarie, Caroline
2016-01-01
The Toeplitz lattice is a Hamiltonian system whose Poisson structure is known. In this paper, we unveil the origins of this Poisson structure and derive from it the associated Lax equations for this lattice. We first construct a Poisson subvariety H_n of GL_n(C), which we view as a real or complex Poisson-Lie group whose Poisson structure comes from a quadratic R-bracket on gl_n(C) for a fixed R-matrix. The existence of Hamiltonians, associated to the Toeplitz lattice for the Poisson structure on H_n, combined with the properties of the quadratic R-bracket allow us to give explicit formulas for the Lax equation. Then we derive from it the integrability in the sense of Liouville of the Toeplitz lattice. When we view the lattice as being defined over R, we can construct a Poisson subvariety H_n^τ of U_n which is itself a Poisson-Dirac subvariety of GL_n^R(C). We then construct a Hamiltonian for the Poisson structure induced on H_n^τ, corresponding to another system which derives from the Toeplitz lattice: the modified Schur lattice. Thanks to the properties of Poisson-Dirac subvarieties, we give an explicit Lax equation for the new system and derive from it a Lax equation for the Schur lattice. We also deduce the integrability in the sense of Liouville of the modified Schur lattice.
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
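As a sketch of the first class above (unweighted regression lines with bootstrap resampling), and assuming nothing about the paper's specific formulas, one can estimate a slope standard error by resampling cases with replacement:

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def bootstrap_slope_se(xs, ys, n_boot=2000, seed=0):
    """Standard error of the OLS slope by case resampling (bootstrap)."""
    rng = random.Random(seed)
    n, slopes = len(xs), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [xs[i] for i in idx]
        if len(set(bx)) < 2:          # skip degenerate resamples
            continue
        slopes.append(fit_line(bx, [ys[i] for i in idx])[0])
    m = sum(slopes) / len(slopes)
    return (sum((s - m) ** 2 for s in slopes) / (len(slopes) - 1)) ** 0.5

# toy data: y = 2x + 1 with small alternating "noise"
xs = [float(i) for i in range(1, 11)]
ys = [2 * x + 1 + (0.5 if i % 2 == 0 else -0.5) for i, x in enumerate(xs)]
slope, intercept = fit_line(xs, ys)
se = bootstrap_slope_se(xs, ys)
```

The jackknife variant mentioned in the abstract differs only in leaving out one case at a time instead of resampling.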
Quantile regression for climate data
NASA Astrophysics Data System (ADS)
Marasinghe, Dilhani Shalika
Quantile regression is a developing statistical tool used to explain the relationship between response and predictor variables. This thesis describes two examples of climatology using quantile regression. Our main goal is to estimate derivatives of a conditional mean and/or conditional quantile function. We introduce a method to handle autocorrelation in the framework of quantile regression and use it with the temperature data. We also explain some properties of the tornado data, which are non-normally distributed. Even though quantile regression provides a more comprehensive view, when the residuals satisfy the normality and constant-variance assumptions we would prefer least-squares regression, as in our temperature analysis. When the normality and constant-variance assumptions fail, quantile regression is a better candidate for estimating the derivative.
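The defining ingredient of quantile regression is the check (pinball) loss; as a minimal illustration (the data values below are made up), minimising it over a constant recovers the sample quantile:

```python
def pinball_loss(theta, ys, tau):
    """Check (pinball) loss; its minimiser over theta is the
    tau-th sample quantile of ys."""
    return sum((tau if y >= theta else tau - 1.0) * (y - theta) for y in ys)

ys = [float(v) for v in range(1, 11)]          # 1.0 .. 10.0
grid = [i / 10.0 for i in range(0, 111)]       # candidate theta values
median_hat = min(grid, key=lambda t: pinball_loss(t, ys, 0.5))
q90_hat = min(grid, key=lambda t: pinball_loss(t, ys, 0.9))
```

Replacing the constant theta by a linear predictor and minimising the same loss gives linear quantile regression.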
Risk-adjusted monitoring of survival times
Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.
2009-02-26
We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to that of the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.
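The risk-adjusted chart itself requires accelerated-failure-time modelling, but the underlying CUSUM recursion is simple; a sketch with made-up log-likelihood-ratio scores (not the paper's statistic):

```python
def cusum(scores, threshold):
    """One-sided upper CUSUM: C_t = max(0, C_{t-1} + score_t).
    Returns (signal_time, statistic); signal_time is None if no alarm."""
    c = 0.0
    for t, s in enumerate(scores, start=1):
        c = max(0.0, c + s)
        if c >= threshold:
            return t, c
    return None, c

# in-control scores drift the chart down; a shift at t = 11 pushes it up
scores = [-0.5] * 10 + [1.0] * 10
signal_time, stat = cusum(scores, threshold=3.0)
```

In a risk-adjusted chart each score would be a patient-specific log-likelihood ratio rather than a fixed constant.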
Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects, by comparing their results to those obtained using an interaction term in linear regression. The research questions which each model answers, their…
Retro-regression--another important multivariate regression improvement.
Randić, M
2001-01-01
We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA. PMID:11410035
Transfer Learning Based on Logistic Regression
NASA Astrophysics Data System (ADS)
Paul, A.; Rottensteiner, F.; Heipke, C.
2015-08-01
In this paper we address the problem of classification of remote sensing images in the framework of transfer learning with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity, which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in the target domain with different, yet related characteristics. Classification takes place with a model of weight coefficients for hyperplanes which separate features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set, which, at the beginning, only contains source features; simultaneously, a number of source features are deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof-of-concept of the proposed method.
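A toy sketch of the transductive idea, gradient-descent logistic regression plus one round of confident pseudo-labelling; all data, thresholds, and the single-feature setup are invented for illustration and are far simpler than the paper's method:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=500):
    """Single-feature logistic regression by plain gradient descent."""
    w = b = 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

def predict_prob(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# source domain: labelled, classes separated around x = 0
src_x = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
src_y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(src_x, src_y)

# target domain: unlabelled features with similar but shifted characteristics
tgt_x = [-1.6, -1.1, 1.2, 1.7]

# one self-training round: pseudo-label confident target points, then refit
pseudo = [(x, 1 if predict_prob(w, b, x) > 0.5 else 0)
          for x in tgt_x if abs(predict_prob(w, b, x) - 0.5) > 0.3]
w2, b2 = fit_logistic(src_x + [x for x, _ in pseudo],
                      src_y + [y for _, y in pseudo])
```

The paper iterates this add/remove step many times and also deletes source samples; the sketch performs a single round to show the mechanism.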
Generalized master equations for non-Poisson dynamics on networks.
Hoffmann, Till; Porter, Mason A; Lambiotte, Renaud
2012-10-01
The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature.
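The stationary solution described above is the leading eigenvector of an effective transition matrix; a minimal power-iteration sketch on an invented 3-node column-stochastic matrix:

```python
def leading_eigenvector(T, iters=200):
    """Power iteration: stationary distribution as the leading eigenvector
    of a column-stochastic effective transition matrix T."""
    n = len(T)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]   # renormalise to a probability vector
    return v

# toy 3-node effective transition matrix (each column sums to 1)
T = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.3],
     [0.2, 0.2, 0.4]]
pi = leading_eigenvector(T)
```

For an irreducible, aperiodic matrix the iteration converges geometrically, which is why the authors note the leading eigenvector is easy to calculate.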
Prescription-induced jump distributions in multiplicative Poisson processes.
Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos
2011-06-01
Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.
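For linear multiplicative noise, the Itô reading of the jump dynamics simply multiplies the state at each Poisson event; a hypothetical simulation sketch (parameters invented, and only the Itô prescription is shown):

```python
import random

def simulate_linear_jump_process(x0, rate, jump, T, seed=1):
    """Ito-style sample of dx = jump * x dN_t: at each event of a
    rate-`rate` Poisson process on [0, T], the state is multiplied
    by (1 + jump)."""
    rng = random.Random(seed)
    t, x, n_events = 0.0, x0, 0
    while True:
        t += rng.expovariate(rate)   # exponential inter-event times
        if t > T:
            return x, n_events
        x *= 1.0 + jump
        n_events += 1

x_T, n_events = simulate_linear_jump_process(1.0, 2.0, 0.1, 5.0)
```

Other prescriptions would apply the jump with a different rule at the event time; as the abstract notes, for linear noise they can be mapped onto one another by transforming the jump distribution.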
Sparsity-based Poisson denoising with dictionary learning.
Giryes, Raja; Elad, Michael
2014-12-01
The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist that convert the Poisson noise into additive independent identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods in high SNR and achieves state-of-the-art results in cases of low SNR. PMID:25312930
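The high-SNR transformations mentioned above are classically exemplified by the Anscombe variance-stabilising transform; the sketch below illustrates that transform generically (it is not the paper's low-SNR method, and the Poisson sampler is Knuth's textbook algorithm):

```python
import math
import random

def anscombe(x):
    """Anscombe transform: maps Poisson(lam) data to approximately
    unit-variance Gaussian when lam is not too small."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at low counts)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# empirical check: transformed Poisson(30) data have variance close to 1
rng = random.Random(0)
vals = [anscombe(poisson_sample(30.0, rng)) for _ in range(4000)]
m = sum(vals) / len(vals)
var = sum((v - m) ** 2 for v in vals) / (len(vals) - 1)
```

At low counts this stabilisation breaks down, which is precisely the regime where the paper's direct sparse-representation approach is aimed.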
Poisson validity for orbital debris: II. Combinatorics and simulation
NASA Astrophysics Data System (ADS)
Fudge, Michael L.; Maclay, Timothy D.
1997-10-01
The International Space Station (ISS) will be at risk from orbital debris and micrometeorite impact (i.e., an impact that penetrates a critical component, possibly leading to loss of life). In support of ISS, last year the authors examined a fundamental assumption upon which the modeling of risk is based; namely, the assertion that the orbital collision problem can be modeled using a Poisson distribution. The assumption was found to be appropriate based upon the Poisson's general use as an approximation for the binomial distribution and the fact that it is proper to physically model exposure to the orbital debris flux environment using the binomial. This paper examines another fundamental issue in the expression of risk posed to space structures: the methodology by which individual incremental collision probabilities are combined to express an overall collision probability. The specific situation of ISS in this regard is that the determination of the level of safety for ISS is made via a single overall expression of critical component penetration risk. This paper details the combinatorial mathematical methods for calculating and expressing individual component (or incremental) penetration risks, utilizing component risk probabilities to produce an overall station penetration risk probability, and calculating an expected probability of loss from estimates for the loss of life given a penetration. Additionally, the paper will examine whether the statistical Poissonian answer to the orbital collision problem can be favorably compared to the results of a Monte Carlo simulation.
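The combinatorial step described here, going from incremental component risks to an overall probability, can be sketched generically (the probabilities below are illustrative, not ISS values):

```python
import math

def combined_risk(p_list):
    """Overall penetration probability from independent incremental
    component risks: 1 - prod(1 - p_i)."""
    q = 1.0
    for p in p_list:
        q *= 1.0 - p
    return 1.0 - q

def poisson_approx(p_list):
    """Poissonian answer: 1 - exp(-sum p_i), accurate when each p_i
    is small, as in the binomial-to-Poisson approximation."""
    return 1.0 - math.exp(-sum(p_list))

ps = [0.01] * 10                      # ten components, 1% risk each
exact = combined_risk(ps)
approx = poisson_approx(ps)
```

For small per-component risks the two expressions nearly coincide, which is the sense in which the Poisson assumption was found appropriate.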
Seismic velocities and Poisson's ratio of shallow unconsolidated sands
Bachrach, R.; Dvorkin, J.; Nur, A.M.
2000-04-01
The authors determined P- and S-wave velocity depth profiles in shallow, unconsolidated beach sand by analyzing three-component surface seismic data. P- and S-wave velocity profiles were calculated from travel time measurements of vertical and tangential component seismograms, respectively. The results reveal two discrepancies between theory and data. Whereas both velocities were found to be proportional to the pressure raised to the power of 1/6, as predicted by the Hertz-Mindlin contact theory, the actual values of the velocities are less than half of those calculated from this theory. The authors attribute this discrepancy to the angularity of the sand grains. Assuming that the average radii of curvature at the grain contacts are smaller than the average radii of the grains, they modify the Hertz-Mindlin theory accordingly. They found that the ratio of the contact radius to the grain radius is about 0.086. The second disparity is between the observed Poisson's ratio of 0.15 and the theoretical value (0.008 for a random pack of quartz spheres). This discrepancy can be reconciled by assuming slip at the grain contacts. Because slip decreases the shearing between grains, Poisson's ratio increases.
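The P^{1/6} scaling above is a power law that can be checked by a log-log least-squares fit; a generic sketch with synthetic data obeying v = a·P^{1/6} exactly (values invented, not the beach-sand measurements):

```python
import math

def fit_power_law(pressures, velocities):
    """Least-squares fit of v = a * P**m in log-log space; returns (a, m)."""
    lx = [math.log(p) for p in pressures]
    ly = [math.log(v) for v in velocities]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    return math.exp(my - m * mx), m

pressures = [1.0, 2.0, 5.0, 10.0, 20.0]            # arbitrary units
velocities = [300.0 * p ** (1.0 / 6.0) for p in pressures]
a, m = fit_power_law(pressures, velocities)
```

The fitted exponent m tests the Hertz-Mindlin prediction of 1/6, while the prefactor a carries the discrepancy the authors attribute to grain angularity.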
Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model
NASA Astrophysics Data System (ADS)
Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.
2014-03-01
Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.
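The paper solves the full nonlinear Poisson-Boltzmann equation in a spherical shell; as a much simpler hedged sketch, the linearised (Debye-Hückel) planar limit already shows the Boltzmann-weighted ion partitioning the abstract describes (all parameters are dimensionless and invented):

```python
import math

def debye_huckel_potential(phi0, x, debye_len):
    """Linearised Poisson-Boltzmann potential near a charged planar wall,
    in thermal units (phi = e*psi/kT): phi(x) = phi0 * exp(-x / lambda_D)."""
    return phi0 * math.exp(-x / debye_len)

def ion_densities(phi, n0):
    """Boltzmann-weighted number densities (cations, anions) for a 1:1 salt;
    for phi > 0, cations are depleted and anions enriched."""
    return n0 * math.exp(-phi), n0 * math.exp(phi)

phi_wall = debye_huckel_potential(2.0, 0.0, 1.0)    # at the wall
phi_far = debye_huckel_potential(2.0, 10.0, 1.0)    # ten Debye lengths out
```

The nonlinear equation solved numerically in the paper replaces the exponential decay with a profile obtained from sinh(phi), but the ion weighting is the same.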
The multisensor PHD filter: II. Erroneous solution via Poisson magic
NASA Astrophysics Data System (ADS)
Mahler, Ronald
2009-05-01
The theoretical foundation for the probability hypothesis density (PHD) filter is the FISST multitarget differential and integral calculus. The "core" PHD filter presumes a single sensor. Theoretically rigorous formulas for the multisensor PHD filter can be derived using the FISST calculus, but are computationally intractable. A less theoretically desirable solution-the iterated-corrector approximation-must be used instead. Recently, it has been argued that an "elementary" methodology, the "Poisson-intensity approach," renders FISST obsolete. It has further been claimed that the iterated-corrector approximation is suspect, and in its place an allegedly superior "general multisensor intensity filter" has been proposed. In this and a companion paper I demonstrate that it is these claims which are erroneous. The companion paper introduces formulas for the actual "general multisensor intensity filter." In this paper I demonstrate that (1) the "general multisensor intensity filter" fails in important special cases; (2) it will perform badly in even the easiest multitarget tracking problems; and (3) these rather serious missteps suggest that the "Poisson-intensity approach" is inherently faulty.
Spatial correlation in Bayesian logistic regression with misclassification.
Bihrmann, Kristine; Toft, Nils; Nielsen, Søren Saxmose; Ersbøll, Annette Kjær
2014-06-01
Standard logistic regression assumes that the outcome is measured perfectly. In practice, this is often not the case, which could lead to biased estimates if not accounted for. This study presents Bayesian logistic regression with adjustment for misclassification of the outcome applied to data with spatial correlation. The models assessed include a fixed effects model, an independent random effects model, and models with spatially correlated random effects modelled using conditional autoregressive prior distributions (ICAR and ICAR(ρ)). Performance of these models was evaluated in a simulation study. Parameters were estimated by Markov Chain Monte Carlo methods, using slice sampling to improve convergence. The results demonstrated that adjustment for misclassification must be included to produce unbiased regression estimates. With strong correlation the ICAR model performed best. With weak or moderate correlation the ICAR(ρ) performed best. With unknown spatial correlation the recommended model would be the ICAR(ρ), assuming convergence can be obtained. PMID:24889989
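The core adjustment is at the likelihood level: the probability of the observed outcome mixes the true probability with sensitivity and specificity. A minimal (non-Bayesian, single-covariate) sketch of that adjusted likelihood, with all inputs invented:

```python
import math

def adjusted_log_lik(beta0, beta1, xs, ys, se, sp):
    """Log-likelihood of logistic regression when the observed binary
    outcome is misclassified with sensitivity `se` and specificity `sp`:
    P(obs = 1) = se * p + (1 - sp) * (1 - p)."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))   # true probability
        q = se * p + (1.0 - sp) * (1.0 - p)                # observed probability
        ll += math.log(q) if y == 1 else math.log(1.0 - q)
    return ll
```

With se = sp = 1 this reduces to the standard logistic log-likelihood; the paper embeds the same adjustment in a Bayesian model with spatially correlated random effects.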
ERIC Educational Resources Information Center
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung
2014-01-01
The simultaneous item bias test (SIBTEST) regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…
Can luteal regression be reversed?
Telleria, Carlos M
2006-01-01
The corpus luteum is an endocrine gland whose limited lifespan is hormonally programmed. This debate article summarizes findings of our research group that challenge the principle that the end of function of the corpus luteum or luteal regression, once triggered, cannot be reversed. Overturning luteal regression by pharmacological manipulations may be of critical significance in designing strategies to improve fertility efficacy. PMID:17074090
Logistic Regression: Concept and Application
ERIC Educational Resources Information Center
Cokluk, Omay
2010-01-01
The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
NASA Astrophysics Data System (ADS)
Fabian, Karl; Shcherbakov, Valera
2010-05-01
In contrast to the predominant paradigm, recent studies indicate that the lengths of polarity intervals do not follow Poisson statistics, not even if non-stationary Poisson processes are considered. It is here shown that first-passage time (FPT) statistics for a one-dimensional random walk provides a good fit to the polarity time scale (PTS) in the range of stable polarity durations between 10 ka and 3000 ka. This fit is achieved by adjusting only a single diffusion time T, which comes to lie between 70 ka and 100 ka depending on the PTS chosen. A physical interpretation, why the FPT distribution of a random-walk process applies to the geodynamo, could relate to a balance between decay of stochastic turbulence and generation of the magnetic field. A simplified picture assumes the field generation to occur from a collection of 10-100 statistically independent dynamo processes, where each is described, e.g., by a Rikitake equation in the chaotic regime. An interesting feature of the random walk model is that it naturally introduces an internal variable, the position of the walk, which could be linked to field intensity. This connection would suggest that the variance of field intensity increases with the duration of the polarity interval. It does not predict a strong correlation between the strength of the paleofield and the duration of a chron. A further strength of the random walk model is that superchrons are not outliers, but natural rare events within the system. The apparent non-stationary nature of the geodynamo can be interpreted in the random walk model by a continuous shift in the governing parameters, and does not require major restructuring of the internal geodynamo process as in the case of the Poisson picture.
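The FPT mechanism invoked above is easy to simulate; a hypothetical sketch drawing first-passage times of a symmetric walk to a barrier (step size, barrier, and sample count are arbitrary, not calibrated to the geodynamo):

```python
import random

def first_passage_time(rng, step=1.0, barrier=10.0):
    """Number of steps of a symmetric random walk started at 0 until it
    first crosses +-barrier: one draw from the FPT distribution."""
    x, t = 0.0, 0
    while abs(x) < barrier:
        x += step if rng.random() < 0.5 else -step
        t += 1
    return t

rng = random.Random(42)
samples = [first_passage_time(rng) for _ in range(500)]
mean_fpt = sum(samples) / len(samples)
```

The mean exit time scales as barrier^2, playing the role of the single diffusion time T adjusted in the fit, while the heavy right tail makes long chrons (superchrons) rare but natural events.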
Wild bootstrap for quantile regression.
Feng, Xingdong; He, Xuming; Hu, Jianhua
2011-12-01
The existing theory of the wild bootstrap has focused on linear estimators. In this note, we broaden its validity by providing a class of weight distributions that is asymptotically valid for quantile regression estimators. As most weight distributions in the literature lead to biased variance estimates for nonlinear estimators of linear regression, we propose a modification of the wild bootstrap that admits a broader class of weight distributions for quantile regression. A simulation study on median regression is carried out to compare various bootstrap methods. With a simple finite-sample correction, the wild bootstrap is shown to account for general forms of heteroscedasticity in a regression model with fixed design points. PMID:23049133
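A single wild-bootstrap replicate perturbs each residual by an independent weight; the sketch below uses the familiar ±1 (Rademacher) weights as one member of a valid weight class (the paper's specific weight distributions are not reproduced here):

```python
import random

def wild_bootstrap_responses(fitted, residuals, seed=0):
    """One wild-bootstrap replicate: y*_i = fitted_i + w_i * residual_i
    with independent Rademacher weights w_i in {-1, +1}."""
    rng = random.Random(seed)
    return [f + r * (1.0 if rng.random() < 0.5 else -1.0)
            for f, r in zip(fitted, residuals)]

fitted = [1.0, 2.0, 3.0]
residuals = [0.1, -0.2, 0.3]
y_star = wild_bootstrap_responses(fitted, residuals, seed=1)
```

Refitting the regression to each replicate y* and collecting the estimates gives bootstrap standard errors that remain valid under heteroscedasticity, since each weight leaves the residual magnitude at its own design point.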
[Regression grading in gastrointestinal tumors].
Tischoff, I; Tannapfel, A
2012-02-01
Preoperative neoadjuvant chemoradiation therapy is a well-established and essential part of the interdisciplinary treatment of gastrointestinal tumors. Neoadjuvant treatment leads to regressive changes in tumors. To evaluate the histological tumor response, different scoring systems describing regressive changes are used, known as tumor regression grading. Tumor regression grading is usually based on the presence of residual vital tumor cells in proportion to the total tumor size. Currently, no nationally or internationally accepted grading systems exist. In general, common guidelines should be used in the pathohistological diagnostics of tumors after neoadjuvant therapy. In particular, the standard tumor grading will be replaced by tumor regression grading. Furthermore, tumors after neoadjuvant treatment are marked with the prefix "y" in the TNM classification. PMID:22293790
Fungible weights in logistic regression.
Jones, Jeff A; Waller, Niels G
2016-06-01
In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights.
Non-linear properties of metallic cellular materials with a negative Poisson's ratio
NASA Technical Reports Server (NTRS)
Choi, J. B.; Lakes, R. S.
1992-01-01
Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on the famous Galton data set on heredity. We use the lm R command and get coefficient estimates, the standard error of the error, R2, residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
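A Python counterpart of the `lm` fit used in the first exercise might look as follows (assuming nothing about Galton's actual numbers; the toy data lie exactly on y = 2x + 1):

```python
def simple_lm(xs, ys):
    """OLS fit of y = b0 + b1 * x, returning (b0, b1, R^2), i.e. the
    headline quantities a summary of R's lm() reports."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b0, b1, 1.0 - ss_res / ss_tot

b0, b1, r2 = simple_lm([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
pred = b0 + b1 * 5.0       # prediction at a new point, as in the second exercise
```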
Numerical calibration of the stable poisson loaded specimen
NASA Technical Reports Server (NTRS)
Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.
1992-01-01
An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.
Deformations of non-semisimple Poisson pencils of hydrodynamic type
NASA Astrophysics Data System (ADS)
Della Vedova, Alberto; Lorenzoni, Paolo; Savoldi, Andrea
2016-09-01
We study the deformations of two-component non-semisimple Poisson pencils of hydrodynamic type associated with Balinskiǐ–Novikov algebras. We show that in most cases the second-order deformations are parametrized by two functions of a single variable. One function is invariant with respect to the subgroup of Miura transformations that preserve the dispersionless limit, while the other is related to a one-parameter family of truncated structures. In two exceptional cases the second-order deformations are parametrized by four functions: two are invariants and two are related to a two-parameter family of truncated structures. We also study the lift of deformations of n-component semisimple structures. This example suggests that deformations of non-semisimple pencils corresponding to the lifted invariant parameters are unobstructed.
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution has gained importance in survival analysis owing to its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model when misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, discrete frailty distributions are appropriate in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
Application of the sine-Poisson equation in solar magnetostatics
NASA Technical Reports Server (NTRS)
Webb, G. M.; Zank, G. P.
1990-01-01
Solutions of the sine-Poisson equations are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with distance vertical to the base, with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or no X-points.
Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry
NASA Technical Reports Server (NTRS)
Hong, Yie-Ming
1973-01-01
Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.
Analytical stress intensity solution for the Stable Poisson Loaded specimen
NASA Technical Reports Server (NTRS)
Ghosn, Louis J.; Calomino, Anthony M.; Brewer, David N.
1993-01-01
An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles, are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.
Numerical Solution of the Gyrokinetic Poisson Equation in TEMPEST
NASA Astrophysics Data System (ADS)
Dorr, Milo; Cohen, Bruce; Cohen, Ronald; Dimits, Andris; Hittinger, Jeffrey; Kerbel, Gary; Nevins, William; Rognlien, Thomas; Umansky, Maxim; Xiong, Andrew; Xu, Xueqiao
2006-10-01
The gyrokinetic Poisson (GKP) model in the TEMPEST continuum gyrokinetic edge plasma code yields the electrostatic potential due to the charge density of electrons and an arbitrary number of ion species, including the effects of gyroaveraging in the limit kρ ≪ 1. The TEMPEST equations are integrated as a differential-algebraic system involving a nonlinear system solve via Newton-Krylov iteration. The GKP preconditioner block is inverted using a multigrid preconditioned conjugate gradient (CG) algorithm. Electrons are treated as kinetic or adiabatic. The Boltzmann relation in the adiabatic option employs flux surface averaging to maintain neutrality within field lines and is solved self-consistently with the GKP equation. A decomposition procedure circumvents the near singularity of the GKP Jacobian block that otherwise degrades CG convergence.
Nonstationary elementary-field light randomly triggered by Poisson impulses.
Fernández-Pousa, Carlos R
2013-05-01
A stochastic theory of nonstationary light describing the random emission of elementary pulses is presented. The emission is governed by a nonhomogeneous Poisson point process determined by a time-varying emission rate. The model describes, in the appropriate limits, stationary, cyclostationary, locally stationary, and pulsed radiation, and reduces to a Gaussian theory in the limit of dense emission rate. The first- and second-order coherence theories are solved after the computation of second- and fourth-order correlation functions by use of the characteristic function. The ergodicity of second-order correlations under various types of detectors is explored and a number of observables, including optical spectrum, amplitude, and intensity correlations, are analyzed. PMID:23695325
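A nonhomogeneous Poisson point process of the kind governing the emission above can be sampled by Lewis-Shedler thinning: generate candidate times at a bounding constant rate and keep each with probability proportional to the local rate. The emission rate below is invented for illustration, not taken from the paper:

```python
import numpy as np

def nhpp_thinning(rate, t_max, rate_max, rng):
    """Sample event times of a nonhomogeneous Poisson process on [0, t_max]
    by thinning: draw candidates from a homogeneous process at rate_max and
    accept each candidate at time t with probability rate(t) / rate_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)  # next candidate inter-arrival
        if t > t_max:
            return np.array(events)
        if rng.uniform() < rate(t) / rate_max:
            events.append(t)

# Illustrative time-varying emission rate, bounded above by rate_max = 10.
rate = lambda t: 5.0 * (1.0 + np.sin(t))
rng = np.random.default_rng(1)

# Expected number of events per realization is the integral of the rate,
# here 10*pi (about 31.4) over one period [0, 2*pi].
counts = [len(nhpp_thinning(rate, 2 * np.pi, 10.0, rng)) for _ in range(400)]
mean_count = float(np.mean(counts))
```

Averaged over many realizations, the event count concentrates around the integral of the rate, which is the property the coherence theory in the abstract builds on.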
Variational Principle and Stability of Nonmonotonic Vlasov-Poisson Equilibria
NASA Astrophysics Data System (ADS)
Morrison, P. J.
1987-10-01
The stability of nonmonotonic equilibria of the Vlasov-Poisson equation is assessed by using nonlinear constants of motion. The constants of motion make up the free energy of the system, which upon variation yields nonmonotonic equilibria. Such equilibria have not previously been obtainable from a variational principle, but here this is accomplished by the inclusion of a passively advected tracer field. Definiteness of the second variation of the free energy gives a sufficient condition for stability in agreement with Gardner's theorem [5]. Previously, we have argued that indefiniteness implies either spectral instability or negative energy modes, which are generically unstable when one adds dissipation or nonlinearity [6]. Such is the case for the nonmonotonic equilibria considered.
Note on the Poisson structure of the damped oscillator
Hone, A. N. W.; Senthilvelan, M.
2009-10-15
The damped harmonic oscillator is one of the most studied systems with respect to the problem of quantizing dissipative systems. Recently Chandrasekar et al. [J. Math. Phys. 48, 032701 (2007)] applied the Prelle-Singer method to construct conserved quantities and an explicit time-independent Lagrangian and Hamiltonian structure for the damped oscillator. Here we describe the associated Poisson bracket which generates the continuous flow, pointing out that there is a subtle problem of definition on the whole phase space. The action-angle variables for the system are also presented, and we further explain how to extend these considerations to the discrete setting. Some implications for the quantum case are briefly mentioned.
A class of Poisson Nijenhuis structures on a tangent bundle
NASA Astrophysics Data System (ADS)
Sarlet, W.; Vermeire, F.
2004-06-01
Equipping the tangent bundle TQ of a manifold with a symplectic form coming from a regular Lagrangian L, we explore how to obtain a Poisson-Nijenhuis structure from a given type (1, 1) tensor field J on Q. It is argued that the complete lift Jc of J is not the natural candidate for a Nijenhuis tensor on TQ, but plays a crucial role in the construction of a different tensor R, which appears to be the pullback under the Legendre transform of the lift of J to T*Q. We show how this tangent bundle view brings new insights and is also capable of producing all the important results known from previous studies on the cotangent bundle, in the case when Q is equipped with a Riemannian metric. The present approach further paves the way for future generalizations.
A geometric multigrid Poisson solver for domains containing solid inclusions
NASA Astrophysics Data System (ADS)
Botto, Lorenzo
2013-03-01
A Cartesian grid method for the fast solution of the Poisson equation in three-dimensional domains with embedded solid inclusions is presented and its performance analyzed. The efficiency of the method, which assumes Neumann conditions at the immersed boundaries, is comparable to that of a multigrid method for regular domains. The method is light in terms of memory usage and easily adaptable to parallel architectures. Tests with random and ordered arrays of solid inclusions, including spheres and ellipsoids, demonstrate smooth convergence of the residual for small separations between the inclusion surfaces. This feature is important, for instance, in simulations of nearly-touching finite-size particles. The implementation of the method, “MG-Inc”, is available online. Catalogue identifier: AEOE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 19068. No. of bytes in distributed program, including test data, etc.: 215118. Distribution format: tar.gz. Programming language: C++ (fully tested with the GNU GCC compiler). Computer: Any machine supporting a standard C++ compiler. Operating system: Any OS supporting a standard C++ compiler. RAM: About 150 MB at 128³ resolution. Classification: 4.3. Nature of problem: Poisson equation in domains containing inclusions; Neumann boundary conditions at immersed boundaries. Solution method: Geometric multigrid with finite-volume discretization. Restrictions: Stair-case representation of the immersed boundaries. Running time: Typically a fraction of a minute at 128³ resolution.
Identifying Seismicity Levels via Poisson Hidden Markov Models
NASA Astrophysics Data System (ADS)
Orfanogiannaki, K.; Karlis, D.; Papadopoulos, G. A.
2010-08-01
Poisson hidden Markov models (PHMMs) are introduced to model temporal seismicity changes. In a PHMM the unobserved sequence of states is a finite-state Markov chain, and the distribution of the observation at any time is Poisson with a rate depending only on the current state of the chain. Thus, PHMMs allow a region to have a varying seismicity rate. We applied the PHMM to model earthquake frequencies in the seismogenic area of Killini, Ionian Sea, Greece, for the period 1990-2006. Simulations of data from the assumed model showed that it describes the true data quite well. The earthquake catalogue is dominated by main shocks occurring in 1993, 1997 and 2002. The time plot of PHMM seismicity states not only reproduces the three seismicity clusters but also quantifies the seismicity level and underlines the strength of the serial dependence of the events at any point in time. Foreshock activity becomes quite evident before the three sequences, with a gradual transition to states of cascade seismicity. Traditional analysis, based on the determination of highly significant changes of seismicity rates, failed to recognize foreshocks before the 1997 main shock due to the low number of events preceding it. The PHMM thus performs better than traditional analysis, since the transition from one state to another depends not only on the total number of events involved but also on the current state of the system. Therefore, the PHMM recognizes significant changes of seismicity soon after they start, which is of particular importance for real-time recognition of foreshock activity and other seismicity changes.
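A two-state PHMM of the kind described above can be sketched in a few lines: a quiet state and an active state with different Poisson rates, plus the scaled forward recursion for the log-likelihood. The rates, transition matrix, and simulated data below are invented for illustration and are not the Killini parameters:

```python
from math import lgamma

import numpy as np

rates = np.array([1.0, 8.0])          # Poisson rate per hidden state
P = np.array([[0.95, 0.05],           # state transition matrix
              [0.20, 0.80]])
pi = np.array([0.5, 0.5])             # initial state distribution

def poisson_pmf(k, lam):
    # Poisson probability mass at count k, vectorized over the rate vector
    return np.exp(k * np.log(lam) - lam - lgamma(k + 1))

def phmm_loglik(counts):
    """Scaled forward algorithm: log-likelihood of an observed count series."""
    alpha = pi * poisson_pmf(counts[0], rates)
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for k in counts[1:]:
        alpha = (alpha @ P) * poisson_pmf(k, rates)
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return float(ll)

# Simulate a state path and per-interval event counts from the model.
rng = np.random.default_rng(2)
states = [0]
for _ in range(299):
    states.append(int(rng.choice(2, p=P[states[-1]])))
counts = rng.poisson(rates[np.array(states)])
ll = phmm_loglik(counts)
```

In a full analysis the same forward quantities feed EM estimation and state decoding (e.g. Viterbi), which is how the seismicity-state time plot in the abstract is obtained.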
Anisotropic norm-oriented mesh adaptation for a Poisson problem
NASA Astrophysics Data System (ADS)
Brèthes, Gautier; Dervieux, Alain
2016-10-01
We present a novel formulation for mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation, since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one, since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as the number of degrees of freedom is increased, only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem as an optimization, over the well-identified set of metrics, of a well-defined functional. In the new formulation, we amplify this advantage: we search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example adapting the mesh for drag, lift, and moment in one shot in aerodynamics. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
Regional regression of flood characteristics employing historical information
Tasker, Gary D.; Stedinger, J.R.
1987-01-01
Streamflow gauging networks provide hydrologic information for use in estimating the parameters of regional regression models. The regional regression models can be used to estimate flood statistics, such as the 100 yr peak, at ungauged sites as functions of drainage basin characteristics. A recent innovation in regional regression is the use of a generalized least squares (GLS) estimator that accounts for unequal station record lengths and sample cross correlation among the flows. However, this technique does not account for historical flood information. A method is proposed here to adjust this generalized least squares estimator to account for possible information about historical floods available at some stations in a region. The historical information is assumed to be in the form of observations of all peaks above a threshold during a long period outside the systematic record period. A Monte Carlo simulation experiment was performed to compare the GLS estimator adjusted for historical floods with the unadjusted GLS estimator and the ordinary least squares estimator. Results indicate that using the GLS estimator adjusted for historical information significantly improves the regression model. ?? 1987.
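The GLS estimator at the core of this approach is compact enough to sketch. The setup below (log flood quantiles regressed on log drainage area, with error variances shrinking with record length) is a simplified stand-in for the paper's covariance structure, and all numbers are invented:

```python
import numpy as np

def gls(X, y, cov):
    """Generalized least squares: beta = (X' C^-1 X)^-1 X' C^-1 y."""
    Ci = np.linalg.inv(cov)
    return np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

# Illustrative data: 6 gauged sites, response = log flood quantile,
# predictor = log drainage area (all values made up).
rng = np.random.default_rng(3)
log_area = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
X = np.column_stack([np.ones(6), log_area])
y = 0.8 + 0.5 * log_area + rng.normal(0.0, 0.1, 6)

# Error covariance: diagonal terms shrink with record length n_i (longer
# records give smaller sampling variance); in the full method, off-diagonal
# terms would carry the cross-correlation of concurrent flows, and historical
# flood information would further reduce the diagonal entries.
n = np.array([20, 35, 50, 15, 60, 25])
cov = np.diag(1.0 / n)
beta_gls = gls(X, y, cov)

# With an identity covariance, GLS reduces to ordinary least squares.
beta_ols = gls(X, y, np.eye(6))
beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
```

Accounting for historical peaks-over-threshold information amounts to refining the entries of cov, which is why the adjusted estimator in the abstract improves on both OLS and unadjusted GLS.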
Multiple Regression and Its Discontents
ERIC Educational Resources Information Center
Snell, Joel C.; Marsh, Mitchell
2012-01-01
Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.
Regression methods for spatial data
NASA Technical Reports Server (NTRS)
Yakowitz, S. J.; Szidarovszky, F.
1982-01-01
The kriging approach, a parametric regression method used by hydrologists and mining engineers, among others, also provides an error estimate for the integral of the regression function. The kriging method is explored and some of its statistical characteristics are described. The Watson method and theory are extended so that the kriging features are displayed. Theoretical and computational comparisons of the kriging and Watson approaches are offered.
Wrong Signs in Regression Coefficients
NASA Technical Reports Server (NTRS)
McGee, Holly
1999-01-01
When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
Regression Discontinuity Designs in Epidemiology
Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till
2014-01-01
When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
McKenzie, K.R.
1959-07-01
An electrode support which permits accurate alignment and adjustment of the electrode in a plurality of planes and about a plurality of axes in a calutron is described. The support will align the slits in the electrode with the slits of an ionizing chamber so as to provide for the egress of ions. The support comprises an insulator, a leveling plate carried by the insulator and having diametrically opposed attaching screws screwed to the plate and the insulator and diametrically opposed adjusting screws for bearing against the insulator, and an electrode associated with the plate for adjustment therewith.
Incorporating Dipolar Solvents with Variable Density in Poisson-Boltzmann Electrostatics
Azuara, Cyril; Orland, Henri; Bon, Michael; Koehl, Patrice; Delarue, Marc
2008-01-01
We describe a new way to calculate the electrostatic properties of macromolecules that goes beyond the classical Poisson-Boltzmann treatment with only a small extra CPU cost. The solvent region is no longer modeled as a homogeneous dielectric medium but rather as an assembly of self-orienting interacting dipoles of variable density. The method effectively unifies both the Poisson-centric view and the Langevin dipole model. The model results in a variable dielectric constant ε(r) in the solvent region and also in a variable solvent density ρ(r) that depends on the nature of the closest exposed solute atoms. The model was calibrated using small-molecule and ion solvation data with only two adjustable parameters, namely the size and dipole moment of the solvent. Hydrophobicity scales derived from the solvent density profiles agree very well with independently derived hydrophobicity scales, both at the atomic and residue levels. Dimerization interfaces in homodimeric proteins and lipid-binding regions in membrane proteins clearly appear as poorly solvated patches on the solute accessible surface. Comparison of the thermally averaged solvent density of this model with the one derived from molecular dynamics simulations shows qualitative agreement on a coarse-grained level. Because this calculation is much more
Attachment style and adjustment to divorce.
Yárnoz-Yaben, Sagrario
2010-05-01
Divorce is becoming increasingly widespread in Europe. In this study, I present an analysis of the role played by attachment style (secure, dismissing, preoccupied and fearful, plus the dimensions of anxiety and avoidance) in the adaptation to divorce. Participants comprised divorced parents (N = 40) from a medium-sized city in the Basque Country. The results reveal a lower proportion of people with secure attachment in the sample group of divorcees. Attachment style and dependence (emotional and instrumental) are closely related. I have also found associations between measures that showed a poor adjustment to divorce and the preoccupied and fearful attachment styles. Adjustment is related to a dismissing attachment style and to the avoidance dimension. Multiple regression analysis confirmed that secure attachment and the avoidance dimension predict adjustment to divorce and positive affectivity while preoccupied attachment and the anxiety dimension predicted negative affectivity. Implications for research and interventions with divorcees are discussed.
A note on a characterization of the poisson and uniform distributions
Pusz, J.
1995-08-25
It is shown that if U and X are independent random variables, X ≥ 0, U is uniformly distributed on [0, 1], and X satisfies the equation UX + 2 ∼ U + X (equality in distribution), then X − 2 has the Poisson distribution with parameter equal to one. The above equation also characterizes the uniform distribution when X − 2 is a Poisson random variable. Moreover, a multivariate generalization is given.
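A Monte Carlo sanity check of this characterization is easy to run. Note two assumptions: the garbled record does not state the uniform interval, so U ∼ Uniform[0, 1] is assumed here, and only the first two moments of the two sides are compared rather than the full distributions:

```python
import numpy as np

# If X - 2 ~ Poisson(1) and U ~ Uniform[0, 1] is independent of X, then
# U*X + 2 and U + X should have the same distribution. Both means should be
# near 0.5 * 3 + 2 = 3.5 and both variances near 13/12.
rng = np.random.default_rng(4)
n = 200_000
U = rng.uniform(0.0, 1.0, n)
X = 2.0 + rng.poisson(1.0, n)

lhs = U * X + 2.0
rhs = U + X

mean_gap = abs(lhs.mean() - rhs.mean())
var_gap = abs(lhs.var() - rhs.var())
```

Matching low-order moments is of course only a necessary condition; the theorem asserts equality of the full distributions, which a Kolmogorov-Smirnov comparison of the two samples would probe more sharply.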
Comment on: ‘A Poisson resampling method for simulating reduced counts in nuclear medicine images’
NASA Astrophysics Data System (ADS)
de Nijs, Robin
2015-07-01
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and demonstrated the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
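The schemes compared in this comment can be sketched as follows. This is one standard reading of "Poisson resampling" as binomial thinning (each recorded count kept with probability 1/2), not code from the note; thinning a Poisson variable yields another Poisson variable, so it preserves the variance-to-mean ratio of 1, while redrawing from Poisson(n/2) inflates the variance:

```python
import numpy as np

rng = np.random.default_rng(5)
full = rng.poisson(100.0, size=100_000)            # full-count "image" pixels

# Poisson resampling (binomial thinning): keep each count with p = 1/2.
half_resampled = rng.binomial(full, 0.5)
# Direct redrawing from a Poisson with half the observed mean.
half_redrawn_p = rng.poisson(0.5 * full)
# Direct redrawing from a matched Gaussian, rounded to integer counts.
half_redrawn_g = np.round(rng.normal(0.5 * full, np.sqrt(0.5 * full)))

# For a true Poisson image, variance / mean should equal 1. Thinning keeps
# this; Poisson redrawing adds the variance of the observed counts on top
# (expected ratio 1.5 here).
ratio_resampled = float(half_resampled.var() / half_resampled.mean())
ratio_redrawn = float(half_redrawn_p.var() / half_redrawn_p.mean())
```

The inflated ratio of the redrawing variants is exactly the deviation from Poisson statistics that the comment's moment comparison detects.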
Modeling Repeated Count Data: Some Extensions of the Rasch Poisson Counts Model.
ERIC Educational Resources Information Center
Duijn, Marijtje A. J. van; Jansen, Margo G. H.
1995-01-01
The Rasch Poisson Counts Model, a unidimensional latent trait model for tests that postulates that intensity parameters are products of test difficulty and subject ability parameters, is expanded into the Dirichlet-Gamma-Poisson model, which takes into account variation between subjects and interaction between subjects and tests. (SLD)
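The multiplicative intensity structure of the Rasch Poisson Counts Model can be illustrated with a short simulation; all parameter values below are invented, and the test parameter is written as an "easiness" so that the intensity is simply ability times easiness:

```python
import numpy as np

# Rasch Poisson Counts model: the count of subject i on test j is Poisson
# with intensity theta_i * sigma_j (subject ability times test parameter).
rng = np.random.default_rng(6)
theta = rng.gamma(shape=4.0, scale=0.5, size=200)   # subject abilities
sigma = np.array([2.0, 4.0, 6.0, 8.0, 10.0])        # test "easiness" values

counts = rng.poisson(theta[:, None] * sigma[None, :])  # 200 x 5 count matrix

# Row sums are sufficient for ability: E[row sum] = theta_i * sum(sigma),
# so a simple moment estimate of each ability is row_sum / sum(sigma).
theta_hat = counts.sum(axis=1) / sigma.sum()
corr = float(np.corrcoef(theta, theta_hat)[0, 1])
```

The Dirichlet-Gamma-Poisson extension mentioned in the abstract replaces the fixed theta and sigma with random effects, which adds between-subject variation and subject-by-test interaction on top of this basic structure.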
Shrinkage regression-based methods for microarray missing value imputation
2013-01-01
Background: Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, regression-based methods are very popular and have been shown to perform better than other types of methods on many testing microarray datasets. Results: To further improve the performance of regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select genes similar to the target gene by Pearson correlation coefficients. Our methods then apply the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do. Conclusions: Imputation of missing values is a very important aspect of microarray data analyses because most downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values is an essential issue. Since our proposed shrinkage regression-based methods provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods. PMID:24565159
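The pipeline in this abstract (neighbor selection by Pearson correlation, least squares fit, shrunken coefficients) can be sketched on synthetic data. The fixed shrinkage factor below is a crude stand-in for the adaptive shrinkage estimator in the paper, and the data generator is invented:

```python
import numpy as np

def impute_missing(data, gene, sample, k=10, shrink=0.95):
    """Estimate data[gene, sample] from the k genes most correlated with
    `gene` (Pearson correlation computed on the remaining samples), via a
    least-squares fit whose slope coefficients are shrunk toward zero.
    The constant `shrink` stands in for the paper's adaptive choice."""
    other = np.delete(np.arange(data.shape[1]), sample)
    target = data[gene, other]
    rest = np.delete(np.arange(data.shape[0]), gene)
    # Rank candidate genes by absolute Pearson correlation with the target.
    corrs = np.array([abs(np.corrcoef(target, data[g, other])[0, 1])
                      for g in rest])
    neighbors = rest[np.argsort(corrs)[-k:]]
    # Least-squares regression of the target gene on its neighbors.
    A = np.column_stack([np.ones(len(other)), data[neighbors][:, other].T])
    beta, _, _, _ = np.linalg.lstsq(A, target, rcond=None)
    beta[1:] *= shrink  # shrink the slope coefficients
    return float(beta[0] + beta[1:] @ data[neighbors, sample])

# Synthetic expression matrix: 50 genes x 30 samples sharing a common
# factor, so genes are strongly correlated (as microarray genes often are).
rng = np.random.default_rng(7)
factor = rng.normal(size=30)
data = np.outer(rng.uniform(0.5, 1.5, 50), factor) + rng.normal(0.0, 0.05, (50, 30))

truth = data[0, 0]                      # pretend this entry is missing
estimate = impute_missing(data, gene=0, sample=0)
```

With strongly correlated genes the imputed value lands close to the hidden entry, which is the regime in which regression-based imputation outperforms simpler row- or column-mean fills.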
Incremental learning for ν-Support Vector Regression.
Gu, Bin; Sheng, Victor S; Wang, Zhijie; Ho, Derek; Osman, Said; Li, Shuo
2015-07-01
The ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter ν to control the number of support vectors and adjust the width of the tube automatically. However, compared to ν-Support Vector Classification (ν-SVC) (Schölkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function. Thus, directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR will not generate an effective initial solution. Designing an incremental ν-SVR learning algorithm is therefore the main challenge. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of ν-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments), respectively. The experiments on benchmark datasets demonstrate that INSVR can avoid infeasible updating paths as far as possible, and successfully converges to the optimal solution. The results also show that INSVR is faster than batch ν-SVR algorithms with both cold and warm starts.
Remotely Adjustable Hydraulic Pump
NASA Technical Reports Server (NTRS)
Kouns, H. H.; Gardner, L. D.
1987-01-01
Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.
Auxetic Black Phosphorus: A 2D Material with Negative Poisson's Ratio
NASA Astrophysics Data System (ADS)
Du, Yuchen; Maassen, Jesse; Wu, Wangran; Luo, Zhe; Xu, Xianfan; Ye, Peide D.
2016-10-01
The Poisson's ratio of a material characterizes its response to uniaxial strain. Materials normally possess a positive Poisson's ratio - they contract laterally when stretched, and expand laterally when compressed. A negative Poisson's ratio is theoretically permissible but has not, with few exceptions of man-made bulk structures, been experimentally observed in any natural materials. Here, we show that the negative Poisson's ratio exists in the low-dimensional natural material black phosphorus, and that our experimental observations are consistent with first principles simulations. Through application of uniaxial strain along zigzag and armchair directions, we find that both interlayer and intralayer negative Poisson's ratios can be obtained in black phosphorus. The phenomenon originates from the puckered structure of its in-plane lattice, together with coupled hinge-like bonding configurations.
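The sign convention above can be made concrete with a tiny sketch (the strain values are illustrative numbers, not measurements from the black phosphorus experiments):

```python
def poisson_ratio(axial_strain, transverse_strain):
    """Poisson's ratio: nu = -eps_transverse / eps_axial."""
    return -transverse_strain / axial_strain

# Ordinary material: stretched axially (+), contracts laterally (-) -> nu > 0
print(poisson_ratio(0.010, -0.003))   # nu = 0.3 (positive)

# Auxetic material: stretched axially (+), also expands laterally (+) -> nu < 0
print(poisson_ratio(0.010, 0.002))    # nu = -0.2 (negative, auxetic)
```

A negative ratio simply records that the transverse strain has the same sign as the applied axial strain.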
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observed equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
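The weighted normal-equation step described above can be sketched in a few lines; the observation equations, values and weights below are hypothetical stand-ins, not survey data:

```python
import numpy as np

# Hypothetical linearized observation equations A x = b with one weight per observation.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 3.0, 5.2])
w = np.array([1.0, 1.0, 4.0])   # larger weight = more trusted observation

W = np.diag(w)
# Normal equations of the weighted least-squares adjustment: (A^T W A) x = A^T W b
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
residuals = b - A @ x
print(x, residuals)
```

The solved shifts `x` are what would then be applied to the adjustable station coordinates, and the residuals are what the program lists per observation.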
Interpretation of Standardized Regression Coefficients in Multiple Regression.
ERIC Educational Resources Information Center
Thayer, Jerome D.
The extent to which standardized regression coefficients (beta values) can be used to determine the importance of a variable in an equation was explored. The beta value and the part correlation coefficient--also called the semi-partial correlation coefficient and reported in squared form as the incremental "r squared"--were compared for variables…
Regressive Evolution in Astyanax Cavefish
Jeffery, William R.
2013-01-01
A diverse group of animals, including members of most major phyla, have adapted to life in the perpetual darkness of caves. These animals are united by the convergence of two regressive phenotypes, loss of eyes and pigmentation. The mechanisms of regressive evolution are poorly understood. The teleost Astyanax mexicanus is of special significance in studies of regressive evolution in cave animals. This species includes an ancestral surface dwelling form and many conspecific cave-dwelling forms, some of which have evolved their regressive phenotypes independently. Recent advances in Astyanax development and genetics have provided new information about how eyes and pigment are lost during cavefish evolution; namely, they have revealed some of the molecular and cellular mechanisms involved in trait modification, the number and identity of the underlying genes and mutations, the molecular basis of parallel evolution, and the evolutionary forces driving adaptation to the cave environment. PMID:19640230
Laplace regression with censored data.
Bottai, Matteo; Zhang, Jiajia
2010-08-01
We consider a regression model where the error term is assumed to follow a type of asymmetric Laplace distribution. We explore its use in the estimation of conditional quantiles of a continuous outcome variable given a set of covariates in the presence of random censoring. Censoring may depend on covariates. Estimation of the regression coefficients is carried out by maximizing a non-differentiable likelihood function. In the scenarios considered in a simulation study, the Laplace estimator showed correct coverage and shorter computation time than the alternative methods considered, some of which occasionally failed to converge. We illustrate the use of Laplace regression with an application to survival time in patients with small cell lung cancer.
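Maximizing the asymmetric-Laplace likelihood is, up to constants, equivalent to minimizing the check (pinball) loss of quantile regression. A minimal uncensored sketch on synthetic data follows; the data, starting values, and the Nelder-Mead choice are illustrative assumptions, not the authors' censored estimator:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 200)   # true intercept 1, slope 2

def check_loss(beta, tau):
    """Pinball loss: the negative asymmetric-Laplace log-likelihood up to constants."""
    r = y - (beta[0] + beta[1] * x)
    return np.sum(r * (tau - (r < 0)))

# Median regression (tau = 0.5); the objective is non-differentiable, so a
# derivative-free method is used here.
fit = minimize(check_loss, x0=[0.0, 0.0], args=(0.5,), method="Nelder-Mead")
print(fit.x)   # close to the true (1.0, 2.0)
```

Handling random censoring, as in the paper, requires modifying this likelihood for censored observations; the sketch only shows the uncensored core.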
Survival Data and Regression Models
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time models (AFT), which are parametric models where a function of a parameter depends linearly on the covariables. The second one is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology and, although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.
Interquantile Shrinkage in Regression Models
Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.
2012-01-01
Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546
[Is regression of atherosclerosis possible?].
Thomas, D; Richard, J L; Emmerich, J; Bruckert, E; Delahaye, F
1992-10-01
Experimental studies have shown the regression of atherosclerosis in animals given a cholesterol-rich diet and then given a normal diet or hypolipidemic therapy. Despite favourable results of clinical trials of primary prevention modifying the lipid profile, the concept of atherosclerosis regression in man remains very controversial. The methodological approach is difficult: it is based on angiographic data and requires strict standardisation of angiographic views and reliable quantitative techniques of analysis, which are available with image processing. Several methodologically acceptable clinical coronary studies have shown not only stabilisation but also regression of atherosclerotic lesions with reductions of about 25% in total cholesterol levels and of about 40% in LDL cholesterol levels. These reductions were obtained either by drugs as in CLAS (Cholesterol Lowering Atherosclerosis Study), FATS (Familial Atherosclerosis Treatment Study) and SCOR (Specialized Center of Research Intervention Trial), by profound modifications in dietary habits as in the Lifestyle Heart Trial, or by surgery (ileo-caecal bypass) as in POSCH (Program On the Surgical Control of the Hyperlipidemias). On the other hand, trials with non-lipid lowering drugs such as the calcium antagonists (INTACT, MHIS) have not shown significant regression of existing atherosclerotic lesions but only a decrease in the number of new lesions. The clinical benefits of these regression studies are difficult to demonstrate given the limited period of observation, relatively small population numbers and the fact that in some cases the subjects were asymptomatic. The decrease in the number of cardiovascular events therefore seems relatively modest and concerns essentially subjects who were symptomatic initially. The clinical repercussion of studies of prevention involving a single lipid factor is probably partially due to the reduction in progression and anatomical regression of the atherosclerotic plaque
Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm
NASA Astrophysics Data System (ADS)
Gubernatis, James
2014-03-01
A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little known theorem says that the solution of any set of o.d.e.'s is exactly solved by the expectation value over a set of arbitrary Poisson processes of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that as the theorem does not depend on the source of the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.
A bivariate survival model with compound Poisson frailty.
Wienke, A; Ripatti, S; Palmgren, J; Yashin, A
2010-01-30
A correlated frailty model is suggested for analysis of bivariate time-to-event data. The model is an extension of the correlated power variance function (PVF) frailty model (correlated three-parameter frailty model) (J. Epidemiol. Biostat. 1999; 4:53-60). It is based on a bivariate extension of the compound Poisson frailty model in univariate survival analysis (Ann. Appl. Probab. 1992; 4:951-972). It allows for a non-susceptible fraction (of zero frailty) in the population, overcoming the common assumption in survival analysis that all individuals are susceptible to the event under study. The model contains the correlated gamma frailty model and the correlated inverse Gaussian frailty model as special cases. A maximum likelihood estimation procedure for the parameters is presented and its properties are studied in a small simulation study. This model is applied to breast cancer incidence data of Swedish twins. The proportion of women susceptible to breast cancer is estimated to be 15 per cent.
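The non-susceptible fraction arises because a compound Poisson variable has a point mass at zero: if the number of summands N is Poisson(ρ), then P(Z = 0) = e^(−ρ). A univariate simulation sketch (all parameter values are illustrative assumptions, not estimates from the twin data):

```python
import numpy as np

rng = np.random.default_rng(42)
rho, shape, scale = 1.5, 2.0, 0.5   # hypothetical parameters
n = 100_000

# Compound Poisson frailty: Z = X_1 + ... + X_N with N ~ Poisson(rho),
# X_i iid Gamma(shape, scale). The sum of k such gammas is Gamma(k*shape, scale).
N = rng.poisson(rho, n)
Z = np.array([rng.gamma(shape * k, scale) if k > 0 else 0.0 for k in N])

frac_zero = np.mean(Z == 0)
print(frac_zero, np.exp(-rho))   # non-susceptible fraction; approximately e^(-rho)
```

Individuals with Z = 0 have zero hazard and never experience the event, which is exactly the susceptible/non-susceptible split the model exploits.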
Biomolecular electrostatics with the linearized Poisson-Boltzmann equation.
Fogolari, F; Zuccato, P; Esposito, G; Viglino, P
1999-01-01
Electrostatics plays a key role in many biological processes. The Poisson-Boltzmann equation (PBE) and its linearized form (LPBE) allow prediction of electrostatic effects for biomolecular systems. The discrepancies between the solutions of the PBE and those of the LPBE are well known for systems with a simple geometry, but much less so for biomolecular systems. Results for high charge density systems show that there are limitations to the applicability of the LPBE at low ionic strength and, to a lesser extent, at higher ionic strength. For systems with a simple geometry, the onset of nonlinear effects has been shown to be governed by the ratio of the electric field over the Debye screening constant. This ratio is used in the present work to correct the LPBE results to reproduce fairly accurately those obtained from the PBE for systems with a simple geometry. Since the correction does not involve any geometrical parameter, it can be easily applied to real biomolecular systems. The error on the potential for the LPBE (compared to the PBE) spans a few kT/q for the systems studied here and is greatly reduced by the correction. This allows for a more accurate evaluation of the electrostatic free energy of the systems. PMID:9876118
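For the simplest geometry, the LPBE has the closed-form Debye-Hückel solution outside a charged sphere, which reduces to the Coulomb potential when the screening constant κ vanishes. A sketch (the argument values are illustrative, not from the paper):

```python
import math

def debye_huckel_potential(q, r, a, kappa, eps_r):
    """LPBE (Debye-Hueckel) potential at distance r outside a sphere of
    radius a carrying charge q, in a medium of relative permittivity eps_r
    with inverse Debye screening length kappa. SI units."""
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    return (q * math.exp(-kappa * (r - a))
            / (4 * math.pi * eps0 * eps_r * r * (1 + kappa * a)))

# One elementary charge in water (eps_r ~ 80), probed at r = 2 nm:
q, a, r = 1.602e-19, 1e-9, 2e-9
print(debye_huckel_potential(q, r, a, 0.0, 80.0))    # unscreened (Coulomb) limit
print(debye_huckel_potential(q, r, a, 1.0e9, 80.0))  # screened, Debye length 1 nm
```

The exponential factor is the screening that the nonlinear PBE modifies at high charge density, which is where the correction discussed in the abstract matters.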
Linear-Nonlinear-Poisson models of primate choice dynamics.
Corrado, Greg S; Sugrue, Leo P; Seung, H Sebastian; Newsome, William T
2005-11-01
The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys (Macacca mulatta) foraged for juice rewards by making eye movements to one of two colored icons presented on a computer monitor, each rewarded on dynamic variable-interval schedules. Using a generalization of Wiener kernel analysis, we recover a compact mechanistic description of the impact of past reward on future choice in the form of a Linear-Nonlinear-Poisson model. We validate this model through rigorous predictive and generative testing. Compared to our earlier work with this same data set, this model proves to be a better description of choice behavior and is more tightly correlated with putative neural value signals. Refinements over previous models include hyperbolic (as opposed to exponential) temporal discounting of past rewards, and differential (as opposed to fractional) comparisons of option value. Through numerical simulation we find that within this class of strategies, the model parameters employed by animals are very close to those that maximize reward harvesting efficiency.
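The Linear-Nonlinear-Poisson cascade itself is easy to sketch: a linear filter of the input, a static nonlinearity that keeps the rate non-negative, and Poisson spike generation. Everything below (filter shape, softplus nonlinearity, bin scaling) is a generic illustrative choice, not the filters or value comparisons fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# L: linear stage - convolve the input with an exponentially decaying filter
stimulus = rng.normal(size=1000)               # stand-in for the reward history input
kernel = np.exp(-np.arange(20) / 5.0)
drive = np.convolve(stimulus, kernel, mode="same")

# N: nonlinear stage - softplus maps the drive to a non-negative firing rate
rate = np.log1p(np.exp(drive))

# P: Poisson stage - draw spike counts per time bin from the rate
spikes = rng.poisson(rate * 0.1)
print(spikes.sum())
```

Fitting an LNP model to behavior, as in the study, means estimating the filter and nonlinearity from choice and reward data rather than fixing them by hand.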
Continental crust composition constrained by measurements of crustal Poisson's ratio
NASA Astrophysics Data System (ADS)
Zandt, George; Ammon, Charles J.
1995-03-01
Deciphering the geological evolution of the Earth's continental crust requires knowledge of its bulk composition and global variability. The main uncertainties are associated with the composition of the lower crust. Seismic measurements probe the elastic properties of the crust at depth, from which composition can be inferred. Of particular note is Poisson's ratio, Σ; this elastic parameter can be determined uniquely from the ratio of P- to S-wave seismic velocity, and provides a better diagnostic of crustal composition than either P- or S-wave velocity alone [1]. Previous attempts to measure Σ have been limited by difficulties in obtaining coincident P- and S-wave data sampling the entire crust [2]. Here we report 76 new estimates of crustal Σ spanning all of the continents except Antarctica. We find that, on average, Σ increases with the age of the crust. Our results strongly support the presence of a mafic lower crust beneath cratons, and suggest either a uniformitarian craton formation process involving delamination of the lower crust during continental collisions, followed by magmatic underplating, or a model in which crust formation processes have changed since the Precambrian era.
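The unique determination of Poisson's ratio from the P- to S-wave velocity ratio follows from standard elasticity; the velocity values below are generic illustrative numbers, not the paper's 76 estimates:

```python
def poisson_from_vpvs(vp, vs):
    """Poisson's ratio from P- and S-wave velocities:
    sigma = ((vp/vs)^2 - 2) / (2 * ((vp/vs)^2 - 1))."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# vp/vs = sqrt(3) (a "Poisson solid") gives sigma = 0.25
print(poisson_from_vpvs(3.0 ** 0.5, 1.0))

# Illustrative felsic vs mafic velocities: mafic rock has the higher sigma
print(poisson_from_vpvs(6.0, 3.5), poisson_from_vpvs(7.0, 3.9))
```

Higher Σ for the older crust is what points toward a mafic lower crust beneath cratons in the abstract's argument.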
The Poisson Gamma distribution for wind speed data
NASA Astrophysics Data System (ADS)
Ćakmakyapan, Selen; Özel, Gamze
2016-04-01
Wind energy is one of the most significant alternative clean energy sources and among the most rapidly developing renewable energy sources in the world. For the evaluation of wind energy potential, probability density functions (pdfs) are usually used to model wind speed distributions. Selecting the appropriate pdf reduces the wind power estimation error and also allows the wind characteristics to be captured. In the literature, different pdfs have been used to model wind speed data for wind energy applications. In this study, we propose a new probability distribution to model wind speed data. First, we define the new probability distribution, named the Poisson-Gamma (PG) distribution, and analyze wind speed data sets at five pressure levels for the station, obtained from the Turkish State Meteorological Service. Then, we model the data sets with the Exponential, Weibull, Lomax, three-parameter Burr, Gumbel, Gamma and Rayleigh distributions, which are commonly used for wind speed data, as well as the PG distribution. Finally, we compare the fitted distributions to select the best model, and demonstrate that the PG distribution models the data sets best.
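The fit-and-compare workflow can be sketched with standard distributions from `scipy.stats` (the PG distribution itself is not in scipy). The data below are a synthetic stand-in, since the Turkish measurements are not available here, and the AIC bookkeeping is simplified:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
wind = rng.weibull(2.0, 500) * 6.0   # synthetic stand-in for a wind-speed sample (m/s)

candidates = {"weibull": stats.weibull_min, "gamma": stats.gamma, "expon": stats.expon}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(wind, floc=0)            # fix the location parameter at zero
    ll = np.sum(dist.logpdf(wind, *params))
    k = len(params) - 1                        # loc is fixed, so don't count it
    aic[name] = 2 * k - 2 * ll                 # smaller AIC = better fit
    print(name, round(aic[name], 1))
```

With Weibull-generated data the Weibull fit should win on AIC; on real data the same loop is what would rank the PG distribution against the classical choices.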
Complexity as aging non-Poisson renewal processes
NASA Astrophysics Data System (ADS)
Bianco, Simone
The search for a satisfactory model for complexity, meant as an intermediate condition between total order and total disorder, is still a subject of debate in the scientific community. In this dissertation the emergence of non-Poisson renewal processes in several complex systems is investigated. After reviewing the basics of renewal theory, another popular approach to complexity, called modulation, is introduced. I show how these two different approaches, given a suitable choice of the parameters involved, can generate the same macroscopic outcome, namely an inverse power law distribution density of event occurrence. To resolve this ambiguity, a numerical instrument, based on the theoretical analysis of the aging properties of renewal systems, is introduced. The application of this method, called the renewal aging experiment, allows us to distinguish whether a time series has been generated by a renewal or a modulation process. This method of analysis is then applied to several physical systems, from blinking quantum dots, to human brain activity, to seismic fluctuations. Theoretical conclusions about the underlying nature of the considered complex systems are drawn.
A Monte Carlo solution of heat conduction and Poisson equations
Grigoriu, M.
2000-02-01
A Monte Carlo method is developed for solving the heat conduction, Poisson, and Laplace equations. The method is based on properties of Brownian motion and Ito processes, the Ito formula for differentiable functions of these processes, and the similarities between the generator of Ito processes and the differential operators of these equations. The proposed method is similar to current Monte Carlo solutions, such as the fixed random walk, exodus, and floating walk methods, in the sense that it is local, that is, it determines the solution at a single point or a small set of points of the domain of definition of the heat conduction equation directly. However, the proposed and the current Monte Carlo solutions are based on different theoretical considerations. The proposed Monte Carlo method has some attractive features. The method does not require discretizing the domain of definition of the differential equation, can be applied to domains of any dimension and geometry, works for both Dirichlet and Neumann boundary conditions, and provides simple solutions for the steady-state and transient heat equations. Several examples are presented to illustrate the application of the proposed method and demonstrate its accuracy.
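The "local" character of such solvers can be illustrated with the classical fixed random walk mentioned above (not the paper's Ito-based method): the solution of Laplace's equation at one point is the average boundary value hit by random walks started there. The domain, boundary condition g(x, y) = x, step size and walk count below are all illustrative assumptions:

```python
import random

def laplace_walk(x, y, h=0.05, n_walks=2000, seed=3):
    """Estimate the solution of Laplace's equation at (x, y) on the unit square,
    with Dirichlet boundary condition g(x, y) = x, by fixed random walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while 0 < px < 1 and 0 < py < 1:          # walk until the boundary is crossed
            dx, dy = rng.choice([(h, 0), (-h, 0), (0, h), (0, -h)])
            px, py = px + dx, py + dy
        total += min(max(px, 0.0), 1.0)            # evaluate g = x at the exit point
    return total / n_walks

# The exact solution is u(x, y) = x, so the estimate at (0.5, 0.5) should be near 0.5
print(laplace_walk(0.5, 0.5))
```

Note that only the query point is touched; no grid over the whole domain is ever built, which is the feature the abstract emphasizes.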
Poisson process approximation for sequence repeats, and sequencing by hybridization.
Arratia, R; Martin, D; Reinert, G; Waterman, M S
1996-01-01
Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all left-most long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds. PMID:8891959
Quantum Monte Carlo using a Stochastic Poisson Solver
Das, D; Martin, R M; Kalos, M H
2005-05-06
Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations, typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
Urinary arsenic concentration adjustment factors and malnutrition.
Nermell, Barbro; Lindberg, Anna-Lena; Rahman, Mahfuzar; Berglund, Marika; Persson, Lars Ake; El Arifeen, Shams; Vahter, Marie
2008-02-01
This study aims at evaluating the suitability of adjusting urinary concentrations of arsenic, or any other urinary biomarker, for variations in urine dilution by creatinine and specific gravity in a malnourished population. We measured the concentrations of metabolites of inorganic arsenic, creatinine and specific gravity in spot urine samples collected from 1466 individuals, 5-88 years of age, in Matlab, rural Bangladesh, where arsenic-contaminated drinking water and malnutrition are prevalent (about 30% of the adults had body mass index (BMI) below 18.5 kg/m(2)). The urinary concentrations of creatinine were low; on average 0.55 g/L in the adolescents and adults and about 0.35 g/L in the 5-12 years old children. Therefore, adjustment by creatinine gave much higher numerical values for the urinary arsenic concentrations than did the corresponding data expressed as microg/L, adjusted by specific gravity. As evaluated by multiple regression analyses, urinary creatinine, adjusted by specific gravity, was more affected by body size, age, gender and season than was specific gravity. Furthermore, urinary creatinine was found to be significantly associated with urinary arsenic, which further disqualifies the creatinine adjustment. PMID:17900556
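The two dilution adjustments compared in the study have simple arithmetic forms; the sketch below uses a common specific-gravity formula (Levine-Fahy type) and illustrative concentration values, with a reference specific gravity of 1.012 as an assumption rather than the study's value:

```python
def sg_adjust(conc, sg, sg_ref=1.012):
    """Specific-gravity adjustment of a urinary concentration (ug/L):
    scale by (sg_ref - 1) / (sg - 1)."""
    return conc * (sg_ref - 1.0) / (sg - 1.0)

def creatinine_adjust(conc_ug_per_l, creat_g_per_l):
    """Creatinine adjustment: concentration per gram of creatinine (ug/g)."""
    return conc_ug_per_l / creat_g_per_l

# A dilute sample (100 ug/L arsenic, SG 1.006) from this population:
print(sg_adjust(100.0, 1.006))            # specific-gravity adjusted, ug/L

# Low creatinine excretion (0.35 g/L, as in the children) inflates the
# creatinine-adjusted value relative to a typical adult 0.55 g/L:
print(creatinine_adjust(100.0, 0.35), creatinine_adjust(100.0, 0.55))
```

This inflation at low creatinine is the numerical face of the problem the abstract reports: in a malnourished population, dividing by creatinine systematically shifts the biomarker values.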
Universal Poisson Statistics of mRNAs with Complex Decay Pathways.
Thattai, Mukund
2016-01-19
Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. PMID:26743048
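The memoryless birth-death baseline is easy to simulate and exhibits the Poisson steady state (mean equal to variance). The rates and run lengths below are illustrative choices, and this is only the constant-rate special case, not the general lifetime distributions covered by the paper's result:

```python
import numpy as np

rng = np.random.default_rng(5)
birth, death = 10.0, 1.0   # synthesis rate and per-molecule decay rate; mean = birth/death

def gillespie_sample(t_end=20.0):
    """One Gillespie trajectory of the mRNA birth-death process; return the count at t_end."""
    t, n = 0.0, 0
    while True:
        total = birth + death * n            # total event rate
        t += rng.exponential(1.0 / total)    # time to the next event
        if t > t_end:
            return n
        if rng.random() < birth / total:     # birth vs death, proportional to rates
            n += 1
        else:
            n -= 1

counts = np.array([gillespie_sample() for _ in range(2000)])
print(counts.mean(), counts.var())   # Poisson: mean and variance both near birth/death = 10
```

The paper's point is that replacing the exponential lifetime here with an arbitrary lifetime distribution leaves this Poisson steady state unchanged, which is why the steady state alone cannot identify the decay mechanism.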
Correlation Weights in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.; Jones, Jeff A.
2010-01-01
A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…
Weighting Regressions by Propensity Scores
ERIC Educational Resources Information Center
Freedman, David A.; Berk, Richard A.
2008-01-01
Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…
Multiple Regression: A Leisurely Primer.
ERIC Educational Resources Information Center
Daniel, Larry G.; Onwuegbuzie, Anthony J.
Multiple regression is a useful statistical technique when the researcher is considering situations in which variables of interest are theorized to be multiply caused. It may also be useful in those situations in which the researcher is interested in studies of predictability of phenomena of interest. This paper provides an introduction to…
Cactus: An Introduction to Regression
ERIC Educational Resources Information Center
Hyde, Hartley
2008-01-01
When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…
Ridge Regression for Interactive Models.
ERIC Educational Resources Information Center
Tate, Richard L.
1988-01-01
An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are favorable to…
Quantile Regression with Censored Data
ERIC Educational Resources Information Center
Lin, Guixian
2009-01-01
The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…
Logistic regression: a brief primer.
Stoltzfus, Jill C
2011-10-01
Regression techniques are versatile in their application to medical research because they can measure associations, predict outcomes, and control for confounding variable effects. As one such technique, logistic regression is an efficient and powerful way to analyze the effect of a group of independent variables on a binary outcome by quantifying each independent variable's unique contribution. Using components of linear regression reflected in the logit scale, logistic regression iteratively identifies the strongest linear combination of variables with the greatest probability of detecting the observed outcome. Important considerations when conducting logistic regression include selecting independent variables, ensuring that relevant assumptions are met, and choosing an appropriate model building strategy. For independent variable selection, one should be guided by such factors as accepted theory, previous empirical investigations, clinical considerations, and univariate statistical analyses, with acknowledgement of potential confounding variables that should be accounted for. Basic assumptions that must be met for logistic regression include independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers. Additionally, there should be an adequate number of events per independent variable to avoid an overfit model, with commonly recommended minimum "rules of thumb" ranging from 10 to 20 events per covariate. Regarding model building strategies, the three general types are direct/standard, sequential/hierarchical, and stepwise/statistical, with each having a different emphasis and purpose. Before reaching definitive conclusions from the results of any of these methods, one should formally quantify the model's internal validity (i.e., replicability within the same data set) and external validity (i.e., generalizability beyond the current sample). The resulting logistic regression model
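The iterative identification of the strongest linear combination on the logit scale is, concretely, Newton's method applied to the log-likelihood (iteratively reweighted least squares). A minimal numpy sketch on synthetic data; this is the standard maximum-likelihood fit, not any particular model-building strategy from the primer:

```python
import numpy as np

def logistic_irls(X, y, n_iter=25):
    """Fit logistic regression by iteratively reweighted least squares (Newton's method)."""
    X = np.column_stack([np.ones(len(X)), X])          # prepend an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))            # current fitted probabilities
        w = p * (1 - p)                                # IRLS weights
        # Newton step: beta += (X^T W X)^{-1} X^T (y - p)
        beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    return beta

rng = np.random.default_rng(2)
x = rng.normal(size=(500, 1))
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x[:, 0])))).astype(float)
print(logistic_irls(x, y))   # roughly [0.5, 1.5], the true intercept and slope
```

With 500 observations and one covariate, the events-per-variable rule of thumb quoted in the abstract is comfortably satisfied; with many covariates and few events, this same fit overfits.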
2012-01-01
Background Caesarean section (CS) rate is a quality of health care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and clinical and socio-demographic variables of the mother and the fetus, is necessary for inter-hospital comparisons of CS rates. Methods The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V–X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models; the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. Results The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour) and III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour), and to a minor extent in groups II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour). Conclusions The TGCS classification is useful for
Poisson-Lie T-duals of the bi-Yang-Baxter models
NASA Astrophysics Data System (ADS)
Klimčík, Ctirad
2016-09-01
We prove the conjecture of Sfetsos, Siampos and Thompson that suitable analytic continuations of the Poisson-Lie T-duals of the bi-Yang-Baxter sigma models coincide with the recently introduced generalized λ-models. We then generalize this result by showing that the analytic continuation of a generic σ-model of "universal WZW-type" introduced by Tseytlin in 1993 is nothing but the Poisson-Lie T-dual of a generic Poisson-Lie symmetric σ-model introduced by Klimčík and Ševera in 1995.
[Transformations of parameters in the generalized Poisson distribution for test data analysis].
Ogasawara, H
1996-02-01
The generalized Poisson distribution is a distribution which approximates various forms of mixtures of Poisson distributions. The mean and variance of the generalized Poisson distribution, which are simple functions of the two parameters of the distribution, are more useful than the original parameters in test data analysis. Therefore, we adopted two types of transformations of parameters. The first model has new parameters of mean and standard deviation. The second model contains new parameters of mean and variance/mean. An example indicates that the transformed parameters are convenient to understand the properties of data. PMID:8935832
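The two reparameterizations described follow directly from the moments of the (Consul) generalized Poisson distribution with parameters (θ, λ): mean μ = θ/(1−λ) and variance σ² = θ/(1−λ)³, so the dispersion ratio σ²/μ = 1/(1−λ)². A small sketch of the back-and-forth transformation (function names are mine):

```python
import math

def gp_mean_var(theta, lam):
    """Mean and variance of the generalized Poisson distribution (Consul):
    mean = theta/(1-lam), variance = theta/(1-lam)^3."""
    mean = theta / (1.0 - lam)
    var = theta / (1.0 - lam) ** 3
    return mean, var

def gp_params_from_mean_dispersion(mean, dispersion):
    """Invert to natural parameters; dispersion = variance/mean
    (> 1 over-dispersed, < 1 under-dispersed, = 1 Poisson)."""
    lam = 1.0 - 1.0 / math.sqrt(dispersion)
    theta = mean * (1.0 - lam)
    return theta, lam
```

At λ = 0 the distribution reduces to the ordinary Poisson, where mean and variance coincide.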
Blow-up conditions for two dimensional modified Euler-Poisson equations
NASA Astrophysics Data System (ADS)
Lee, Yongki
2016-09-01
The multi-dimensional Euler-Poisson system describes the dynamic behavior of many important physical flows, yet as a hyperbolic system its solution can blow up for some initial configurations. This article strives to advance our understanding of the critical threshold phenomena through the study of a two-dimensional modified Euler-Poisson system with a modified Riesz transform where the singularity at the origin is removed. We identify upper thresholds for finite time blow-up of solutions for the modified Euler-Poisson equations with attractive/repulsive forcing.
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program
Simple, Internally Adjustable Valve
NASA Technical Reports Server (NTRS)
Burley, Richard K.
1990-01-01
Valve containing simple in-line, adjustable, flow-control orifice made from ordinary plumbing fitting and two allen setscrews. Construction of valve requires only simple drilling, tapping, and grinding. Orifice installed in existing fitting, avoiding changes in rest of plumbing.
NASA Technical Reports Server (NTRS)
1986-01-01
Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.
ERIC Educational Resources Information Center
Abramson, Jane A.
Personal interviews with 100 former farm operators living in Saskatoon, Saskatchewan, were conducted in an attempt to understand the nature of the adjustment process caused by migration from rural to urban surroundings. Requirements for inclusion in the study were that respondents had owned or operated a farm for at least 3 years, had left their…
Hunter, Steven L.
2002-01-01
An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.
3DGRAPE - THREE DIMENSIONAL GRIDS ABOUT ANYTHING BY POISSON'S EQUATION
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1994-01-01
The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids. 3DGRAPE is designed to make computational grids in or about almost any shape. These grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. 3DGRAPE uses zones to solve the problem of warping one cube into the physical domain in real-world computational fluid dynamics problems. In a zonal approach, a physical domain is divided into regions, each of which maps into its own computational cube. It is believed that even the most complicated physical region can be divided into zones, and since it is possible to warp a cube into each zone, a grid generator which is oriented to zones and allows communication across zonal boundaries (where appropriate) solves the problem of topological complexity. 3DGRAPE expects to read in already-distributed x,y,z coordinates on the bodies of interest, coordinates which will remain fixed during the entire grid-generation process. The 3DGRAPE code makes no attempt to fit given body shapes and redistribute points thereon. Body-fitting is a formidable problem in itself. The user must either be working with some simple analytical body shape, upon which a simple analytical distribution can be easily effected, or must have available some sophisticated stand-alone body-fitting software. 3DGRAPE does not require the user to supply the block-to-block boundaries nor the shapes of the distribution of points. 3DGRAPE will typically supply those block-to-block boundaries simply as surfaces in the elliptic grid. Thus at block-to-block boundaries the following conditions are obtained: (1) grid lines will
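The elliptic smoothing idea — interior grid coordinates obtained by solving Poisson-type equations with boundary nodes held fixed — can be illustrated in 2-D with the zero-source (Laplace) case; 3DGRAPE's actual system adds the inhomogeneous control terms for orthogonality and cell height. The function names and the channel geometry below are illustrative only:

```python
import numpy as np

def laplace_grid(X, Y, n_iter=2000):
    """Jacobi iteration: each interior node of x(xi,eta), y(xi,eta)
    relaxes toward the average of its four neighbours; boundary nodes
    stay fixed. This is the simplest elliptic grid smoother."""
    for _ in range(n_iter):
        X[1:-1, 1:-1] = 0.25 * (X[2:, 1:-1] + X[:-2, 1:-1]
                                + X[1:-1, 2:] + X[1:-1, :-2])
        Y[1:-1, 1:-1] = 0.25 * (Y[2:, 1:-1] + Y[:-2, 1:-1]
                                + Y[1:-1, 2:] + Y[1:-1, :-2])
    return X, Y

# algebraic starting grid over a channel with a curved bottom wall
n = 21
xi = np.linspace(0.0, 1.0, n)
eta = np.linspace(0.0, 1.0, n)
X, E = np.meshgrid(xi, eta, indexing="ij")
y_bottom = 0.3 * np.sin(np.pi * xi)              # curved lower boundary
Y = y_bottom[:, None] + E * (1.0 - y_bottom[:, None])
X, Y = laplace_grid(X, Y)
```

By the discrete maximum principle the smoothed interior stays inside the boundary envelope, which is why grid lines generated this way do not cross.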
A comparison of several regression models for analysing cost of CABG surgery.
Austin, Peter C; Ghali, William A; Tu, Jack V
2003-09-15
Investigators in clinical research are often interested in determining the association between patient characteristics and cost of medical or surgical treatment. However, there is no uniformly agreed upon regression model with which to analyse cost data. The objective of the current study was to compare the performance of linear regression, linear regression with log-transformed cost, generalized linear models with Poisson, negative binomial and gamma distributions, median regression, and proportional hazards models for analysing costs in a cohort of patients undergoing CABG surgery. The study was performed on data comprising 1959 patients who underwent CABG surgery in Calgary, Alberta, between June 1994 and March 1998. Ten of 21 patient characteristics were significantly associated with cost of surgery in all seven models. Eight variables were not significantly associated with cost of surgery in all seven models. Using mean squared prediction error as a loss function, proportional hazards regression and the three generalized linear models were best able to predict cost in independent validation data. Using mean absolute error, linear regression with log-transformed cost, proportional hazards regression, and median regression to predict median cost, were best able to predict cost in independent validation data. Since the models demonstrated good consistency in identifying factors associated with increased cost of CABG surgery, any of the seven models can be used for identifying factors associated with increased cost of surgery. However, the magnitude of, and the interpretation of, the coefficients vary across models. Researchers are encouraged to consider a variety of candidate models, including those better known in the econometrics literature, rather than begin data analysis with one regression model selected a priori. The final choice of regression model should be made after a careful assessment of how best to assess predictive ability and should be tailored to
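The comparison protocol — fit several candidate cost models on training data, then score held-out predictions with both squared and absolute error — can be sketched on synthetic right-skewed data (the CABG data are not public). Only two of the seven model families are shown, and the log-model back-transformation is the naive exponential without a smearing correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
# right-skewed synthetic "cost": log-normal around a linear predictor
cost = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.6, size=n))

A = np.column_stack([np.ones(n), x])
tr, te = slice(0, 400), slice(400, None)   # train / validation split

# model 1: OLS on the raw cost scale
b_raw, *_ = np.linalg.lstsq(A[tr], cost[tr], rcond=None)
pred_raw = A[te] @ b_raw

# model 2: OLS on log(cost), naively back-transformed
b_log, *_ = np.linalg.lstsq(A[tr], np.log(cost[tr]), rcond=None)
pred_log = np.exp(A[te] @ b_log)

mspe = lambda p: float(np.mean((cost[te] - p) ** 2))  # squared-error loss
mae = lambda p: float(np.mean(np.abs(cost[te] - p)))  # absolute-error loss
```

As the abstract stresses, which model "wins" depends on the loss function chosen, so both criteria should be reported.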
3D Regression Heat Map Analysis of Population Study Data.
Klemm, Paul; Lawonn, Kai; Glaßer, Sylvia; Niemann, Uli; Hegenscheid, Katrin; Völzke, Henry; Preim, Bernhard
2016-01-01
Epidemiological studies comprise heterogeneous data about a subject group to define disease-specific risk factors. These data contain information (features) about a subject's lifestyle, medical status as well as medical image data. Statistical regression analysis is used to evaluate these features and to identify feature combinations indicating a disease (the target feature). We propose an analysis approach of epidemiological data sets by incorporating all features in an exhaustive regression-based analysis. This approach combines all independent features w.r.t. a target feature. It provides a visualization that reveals insights into the data by highlighting relationships. The 3D Regression Heat Map, a novel 3D visual encoding, acts as an overview of the whole data set. It shows all combinations of two to three independent features with a specific target disease. Slicing through the 3D Regression Heat Map allows for the detailed analysis of the underlying relationships. Expert knowledge about disease-specific hypotheses can be included into the analysis by adjusting the regression model formulas. Furthermore, the influences of features can be assessed using a difference view comparing different calculation results. We applied our 3D Regression Heat Map method to a hepatic steatosis data set to reproduce results from a data mining-driven analysis. A qualitative analysis was conducted on a breast density data set. We were able to derive new hypotheses about relations between breast density and breast lesions with breast cancer. With the 3D Regression Heat Map, we present a visual overview of epidemiological data that allows for the first time an interactive regression-based analysis of large feature sets with respect to a disease. PMID:26529689
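The exhaustive combination scan underlying the heat map can be sketched in 2-D: fit a regression for every pair of independent features against the target and record a goodness-of-fit score per cell (here OLS and R²; the paper supports arbitrary regression formulas). All data and names below are illustrative:

```python
import itertools
import numpy as np

def pairwise_r2(X, y):
    """Fit y ~ (x_i, x_j) by OLS for every feature pair and return a
    symmetric matrix of R^2 values -- a 2-D slice of the exhaustive
    regression scan behind the heat map."""
    n, p = X.shape
    R2 = np.full((p, p), np.nan)
    ss_tot = np.sum((y - y.mean()) ** 2)
    for i, j in itertools.combinations(range(p), 2):
        A = np.column_stack([np.ones(n), X[:, i], X[:, j]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        R2[i, j] = R2[j, i] = 1.0 - np.sum(resid ** 2) / ss_tot
    return R2

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = 2 * X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=200)
R2 = pairwise_r2(X, y)
```

The brightest cell should be the pair containing both truly informative features, which is exactly the visual cue the heat map exploits.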
Quantile Regression With Measurement Error
Wei, Ying; Carroll, Raymond J.
2010-01-01
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802
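Quantile regression minimizes the check (pinball) loss; in the intercept-only case the minimizer is simply the empirical τ-quantile. The sketch below verifies that fact numerically (it illustrates the loss the paper builds on, not the measurement-error correction itself):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Average check (pinball) loss of the constant prediction q
    at quantile level tau."""
    u = y - q
    return float(np.mean(np.maximum(tau * u, (tau - 1.0) * u)))

rng = np.random.default_rng(3)
y = rng.normal(size=1001)
tau = 0.75
grid = np.linspace(y.min(), y.max(), 2001)
losses = [pinball_loss(y, q, tau) for q in grid]
q_star = float(grid[int(np.argmin(losses))])   # loss-minimizing constant
```

Replacing the constant q with a linear predictor x'β gives linear quantile regression, the object the paper debiases under covariate measurement error.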
Precision and Recall for Regression
NASA Astrophysics Data System (ADS)
Torgo, Luis; Ribeiro, Rita
Cost sensitive prediction is a key task in many real world applications. Most existing research in this area deals with classification problems. This paper addresses a related regression problem: the prediction of rare extreme values of a continuous variable. These values are often regarded as outliers and removed from posterior analysis. However, for many applications (e.g. in finance, meteorology, biology, etc.) these are the key values that we want to accurately predict. Any learning method obtains models by optimizing some preference criteria. In this paper we propose new evaluation criteria that are more adequate for these applications. We describe a generalization for regression of the concepts of precision and recall often used in classification. Using these new evaluation metrics we are able to focus the evaluation of predictive models on the cases that really matter for these applications. Our experiments indicate the advantages of the use of these new measures when comparing predictive models in the context of our target applications.
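A crude instance of the idea — computing precision and recall over a relevance function on the target values rather than over class labels — using a hard 0/1 relevance threshold. The paper uses a smooth, utility-based relevance function; the threshold version and all names here are mine:

```python
import numpy as np

def regression_precision_recall(y_true, y_pred, thr):
    """0/1-relevance precision/recall for regression: a case is
    'relevant' (a rare extreme) when its value is >= thr.
    precision: of the cases predicted extreme, how many really are;
    recall: of the true extremes, how many are predicted extreme."""
    rel_true = y_true >= thr
    rel_pred = y_pred >= thr
    tp = int(np.sum(rel_true & rel_pred))
    precision = tp / max(int(np.sum(rel_pred)), 1)
    recall = tp / max(int(np.sum(rel_true)), 1)
    return precision, recall

y_true = np.array([0.1, 0.2, 3.0, 4.0, 0.3, 5.0])
y_pred = np.array([0.0, 2.5, 3.5, 0.4, 0.2, 4.8])
p, r = regression_precision_recall(y_true, y_pred, thr=2.0)
```

Here two of the three predicted extremes are true extremes and two of the three true extremes are caught, so both scores are 2/3 — information a global MSE would hide.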
Sobolev inequalities, the Poisson semigroup, and analysis on the sphere Sn.
Beckner, W
1992-01-01
Hypercontractive estimates are obtained for the Poisson semigroup on Lp(Sn). Such estimates are determined by a sharp logarithmic Sobolev inequality that relates the entropy of a function on Sn to its smoothness. PMID:11607294
Hung, Tran Loc; Giang, Le Truong
2016-01-01
Using the Stein-Chen method some upper bounds in Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomial distributed random variables are established in this note. PMID:26844026
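The flavor of such total-variation bounds can be checked numerically in the simpler Bernoulli case: the classical Stein-Chen bound d_TV <= (1 - e^(-lambda))/lambda * sum(p_i^2) must dominate the exact total-variation distance computed by convolution. This is a sanity check of the method's style, not the note's negative-binomial result:

```python
import math
import numpy as np

def bernoulli_sum_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_i), by convolution."""
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def poisson_pmf(lam, kmax):
    return np.array([math.exp(-lam) * lam**k / math.factorial(k)
                     for k in range(kmax + 1)])

ps = [0.05, 0.10, 0.02, 0.08, 0.04]
lam = sum(ps)
p_exact = bernoulli_sum_pmf(ps)          # support 0..len(ps)
p_pois = poisson_pmf(lam, len(ps))
# TV distance; the Poisson tail beyond len(ps) enters with full weight
tv = 0.5 * (np.abs(p_exact - p_pois).sum() + (1.0 - p_pois.sum()))
bound = (1.0 - math.exp(-lam)) / lam * sum(p * p for p in ps)
```

The row-wise triangular-array results in the note generalize exactly this kind of inequality to negative-binomial summands.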
A Hands-on Activity for Teaching the Poisson Distribution Using the Stock Market
ERIC Educational Resources Information Center
Dunlap, Mickey; Studstill, Sharyn
2014-01-01
The number of increases a particular stock makes over a fixed period follows a Poisson distribution. This article discusses using this easily-found data as an opportunity to let students become involved in the data collection and analysis process.
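The classroom analysis can be reproduced in a few lines: estimate the Poisson rate from the collected counts (the MLE is the sample mean) and compare observed frequencies with the expected ones. The weekly counts below are invented placeholders for the student-collected stock data:

```python
import math
from collections import Counter

# hypothetical counts of price increases per week for one stock
counts = [3, 1, 4, 2, 2, 3, 5, 1, 2, 3, 4, 2, 3, 2, 1, 3]
lam = sum(counts) / len(counts)          # Poisson MLE: the sample mean

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

observed = Counter(counts)               # empirical frequency of each count
expected = {k: len(counts) * poisson_pmf(k, lam)
            for k in range(max(counts) + 1)}
```

Comparing `observed` with `expected` (e.g. via a chi-squared statistic) is the natural follow-up exercise.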
A Poisson-based adaptive affinity propagation clustering for SAGE data.
Tang, DongMing; Zhu, QingXin; Yang, Fan
2010-02-01
Serial analysis of gene expression (SAGE) is a powerful tool to obtain gene expression profiles. Clustering analysis is a valuable technique for analyzing SAGE data. In this paper, we propose an adaptive clustering method for SAGE data analysis, namely, PoissonAPS. The method incorporates a novel clustering algorithm, Affinity Propagation (AP). While the AP algorithm has demonstrated good performance on many different data sets, it also faces several limitations. PoissonAPS overcomes the limitations of AP using the clustering validation measure as a cost function of merging and splitting, and as a result, it can automatically cluster SAGE data without user-specified parameters. We evaluated PoissonAPS and compared its performance with other methods on several real life SAGE datasets. The experimental results show that PoissonAPS can produce meaningful and interpretable clusters for SAGE data.
Accurate Young's modulus measurement based on Rayleigh wave velocity and empirical Poisson's ratio
NASA Astrophysics Data System (ADS)
Li, Mingxia; Feng, Zhihua
2016-07-01
This paper presents a method for Young's modulus measurement based on Rayleigh wave speed. The error in Poisson's ratio has a weak influence on Young's modulus measured via Rayleigh wave speed, and Poisson's ratio varies little within a given material; thus, Young's modulus can be estimated accurately from the surface wave speed and a rough Poisson's ratio. We numerically analysed three methods using Rayleigh, longitudinal, and transversal wave speeds, respectively; the error in Poisson's ratio has the least influence on the result in the method based on Rayleigh wave speed. An experiment confirmed the feasibility of the method. The speed-measuring device can be small, and no sample pretreatment is needed; hence, a portable instrument based on this method is possible. The method offers a good compromise between usability and precision.
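The paper's central observation — Young's modulus computed from Rayleigh wave speed is insensitive to the assumed Poisson's ratio — can be illustrated with the Bergmann-type approximation c_R ≈ c_S(0.862 + 1.14ν)/(1 + ν) together with E = 2ρc_S²(1 + ν). The steel-like numbers are illustrative, not from the paper:

```python
def young_from_rayleigh(c_r, rho, nu):
    """Estimate Young's modulus (Pa) from Rayleigh wave speed c_r (m/s),
    density rho (kg/m^3) and an assumed Poisson's ratio nu, using the
    Bergmann/Viktorov approximation for the Rayleigh/shear speed ratio."""
    c_s = c_r * (1.0 + nu) / (0.862 + 1.14 * nu)   # shear wave speed
    G = rho * c_s ** 2                             # shear modulus
    return 2.0 * G * (1.0 + nu)                    # E = 2G(1 + nu)

# sensitivity to an imprecise Poisson's ratio (the paper's point):
E_lo = young_from_rayleigh(2950.0, 7850.0, 0.25)
E_hi = young_from_rayleigh(2950.0, 7850.0, 0.35)
```

Sweeping ν across a 0.1-wide interval moves the estimated E by only a few percent, which is why a rough Poisson's ratio suffices.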
Particle trapping: A key requisite of structure formation and stability of Vlasov–Poisson plasmas
Schamel, Hans
2015-04-15
Particle trapping is shown to control the existence of undamped coherent structures in Vlasov–Poisson plasmas and thereby affects the onset of plasma instability beyond the realm of linear Landau theory.
The Kramers-Kronig relations for usual and anomalous Poisson-Nernst-Planck models.
Evangelista, Luiz Roberto; Lenzi, Ervin Kaminski; Barbero, Giovanni
2013-11-20
The consistency of the frequency response predicted by a class of electrochemical impedance expressions is analytically checked by invoking the Kramers-Kronig (KK) relations. These expressions are obtained in the context of Poisson-Nernst-Planck usual or anomalous diffusional models that satisfy Poisson's equation in a finite length situation. The theoretical results, besides being successful in interpreting experimental data, are also shown to obey the KK relations when these relations are modified accordingly.
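A numerical sanity check of a Kramers-Kronig relation on the simplest relaxation response — the Debye form χ(ω) = 1/(1 − iωτ), not the PNP models of the paper: the real part at ω = 0 (exactly 1) is reconstructed from the imaginary part alone via the dispersion integral.

```python
import numpy as np

tau = 1.0
w = np.linspace(1e-6, 1000.0, 200_001)
im_chi = w * tau / (1.0 + (w * tau) ** 2)   # Im chi for Debye relaxation

# Kramers-Kronig at w0 = 0:
#   Re chi(0) = (2/pi) * integral_0^inf  Im chi(w') / w'  dw'
f = im_chi / w                              # = tau / (1 + (w*tau)^2)
re_chi0 = (2.0 / np.pi) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))
```

The residual error comes from the finite cutoff (the exact tail beyond the grid is arctan-small); a KK-consistency check of measured impedance spectra works the same way, point by point in ω.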
Exponential stability of second-order stochastic evolution equations with Poisson jumps
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Ren, Y.
2012-12-01
This paper is concerned with the exponential stability problem of second-order nonlinear stochastic evolution equations with Poisson jumps. By using the stochastic analysis theory, a set of novel sufficient conditions are derived for the exponential stability of mild solutions to the second-order nonlinear stochastic differential equations with infinite delay driven by Poisson jumps. An example is provided to demonstrate the effectiveness of the proposed result.
Tests of a homogeneous Poisson process against clustering and other alternatives
Atwood, C.L.
1994-05-01
This report presents three closely related tests of the hypothesis that data points come from a homogeneous Poisson process. If there is too much observed variation among the log-transformed between-point distances, the hypothesis is rejected. The tests are more powerful than the standard chi-squared test against the alternative hypothesis of event clustering, but not against the alternative hypothesis of a Poisson process with smoothly varying intensity.
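The spirit of the test — too much variation among log-transformed between-point distances signals clustering — rests on the fact that for exponential spacings Var(log X) = π²/6 regardless of the rate. A seeded simulation contrasting a homogeneous Poisson process with a clustered alternative (my own toy statistic, not the report's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(4)
VAR_LOG_EXP = np.pi ** 2 / 6   # Var(log X) for any exponential X

# homogeneous Poisson process: i.i.d. exponential gaps
gaps_hpp = rng.exponential(scale=2.0, size=200_000)

# clustered alternative: short within-cluster and long between-cluster gaps
gaps_clustered = np.where(rng.random(200_000) < 0.8,
                          rng.exponential(scale=0.1, size=200_000),
                          rng.exponential(scale=10.0, size=200_000))

v_hpp = float(np.var(np.log(gaps_hpp)))        # ~ pi^2/6 under H0
v_clu = float(np.var(np.log(gaps_clustered)))  # inflated under clustering
```

Because the null variance is rate-free, the statistic needs no nuisance-parameter estimation, which is part of its appeal over the standard chi-squared test.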
On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris
NASA Technical Reports Server (NTRS)
Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt
2007-01-01
A convenient and powerful method is used to determine if radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations observed from that expected with Poisson statistics, either independent or dependent of altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
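The core fact used by the method — a Poisson process has exponentially distributed waiting times between events, hence mean 1/rate and coefficient of variation exactly 1 — in a seeded simulation (the rate and observation window are illustrative, not Haystack values):

```python
import numpy as np

rng = np.random.default_rng(5)
rate, T = 3.0, 50_000.0                      # events per hour, hours observed

# simulate a Poisson process: Poisson event count, uniform event times
n_events = rng.poisson(rate * T)
times = np.sort(rng.uniform(0.0, T, size=n_events))
dt = np.diff(times)                          # waiting times between events

mean_dt = float(dt.mean())                   # should be ~ 1/rate
cv = float(dt.std() / mean_dt)               # exponential => CV = 1
```

Departures of the waiting-time distribution from this exponential shape (e.g. excess short gaps after a breakup) are exactly what the analysis of the Fengyun event detects.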
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
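For contrast with the paper's parallel, priority-driven approach, the classical serial baseline is dart throwing: accept each random candidate only if it keeps the minimum separation r from all previously accepted samples (unit square, Euclidean metric; all parameters are arbitrary):

```python
import numpy as np

def poisson_disk_dart(r, n_candidates=3000, seed=6):
    """Naive serial dart-throwing Poisson disk sampling in the unit
    square: a candidate is accepted only if it lies at least r from
    every accepted sample. The paper replaces this inherently serial
    loop with priority-based conflict resolution across threads."""
    rng = np.random.default_rng(seed)
    samples = []
    for p in rng.random((n_candidates, 2)):
        if all(np.hypot(*(p - q)) >= r for q in samples):
            samples.append(p)
    return np.array(samples)

pts = poisson_disk_dart(r=0.05)
```

The output satisfies the Poisson disk property by construction; what the serial loop cannot provide is the parallelism and the surface-intrinsic metric that are the paper's contributions.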
Cutburth, Ronald W.; Silva, Leonard L.
1988-01-01
An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.
Ducker, W.L.
1982-09-14
A system of rotatably and pivotally mounted radially extended bent supports for radially extending windmill rotor vanes in combination with axially movable radially extended control struts connected to the vanes with semi-automatic and automatic torque and other sensing and servo units provide automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed or electrical quantities dependent thereon.
Ducker, W.L.
1980-01-15
A system of rotatably and pivotally mounted radially extended bent supports for radially extending windmill rotor vanes in combination with axially movable radially extended control struts connected to the vanes with semi-automatic and automatic torque and other sensing and servo units provide automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed or electrical quantities dependent thereon.
Ducker, W.L.
1982-09-07
A system of rotatably and pivotally mounted radially extended bent supports for radially extending windmill rotor vanes in combination with axially movable radially extended control struts connected to the vanes with semi-automatic and automatic torque and other sensing and servo units provide automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed or electrical quantities dependent thereon.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schrenkenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
Psychosocial adjustment to ALS: a longitudinal study
Matuz, Tamara; Birbaumer, Niels; Hautzinger, Martin; Kübler, Andrea
2015-01-01
For the current study, the Lazarian stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) have shaped the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping-model-related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies, during a period of 2 years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 were accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of one's own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem-management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting successful adjustment to the disease across the four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not of illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS. PMID:26441696
Quality Reporting of Multivariable Regression Models in Observational Studies
Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M.
2016-01-01
Abstract Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) applied in analytical observational studies published between 2003 and 2014 in journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with an observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting of: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimates, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0–30.3) of the articles, and 18.5% (95% CI: 14.8–22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature. PMID:27196467
Adolescent suicide attempts and adult adjustment
Brière, Frédéric N.; Rohde, Paul; Seeley, John R.; Klein, Daniel; Lewinsohn, Peter M.
2014-01-01
Background Adolescent suicide attempts are disproportionately prevalent and frequently of low severity, raising questions regarding their long-term prognostic implications. In this study, we examined whether adolescent attempts were associated with impairments related to suicidality, psychopathology, and psychosocial functioning in adulthood (objective 1) and whether these impairments were better accounted for by concurrent adolescent confounders (objective 2). Method 816 adolescents were assessed using interviews and questionnaires at four time points from adolescence to adulthood. We examined whether lifetime suicide attempts in adolescence (by T2, mean age 17) predicted adult outcomes (by T4, mean age 30) using linear and logistic regressions in unadjusted models (objective 1) and adjusting for sociodemographic background, adolescent psychopathology, and family risk factors (objective 2). Results In unadjusted analyses, adolescent suicide attempts predicted poorer adjustment on all outcomes, except those related to social role status. After adjustment, adolescent attempts remained predictive of axis I and II psychopathology (anxiety disorder, antisocial and borderline personality disorder symptoms), global and social adjustment, risky sex, and psychiatric treatment utilization. However, adolescent attempts no longer predicted most adult outcomes, notably suicide attempts and major depressive disorder. Secondary analyses indicated that associations did not differ by sex or attempt characteristics (intent, lethality, recurrence). Conclusions Adolescent suicide attempters are at high risk of protracted and wide-ranging impairments, regardless of the characteristics of their attempt. Although attempts specifically predict (and possibly influence) several outcomes, results suggest that most impairments reflect the confounding contributions of other individual and family problems or vulnerabilities in adolescent attempters. PMID:25421360
Hydrodynamic limit of Wigner-Poisson kinetic theory: Revisited
Akbari-Moghanjoughi, M.
2015-02-15
In this paper, we revisit the hydrodynamic limit of the Langmuir wave dispersion relation based on the Wigner-Poisson model in connection with that obtained directly from the original Lindhard dielectric function based on the random-phase approximation. It is observed that the (fourth-order) expansion of the exact Lindhard dielectric constant correctly reduces to the hydrodynamic dispersion relation with an additional fourth-order term besides the one caused by the quantum diffraction effect. It is also revealed that the generalized Lindhard dielectric theory accounts for the recently discovered Shukla-Eliasson attractive potential (SEAP). However, the expansion of the exact Lindhard static dielectric function leads to a k^4 term of different magnitude than that obtained from the linearized quantum hydrodynamics model. It is shown that a correction factor of 1/9 should be included in the term arising from the quantum Bohm potential of the momentum balance equation in the fluid model in order for a correct plasma dielectric response treatment. Finally, it is observed that the long-range oscillatory screening potential (Friedel oscillations) of type cos(2k_F r)/r^3, which is a consequence of the divergence of the dielectric function at the point k = 2k_F in a quantum plasma, arises due to the finiteness of the Fermi wavenumber and is smeared out in the limit of very high electron number densities, typical of white dwarfs and neutron stars. In the very low electron number-density regime, typical of semiconductors and metals, where the Friedel oscillation wavelength becomes much larger than the interparticle distances, the SEAP appears with a much deeper potential valley. It is remarked that the fourth-order approximate Lindhard dielectric constant approaches that of the linearized quantum hydrodynamics in the limit of very high electron number density. By evaluating the imaginary part of the Lindhard dielectric function, it is shown that the
Multilevel Methods for the Poisson-Boltzmann Equation
NASA Astrophysics Data System (ADS)
Holst, Michael Jay
We consider the numerical solution of the Poisson-Boltzmann equation (PBE), a three-dimensional second-order nonlinear elliptic partial differential equation arising in biophysics. This problem has several interesting features impacting numerical algorithms, including discontinuous coefficients representing material interfaces, rapid nonlinearities, and three spatial dimensions. Similar equations occur in various applications, including nuclear physics, semiconductor physics, population genetics, astrophysics, and combustion. In this thesis, we study the PBE and its discretizations, and develop multilevel-based methods for approximating the solutions of these types of equations. We first outline the physical model and derive the PBE, which describes the electrostatic potential of a large complex biomolecule lying in a solvent. We next study the theoretical properties of the linearized and nonlinear PBE using standard function space methods; since this equation has not been previously studied theoretically, we provide existence and uniqueness proofs in both the linearized and nonlinear cases. We also analyze box-method discretizations of the PBE, establishing several properties of the discrete equations which are produced. In particular, we show that the discrete nonlinear problem is well-posed. We study and develop linear multilevel methods for interface problems, based on algebraic enforcement of Galerkin or variational conditions, and on coefficient averaging procedures. Using a stencil calculus, we show that in certain simplified cases the two approaches are equivalent, with different averaging procedures corresponding to different prolongation operators. We also develop methods for nonlinear problems based on a nonlinear multilevel method, and on linear multilevel methods combined with a globally convergent damped-inexact-Newton method. We derive a necessary and sufficient descent condition for the inexact-Newton direction, enabling the development of extremely
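The damped Newton outer iteration described above can be sketched on a 1D model problem. This is a minimal illustration, assuming the toy equation -u'' + sinh(u) = 0 with Dirichlet data; the grid size, boundary values, and tolerances are invented for illustration, and the thesis itself treats the 3D biomolecule case with multilevel linear solvers in place of the dense solve used here.

```python
import numpy as np

# Damped Newton for a 1D model nonlinear Poisson-Boltzmann problem:
# -u'' + sinh(u) = 0 on (0, 1), u(0) = 2, u(1) = 0 (illustrative only).
n = 199                      # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
u = 2.0 * (1 - x)            # initial guess matching the boundary values

def residual(u):
    # F(u) = -u'' + sinh(u), boundary values folded into the stencil.
    upad = np.concatenate([[2.0], u, [0.0]])
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
    return -lap + np.sinh(u)

main = 2.0 / h**2 * np.ones(n)
off = -1.0 / h**2 * np.ones(n - 1)
for it in range(50):
    F = residual(u)
    if np.linalg.norm(F) < 1e-8:
        break
    # Tridiagonal Jacobian: J = -D2 + diag(cosh(u)); cosh > 0 keeps it SPD.
    J = np.diag(main + np.cosh(u)) + np.diag(off, 1) + np.diag(off, -1)
    delta = np.linalg.solve(J, -F)
    # Damping: halve the step until the residual norm decreases.
    t = 1.0
    while np.linalg.norm(residual(u + t * delta)) >= np.linalg.norm(F) and t > 1e-6:
        t /= 2
    u = u + t * delta

print("iterations:", it, "residual norm:", np.linalg.norm(residual(u)))
```

The positivity of cosh(u) is what makes the Jacobian well-conditioned here, mirroring the well-posedness result for the discrete nonlinear problem mentioned in the abstract.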
Regression analysis of cytopathological data
Whittemore, A.S.; McLarty, J.W.; Fortson, N.; Anderson, K.
1982-12-01
Epithelial cells from the human body are frequently labelled according to one of several ordered levels of abnormality, ranging from normal to malignant. The label of the most abnormal cell in a specimen determines the score for the specimen. This paper presents a model for the regression of specimen scores against continuous and discrete variables, as in host exposure to carcinogens. Application to data and tests for adequacy of model fit are illustrated using sputum specimens obtained from a cohort of former asbestos workers.
The Impact of Financial Sophistication on Adjustable Rate Mortgage Ownership
ERIC Educational Resources Information Center
Smith, Hyrum; Finke, Michael S.; Huston, Sandra J.
2011-01-01
The influence of a financial sophistication scale on adjustable-rate mortgage (ARM) borrowing is explored. Descriptive statistics and regression analysis using recent data from the Survey of Consumer Finances reveal that ARM borrowing is driven by both the least and most financially sophisticated households but for different reasons. Less…
Effects of Relational Authenticity on Adjustment to College
ERIC Educational Resources Information Center
Lenz, A. Stephen; Holman, Rachel L.; Lancaster, Chloe; Gotay, Stephanie G.
2016-01-01
The authors examined the association between relational health and student adjustment to college. Data were collected from 138 undergraduate students completing their 1st semester at a large university in the mid-southern United States. Regression analysis indicated that higher levels of relational authenticity were a predictor of success during…
Mediating Effects of Relationships with Mentors on College Adjustment
ERIC Educational Resources Information Center
Lenz, A. Stephen
2014-01-01
This study examined the relationship between student adjustment to college and relational health with peers, mentors, and the community. Data were collected from 80 undergraduate students completing their first semester of course work at a large university in the mid-South. A series of simultaneous multiple regression analyses indicated that…
Multiatlas Segmentation as Nonparametric Regression
Awate, Suyash P.; Whitaker, Ross T.
2015-01-01
This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator’s convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528
Variable Selection in ROC Regression
2013-01-01
Regression models are introduced into receiver operating characteristic (ROC) analysis to accommodate effects of covariates, such as genes. If many covariates are available, the variable selection issue arises. The traditional induced methodology models the outcomes of the diseased and nondiseased groups separately; thus, applying variable selection separately to the two models creates barriers to interpretation, because the selected models may differ. Furthermore, in ROC regression the focus should be the accuracy of the area under the curve (AUC), rather than consistency of model selection or good prediction performance. In this paper, we obtain a single objective function with the group SCAD to select grouped variables, which adapts to popular criteria of model selection, and propose a two-stage framework to apply the focused information criterion (FIC). Some asymptotic properties of the proposed methods are derived. Simulation studies show that the grouped variable selection is superior to separate model selections. Furthermore, the FIC improves the accuracy of the estimated AUC compared with other criteria. PMID:24312135
On q-deformed symmetries as Poisson-Lie symmetries and application to Yang-Baxter type models
NASA Astrophysics Data System (ADS)
Delduc, F.; Lacroix, S.; Magro, M.; Vicedo, B.
2016-10-01
Yang-Baxter type models are integrable deformations of integrable field theories, such as the principal chiral model on a Lie group G or σ-models on (semi-)symmetric spaces G/F. The deformation has the effect of breaking the global G-symmetry of the original model, replacing the associated set of conserved charges by ones whose Poisson brackets are those of the q-deformed Poisson-Hopf algebra U_q(g). Working at the Hamiltonian level, we show how this q-deformed Poisson algebra originates from a Poisson-Lie G-symmetry. The theory of Poisson-Lie groups and their actions on Poisson manifolds, in particular the formalism of the non-abelian moment map, is reviewed. For a coboundary Poisson-Lie group G, this non-abelian moment map must obey the Semenov-Tian-Shansky bracket on the dual group G*, up to terms involving central quantities. When the latter vanish, we develop a general procedure linking this Poisson bracket to the defining relations of the Poisson-Hopf algebra U_q(g), including the q-Poisson-Serre relations. We consider reality conditions leading to q being either real or a phase. We determine the non-abelian moment map for Yang-Baxter type models. This enables us to compute the corresponding action of G on the fields parametrising the phase space of these models.
On the validity of the Poisson assumption in sampling nanometer-sized aerosols
Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn
2014-01-01
A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes (10 nm, 25 nm, 50 nm and 100 nm) by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: the pair-correlation function (PCF), the time-averaged PCF, the coefficient of variation, the probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from those of a simulated Poisson process. The greater influence of Brownian motion on nanometer-sized aerosols may explain the Poissonian behavior observed for the smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied, as the results here do not definitively prove applicability in all sampling situations.
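The comparison scheme described above, computing metrics for the measured series and checking them against an ensemble of simulated Poisson series with the same mean, can be sketched as follows. The "observed" series here is itself a synthetic stand-in for the DMA-CPC data, and only two of the paper's five metrics are shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_metrics(counts):
    # Coefficient of variation and P(count >= 1.25 * mean), two of the
    # paper's five comparison metrics.
    m = counts.mean()
    return counts.std(ddof=1) / m, float(np.mean(counts >= 1.25 * m))

# Hypothetical stand-in for a measured count time series.
observed = rng.poisson(lam=40.0, size=500)

# Reference ensemble: 1,000 simulated Poisson series with the same mean.
sims = rng.poisson(lam=observed.mean(), size=(1000, observed.size))
sim_cv = np.array([count_metrics(s)[0] for s in sims])

cv_obs, p_obs = count_metrics(observed)
lo, hi = np.quantile(sim_cv, [0.10, 0.90])  # central 80% Poisson band
print("CV:", round(cv_obs, 3), "within 80% band:", bool(lo <= cv_obs <= hi))
```

A series with positive correlations, as reported for micrometer-sized aerosols, would push the coefficient of variation outside the simulated band.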
Zero-inflated models for regression analysis of count data: a study of growth and development.
Cheung, Yin Bin
2002-05-30
Poisson regression is widely used in medical studies, and can be extended to negative binomial regression to allow for heterogeneity. When there is an excess number of zero counts, a useful approach is to use a mixture model with a proportion P of subjects not at risk and a proportion 1 − P of at-risk subjects whose outcome values follow a Poisson or negative binomial distribution. Covariate effects can be incorporated into both components of the model. In child assessment, fine motor development is often measured by test items that involve a process of imitation and a process of fine motor exercise. One such developmental milestone is 'building a tower of cubes'. This study analyses the impact of foetal growth and postnatal somatic growth on this milestone, operationalized as the number of cubes and measured around the age of 22 months. It is shown that the two aspects of early growth may have different implications for imitation and fine motor dexterity. The usual approach of recording and analysing the milestone as a binary outcome, such as whether the child can build a tower of three cubes, may leave out important information.
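A minimal sketch of the zero-inflated mixture likelihood described above, without covariates: a proportion p of structural zeros mixed with a Poisson component, fitted by maximum likelihood. The parameter values and sample size are invented for illustration and are not the child-development data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Simulate zero-inflated Poisson counts: with probability p_true the subject
# is "not at risk" (structural zero), otherwise counts are Poisson(lam_true).
p_true, lam_true, n = 0.3, 4.0, 2000
at_risk = rng.random(n) > p_true
y = np.where(at_risk, rng.poisson(lam_true, n), 0)

def zip_negloglik(theta, y):
    # Negative log-likelihood; theta = (logit p, log lam) keeps the
    # optimization unconstrained.
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    log_pois = y * np.log(lam) - lam - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(p + (1 - p) * np.exp(-lam)),  # zero: mixture of both
                  np.log1p(-p) + log_pois)             # positive: at-risk only
    return -ll.sum()

res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print("p_hat:", round(p_hat, 3), "lam_hat:", round(lam_hat, 3))
```

Covariates would enter by replacing the scalar logit p and log lam with linear predictors, as the abstract notes for both model components.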
Subsea adjustable choke valves
Cyvas, M.K.
1989-08-01
With growing emphasis on deepwater wells and marginal offshore fields, the search for reliable subsea production systems has become a high priority. A reliable subsea adjustable choke is essential to the realization of such a system, and recent advances are producing the degree of reliability required. Technological developments have been primarily in (1) trim material (including polycrystalline diamond), (2) trim configuration, (3) computer programs for trim sizing, (4) component materials, and (5) diver/remote-operated-vehicle (ROV) interfaces. These five facets are reviewed and progress to date is reported. A 15- to 20-year service life for adjustable subsea chokes is now a reality. Another factor vital to efficient use of these technological developments is to involve the choke manufacturer and ROV/diver personnel in initial system conceptualization. In this manner, maximum benefit can be derived from the latest technology. Major areas of development still required and under way are listed, and the paper closes with a tabulation of successful subsea choke installations in recent years.
Practical Session: Multiple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Three exercises are proposed to illustrate simple linear regression. The first investigates the influence of several factors on atmospheric pollution; it was proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
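For readers without access to the linked sheets, a self-contained multiple-regression exercise in the same spirit might look like this. The city-level variables and coefficients are invented for illustration and are not the pollution data used in the exercises:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical city-level data: pollution explained by two factors
# (temperature and an industry count); all values are synthetic.
n = 20
temp = rng.normal(55, 7, n)
industry = rng.normal(450, 100, n)
pollution = 30.0 - 0.4 * temp + 0.05 * industry + rng.normal(0, 2, n)

# Design matrix with an intercept column; ordinary least squares.
X = np.column_stack([np.ones(n), temp, industry])
beta, *_ = np.linalg.lstsq(X, pollution, rcond=None)

fitted = X @ beta
r2 = 1 - ((pollution - fitted) ** 2).sum() / ((pollution - pollution.mean()) ** 2).sum()
print("coefficients:", np.round(beta, 3), "R^2:", round(r2, 3))
```

With only 20 observations, as in the first exercise, coefficient estimates carry substantial uncertainty, which is exactly what makes model selection (Exercise 2) nontrivial.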
NASA Astrophysics Data System (ADS)
Tatlier, Mehmet Seha
Random fibrous networks can be found among natural and synthetic materials. Some of these random fibrous networks possess a negative Poisson's ratio and are commonly called auxetic materials. The governing mechanisms behind this counter-intuitive property in random networks are yet to be understood, and this kind of auxetic material remains widely under-explored. Most synthetic auxetic materials, however, suffer from low strength. This shortcoming can be rectified by developing high-strength auxetic composites. Embedding auxetic random fibrous networks in a polymer matrix is an attractive alternative route to the manufacture of auxetic composites; however, before such an approach can be developed, a methodology for designing fibrous networks with the desired negative Poisson's ratios must first be established. This requires an understanding of the factors that bring about negative Poisson's ratios in these materials. In this study, a numerical model is presented to investigate auxetic behavior in compressed random fiber networks. Finite element analyses of three-dimensional stochastic fiber networks were performed to gain insight into the effects of parameters such as network anisotropy, network density, and degree of network compression on the out-of-plane Poisson's ratio and Young's modulus. The simulation results suggest that compression is the critical parameter that gives rise to negative Poisson's ratio, while anisotropy significantly promotes the auxetic behavior. This model can be utilized to design fibrous auxetic materials and to evaluate the feasibility of developing auxetic composites by using auxetic fibrous networks as the reinforcing layer.
Ultra-soft 100 nm thick zero Poisson's ratio film with 60% reversible compressibility
NASA Astrophysics Data System (ADS)
Nguyen, Chieu; Szalewski, Steve; Saraf, Ravi
2013-03-01
Squeezing films of most solids, liquids and granular materials causes dilation in the lateral dimension, which is characterized by a positive Poisson's ratio. Auxetic materials, such as special foams, crumpled graphite, zeolites, spectrin/actin membranes, and carbon nanotube laminates, shrink instead; their Poisson's ratio is negative. As a result of the Poisson effect, the force needed to squeeze an amorphous material, such as a viscous thin film adhered to a rigid surface, increases over a millionfold as the thickness decreases from 10 μm to 100 nm, due to the constraints on lateral deformation and off-plane relaxation. We demonstrate ultra-soft, 100 nm films of a polymer/nanoparticle composite adhered to a 1.25-cm-diameter glass that can be reversibly squeezed to over 60% strain between rigid plates, requiring (very) low stresses below 100 kPa. Unlike in non-zero Poisson's ratio materials, the stiffness decreases with thickness, and the stress distribution is uniform over the film, as mapped electro-optically. The high deformability at very low stresses is explained by considering the reentrant cellular structure found in cork and in the wings of beetles, which have Poisson's ratio near zero.
Species abundance in a forest community in South China: A case of Poisson lognormal distribution
Yin, Z.-Y.; Ren, H.; Zhang, Q.-M.; Peng, S.-L.; Guo, Q.-F.; Zhou, G.-Y.
2005-01-01
Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to species abundance data from an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance, with a similarly small median and mode and a variance larger than the mean, was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and the two Poisson lognormal parameters (μ and σ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ could serve as an alternative measure of diversity.
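The zero-truncated Poisson lognormal used above has no closed-form pmf, but it can be evaluated by numerically integrating the Poisson pmf against a lognormal density for the rate. The parameter values below are illustrative, not the Dinghushan estimates:

```python
import numpy as np
from scipy import integrate, stats

def pln_pmf(k, mu, sigma):
    # Poisson-lognormal pmf: mix Poisson(k | lam) over a lognormal(mu, sigma)
    # distribution for the rate lam.
    f = lambda lam: stats.poisson.pmf(k, lam) * stats.lognorm.pdf(lam, s=sigma, scale=np.exp(mu))
    val, _ = integrate.quad(f, 0, np.inf)
    return val

def pln_truncated_pmf(k, mu, sigma):
    # Zero-truncated version used for species-abundance data (k >= 1).
    p0 = pln_pmf(0, mu, sigma)
    return pln_pmf(k, mu, sigma) / (1.0 - p0)

# Illustrative parameters; a large sigma produces the reverse-J shape
# with a few abundant and many rare species.
probs = [pln_truncated_pmf(k, mu=1.0, sigma=1.2) for k in range(1, 6)]
print(np.round(probs, 4))
```

Fitting to observed abundances would then maximize the sum of log truncated pmf values over species, which is the kind of numerical fit the abstract proposes.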
Adolescent Mothers' Adjustment to Parenting.
ERIC Educational Resources Information Center
Samuels, Valerie Jarvis; And Others
1994-01-01
Examined adolescent mothers' adjustment to parenting, self-esteem, social support, and perceptions of baby. Subjects (n=52) responded to questionnaires at two time periods approximately six months apart. Mothers with higher self-esteem at Time 1 had better adjustment at Time 2. Adjustment was predicted by Time 2 variables; contact with baby's…
NASA Astrophysics Data System (ADS)
Lin, XiaoHui; Zhang, ChiBin; Gu, Jun; Jiang, ShuYun; Yang, JueKuan
2014-11-01
A non-continuous electroosmotic flow model (the PFP model) is built from the Poisson equation, the Fokker-Planck equation and the Navier-Stokes equation, and is used to predict DNA molecule translocation through a nanopore. The PFP model discards the continuum assumption of ion translocation and treats ions as discrete particles. In addition, the model includes the contributions of the Coulomb electrostatic potential between ions, the Brownian motion of ions and viscous friction to ion transport. No ionic diffusion coefficient or other phenomenological parameters are needed in the PFP model. It is worth noting that the PFP model can describe non-equilibrium electroosmotic transport of ions in a channel of a size comparable to the mean free path of an ion. A modified clustering method is proposed for the numerical solution of the PFP model, and ion current translocation through a nanopore with a radius of 1 nm is simulated using the modified clustering method. The external electric field, wall charge density of the nanopore, surface charge density of the DNA, as well as the ion average number density, influence the electroosmotic velocity profile of the electrolyte solution, the velocity of DNA translocation through the nanopore and the ion current blockade. Results show that the ion average number density of the electrolyte and the surface charge density of the nanopore have a significant effect on the translocation velocity of DNA and the ion current blockade. The translocation velocity of DNA is proportional to the surface charge density of the nanopore and inversely proportional to the ion average number density of the electrolyte solution. Thus, the translocation velocity of DNAs can be controlled to improve the accuracy of sequencing by adjusting the external electric field, ion average number density of electrolyte and surface charge density of nanopore. Ion current decreases when the ion average number density is larger than the critical value and increases when the ion average number density is lower than the
Semiparametric regression during 2003–2007*
Ruppert, David; Wand, M.P.; Carroll, Raymond J.
2010-01-01
Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800
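A minimal sketch of the low-rank penalized-spline idea the review centers on, using a truncated-line basis with a ridge penalty on the knot coefficients (the penalized fit is equivalent to the mixed-model formulation mentioned above). The data, knot count, and smoothing parameter are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic scatterplot data around a smooth curve.
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)

# Low-rank spline basis: intercept, slope, and 20 truncated lines (x - kappa)_+.
knots = np.linspace(0, 1, 22)[1:-1]
Z = np.maximum(x[:, None] - knots[None, :], 0)
X = np.column_stack([np.ones_like(x), x, Z])

# Ridge penalty on the knot coefficients only (not intercept/slope).
lam = 1e-2
D = np.diag([0.0, 0.0] + [1.0] * knots.size)
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
fit = X @ beta

err = np.abs(fit - np.sin(2 * np.pi * x))
print("mean abs error vs. truth:", round(float(err.mean()), 3))
```

In the mixed-model view, the penalized knot coefficients are random effects, which is what lets longitudinal and spatial correlation be folded into the same machinery.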
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA. PMID:23741284
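An inhomogeneous Poisson process with piecewise-constant intensity, the structure the LSBP segmentation assumes, can be simulated by Lewis-Shedler thinning. The two-segment intensity below is an illustrative stand-in for an inferred segmentation, not the Cincinnati data:

```python
import numpy as np

rng = np.random.default_rng(4)

def thinning(intensity, lam_max, t_end):
    # Lewis-Shedler thinning: draw homogeneous Poisson(lam_max) event times,
    # keep each with probability intensity(t) / lam_max.
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_end:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

# Piecewise-constant intensity with two segments (rates are illustrative).
intensity = lambda t: 5.0 if t < 50.0 else 20.0
events = thinning(intensity, lam_max=20.0, t_end=100.0)
n1 = int(np.sum(events < 50.0))
n2 = events.size - n1
print("events in low/high segments:", n1, n2)
```

Segmentation methods like the one in the abstract work in the opposite direction: given only the event times, they infer the change points and the per-segment rates.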
Pointwise estimates of solutions for the multi-dimensional bipolar Euler-Poisson system
NASA Astrophysics Data System (ADS)
Wu, Zhigang; Li, Yeping
2016-06-01
In this paper, we consider a multi-dimensional bipolar hydrodynamic model from semiconductor devices and plasmas. This system takes the form of Euler-Poisson with an electric field and frictional damping added to the momentum equations. By making a new analysis of Green's functions for the Euler system with damping and the Euler-Poisson system with damping, we obtain pointwise estimates of the solution for the multi-dimensional bipolar Euler-Poisson system. As a by-product, we extend the decay rates of the densities ρ_i (i = 1, 2) in the usual L^2-norm to the L^p-norm with p ≥ 1, and the time-decay rates of the momenta m_i (i = 1, 2) in the L^2-norm to the L^p-norm with p > 1; all of the decay rates here are optimal.
Fessler, J A
1995-01-01
This paper describes rapidly converging algorithms for computing attenuation maps from Poisson transmission measurements using penalized-likelihood objective functions. We demonstrate that an under-relaxed cyclic coordinate-ascent algorithm converges faster than the convex algorithm of Lange (see ibid., vol.4, no.10, p.1430-1438, 1995), which in turn converges faster than the expectation-maximization (EM) algorithm for transmission tomography. To further reduce computation, one could replace the log-likelihood objective with a quadratic approximation. However, we show with simulations and analysis that the quadratic objective function leads to biased estimates for low-count measurements. Therefore we introduce hybrid Poisson/polynomial objective functions that use the exact Poisson log-likelihood for detector measurements with low counts, but use computationally efficient quadratic or cubic approximations for the high-count detector measurements. We demonstrate that the hybrid objective functions reduce computation time without increasing estimation bias.
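The low-count bias of quadratic objectives can be illustrated on a toy single-coefficient transmission model. The weighted least-squares-on-logs estimator below is a common quadratic-approximation stand-in, not the paper's exact hybrid objective, and all constants (blank-scan counts, attenuation, sample sizes) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy transmission model: y_i ~ Poisson(b * exp(-mu)), single coefficient mu.
b, mu_true, n, reps = 10.0, 0.5, 25, 4000
est_exact = np.empty(reps)
est_wls = np.empty(reps)

for r in range(reps):
    y = rng.poisson(b * np.exp(-mu_true), size=n)
    # Exact Poisson ML for one coefficient: mu_hat = log(b / mean(y)).
    est_exact[r] = np.log(b / y.mean())
    # Quadratic-objective stand-in: weighted least squares on log-transformed
    # counts (weights y_i), which silently drops zero counts.
    pos = y > 0
    est_wls[r] = np.sum(y[pos] * np.log(b / y[pos])) / y[pos].sum()

print("exact ML bias:     ", round(float(est_exact.mean() - mu_true), 4))
print("quadratic/WLS bias:", round(float(est_wls.mean() - mu_true), 4))
```

With mean counts around six per detector, the log-transformed quadratic surrogate is visibly biased while the exact likelihood is nearly unbiased, which is the motivation for using the exact Poisson log-likelihood on low-count measurements in the hybrid objective.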
NASA Astrophysics Data System (ADS)
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely measured polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
Some remarks on the range of Poisson's ratio in isotropic linear elasticity
NASA Astrophysics Data System (ADS)
Tarumi, Ryuichi; Ledbetter, Hassel; Shibutani, Yoji
2012-04-01
We study the range of Poisson's ratio ν in the theory of isotropic linear elasticity. Using the ratio of the Lamé constants χ = λ/μ, we defined the parametrized Poisson's ratio as ν = ν(χ) and found that it represents a standard hyperbola with center (χ, ν) = (-1/3, 1/2) and eccentricity ? . One of the focal points of the hyperbola represents a Born instability criterion (or the equivalent strong ellipticity condition) and the other determines the sign of ν. The hyperbola has a vertex point in the admissible range of ν, and, hence, it naturally divides the range into the two subranges: ? and ? . We also found that there are three equivalent formulas among the nine elastic-constant ratios χ i considered in this study. The geometric conjugate of Poisson's ratio, which is defined as ν* ≔ 1 - ν, has also been proposed.
Westermeier, T; Michaelis, J
1995-03-01
Since 1980 the German Children's Cancer Registry has documented all childhood malignancies in the Federal Republic of Germany. Various statistical procedures have been proposed to identify municipalities or other geographic units with increased numbers of malignancies. Usually the Poisson distribution, which requires the malignancies to be distributed homogeneously and uncorrelated, is applied. Other discrete statistical distributions (so-called cluster distributions) like the generalized or compound Poisson distributions are applicable more generally. In this paper we present a first explorative approach to the question of whether it is necessary to use one of these cluster distributions to model the data of the German Children's Cancer Registry. In conclusion, we find no indication that the Poisson approach is insufficient. PMID:7604164
Developmental Regression in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Rogers, Sally J.
2004-01-01
The occurrence of developmental regression in autism is one of the more puzzling features of this disorder. Although several studies have documented the validity of parental reports of regression using home videos, accumulating data suggest that most children who demonstrate regression also demonstrated previous, subtle, developmental differences.…
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Regression Analysis by Example. 5th Edition
ERIC Educational Resources Information Center
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
Bayesian Unimodal Density Regression for Causal Inference
ERIC Educational Resources Information Center
Karabatsos, George; Walker, Stephen G.
2011-01-01
Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
Streamflow forecasting using functional regression
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.
2016-07-01
Streamflow, as a natural phenomenon, is continuous in time, and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of pointwise values (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They make it possible to consider the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach, and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
Insulin resistance: regression and clustering.
Yoon, Sangho; Assimes, Themistocles L; Quertermous, Thomas; Hsiao, Chin-Fu; Chuang, Lee-Ming; Hwu, Chii-Min; Rajaratnam, Bala; Olshen, Richard A
2014-01-01
In this paper we try to define insulin resistance (IR) precisely for a group of Chinese women. Our definition deliberately does not depend upon body mass index (BMI) or age, although in other studies, with particular random effects models quite different from models used here, BMI accounts for a large part of the variability in IR. We accomplish our goal through application of Gauss mixture vector quantization (GMVQ), a technique for clustering that was developed for application to lossy data compression. Defining data come from measurements that play major roles in medical practice. A precise statement of what the data are is in Section 1. Their family structures are described in detail. They concern levels of lipids and the results of an oral glucose tolerance test (OGTT). We apply GMVQ to residuals obtained from regressions of outcomes of an OGTT and lipids on functions of age and BMI that are inferred from the data. A bootstrap procedure developed for our family data supplemented by insights from other approaches leads us to believe that two clusters are appropriate for defining IR precisely. One cluster consists of women who are IR, and the other of women who seem not to be. Genes and other features are used to predict cluster membership. We argue that prediction with "main effects" is not satisfactory, but prediction that includes interactions may be. PMID:24887437
A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.
Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi
2016-10-01
Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs in PET projection data prior to reconstruction due to physical effects, measurement errors, and correction of dead time, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite for developing efficient reconstruction and processing methods and for reducing noise. The deviation from Poisson statistics in PET data can be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for estimating the λ and ν parameters is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution in both raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially for low-count emission data, as in dynamic PET, where the method demonstrated the best accuracy. PMID:27522237
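A minimal numerical sketch of the CMP distribution follows, truncating the infinite normalizing series at a large cutoff; the λ and ν values are illustrative, not estimates from PET data. It shows how ν < 1 yields a variance above the mean (over-dispersion) and ν > 1 a variance below the mean (under-dispersion).

```python
import math

def cmp_pmf(lam, nu, max_x=400):
    # unnormalized log-weights of the CMP model: x*log(lam) - nu*log(x!),
    # normalized numerically after shifting by the maximum for stability
    logw = [x * math.log(lam) - nu * math.lgamma(x + 1) for x in range(max_x)]
    m = max(logw)
    w = [math.exp(v - m) for v in logw]
    z = sum(w)
    return [wi / z for wi in w]

def mean_var(pmf):
    mean = sum(x * p for x, p in enumerate(pmf))
    var = sum((x - mean) ** 2 * p for x, p in enumerate(pmf))
    return mean, var

m_over, v_over = mean_var(cmp_pmf(3.0, 0.5))    # nu < 1: over-dispersed
m_under, v_under = mean_var(cmp_pmf(3.0, 2.0))  # nu > 1: under-dispersed
```

At ν = 1 the weights reduce to λ^x/x! and the model collapses to an ordinary Poisson distribution with equal mean and variance.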
Calabrese, Justin M; Brunner, Jesse L; Ostfeld, Richard S
2011-01-01
It is well known that parasites are often highly aggregated on their hosts, such that relatively few individuals host the large majority of parasites. When the parasites are vectors of infectious disease, a key consequence of this aggregation can be increased disease transmission rates. The cause of this aggregation, however, is much less clear, especially for parasites such as arthropod vectors, which generally spend only a short time on their hosts. Regression-based analyses of ticks on various hosts have focused almost exclusively on identifying the intrinsic host characteristics associated with large burdens, but these efforts have had mixed results; most host traits examined have some small influence, but none are key. An alternative approach, the Poisson-gamma mixture distribution, has often been used to describe aggregated parasite distributions in a range of host/macroparasite systems, but lacks a clear mechanistic basis. Here, we extend this framework by linking it to a general model of parasite accumulation. Then, focusing on blacklegged ticks (Ixodes scapularis) on mice (Peromyscus leucopus), we fit the extended model to the best currently available larval tick burden datasets via hierarchical Bayesian methods, and use it to explore the relative contributions of intrinsic and extrinsic factors on observed tick burdens. Our results suggest that simple bad luck (inhabiting a home range with high vector density) may play a much larger role in determining parasite burdens than is currently appreciated. PMID:22216216
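The Poisson-gamma mixture can be sketched directly (Python; the shape and scale values are hypothetical, not fitted to the tick data): each host draws an individual exposure rate from a gamma distribution and then a Poisson burden given that rate, which produces exactly the aggregation pattern described above, with a minority of hosts carrying most of the parasites.

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

shape, scale = 0.5, 20.0  # hypothetical gamma-distributed host risk, mean 10
burdens = [poisson(random.gammavariate(shape, scale)) for _ in range(2000)]

mean_b = sum(burdens) / len(burdens)
var_b = sum((b - mean_b) ** 2 for b in burdens) / len(burdens)
burdens.sort(reverse=True)
top20_share = sum(burdens[:400]) / sum(burdens)  # share held by top 20% of hosts
```

Marginally the burdens follow a negative binomial distribution with variance mean + mean²/shape, so a small gamma shape parameter gives strong aggregation.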
Psychosocial Predictors of Adjustment among First Year College of Education Students
ERIC Educational Resources Information Center
Salami, Samuel O.
2011-01-01
The purpose of this study was to examine the contribution of psychological and social factors to the prediction of adjustment to college. A total of 250 first year students from colleges of education in Kwara State, Nigeria, completed measures of self-esteem, emotional intelligence, stress, social support and adjustment. Regression analyses…
ERIC Educational Resources Information Center
Hickman, Gregory P.; Bartholomae, Suzanne; McKenry, Patrick C.
2000-01-01
Examines the relationship between parenting styles and academic achievement and adjustment of traditional college freshmen (N=101). Multiple regression models indicate that authoritative parenting style was positively related to student's academic adjustment. Self-esteem was significantly predictive of social, personal-emotional, goal…
ERIC Educational Resources Information Center
Raymond, Mark R.; Harik, Polina; Clauser, Brian E.
2011-01-01
Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…
Ni, Wei; Ding, Guoyong; Li, Yifei; Li, Hongkai; Jiang, Baofa
2014-01-01
Background Xinxiang, a city in Henan Province, suffered from frequent floods due to persistent and heavy precipitation from 2004 to 2010. In the same period, dysentery was a common public health problem in Xinxiang, with the proportion of reported cases being the third highest among all the notified infectious diseases. Objectives We focused on the dysentery consequences of floods of different degrees and examined the association between floods and the morbidity of dysentery on the basis of longitudinal data during the study period. Design A time-series Poisson regression model was used to examine the relationship between 10 flood events of different degrees and the monthly morbidity of dysentery from 2004 to 2010 in Xinxiang. Relative risks (RRs) of moderate and severe floods on the morbidity of dysentery were calculated in this paper. In addition, we estimated the attributable contributions of moderate and severe floods to the morbidity of dysentery. Results A total of 7591 cases of dysentery were notified in Xinxiang during the study period. The effect of floods on dysentery was shown with a 0-month lag. Regression analysis showed that the risk of moderate and severe floods on the morbidity of dysentery was 1.55 (95% CI: 1.42–1.67) and 1.74 (95% CI: 1.56–1.94), respectively. The attributable risk proportions (ARPs) of moderate and severe floods to the morbidity of dysentery were 35.53% and 42.48%, respectively. Conclusions This study confirms that floods have significantly increased the risk of dysentery in the study area. In addition, severe floods have a higher proportional contribution to the morbidity of dysentery than moderate floods. Public health action should be taken to avoid and control a potential risk of dysentery epidemics after floods. PMID:25098726
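For a single binary exposure such as a flood indicator, the Poisson-regression relative risk reduces to a ratio of mean counts, with a Wald interval on the log scale. A minimal sketch under hypothetical values (a baseline of 80 monthly cases and a true RR of 1.6; these are not the Xinxiang data):

```python
import math
import random

random.seed(2)

def poisson(lam):
    # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

base, rr_true = 80.0, 1.6  # hypothetical monthly baseline and flood-month RR
y0 = [poisson(base) for _ in range(70)]            # non-flood months
y1 = [poisson(base * rr_true) for _ in range(14)]  # flood months

rr_hat = (sum(y1) / len(y1)) / (sum(y0) / len(y0))  # closed-form Poisson RR
se = math.sqrt(1.0 / sum(y1) + 1.0 / sum(y0))       # SE of log(RR)
ci = (rr_hat * math.exp(-1.96 * se), rr_hat * math.exp(1.96 * se))
```

With more covariates (trend, seasonality, lags) the same RR = exp(β) interpretation carries over, but the coefficients must be fitted iteratively rather than in closed form.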
The Vlasov-Poisson System for Stellar Dynamics in Spaces of Constant Curvature
NASA Astrophysics Data System (ADS)
Diacu, Florin; Ibrahim, Slim; Lind, Crystal; Shen, Shengyi
2016-09-01
We obtain a natural extension of the Vlasov-Poisson system for stellar dynamics to spaces of constant Gaussian curvature κ ≠ 0: the unit sphere S², for κ > 0, and the unit hyperbolic sphere H², for κ < 0. These equations can be easily generalized to higher dimensions. When the particles move on a geodesic, the system reduces to a 1-dimensional problem that is more singular than the classical analogue of the Vlasov-Poisson system. In the analysis of this reduced model, we study the well-posedness of the problem and derive Penrose-type conditions for linear stability around homogeneous solutions in the sense of Landau damping.
A Family of Poisson Processes for Use in Stochastic Models of Precipitation
NASA Astrophysics Data System (ADS)
Penland, C.
2013-12-01
Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.
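A compound Poisson process can be sketched directly: the number of rain cells per day is Poisson, and each cell contributes an independent random amount. The rates and cell depths below are illustrative assumptions, not a fitted precipitation parameterization.

```python
import math
import random

random.seed(3)

def poisson(lam):
    # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

rate, mean_cell = 0.8, 5.0  # hypothetical: 0.8 cells/day, mean 5 mm per cell
days = [sum(random.expovariate(1.0 / mean_cell) for _ in range(poisson(rate)))
        for _ in range(50000)]

mean_d = sum(days) / len(days)
var_d = sum((d - mean_d) ** 2 for d in days) / len(days)
# theory: mean = rate * mean_cell = 4.0; var = rate * 2 * mean_cell**2 = 40.0
```

The variance exceeding the mean reflects the heavy, intermittent character of daily rainfall totals that makes compound Poisson models attractive here.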
Doubly stochastic Poisson process models for precipitation at fine time-scales
NASA Astrophysics Data System (ADS)
Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao
2012-09-01
This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, for the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site, but an extension to multiple sites is illustrated, which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
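A doubly stochastic (Cox) process can be simulated by drawing a random intensity function for each realisation and thinning a homogeneous Poisson process against it (Lewis-Shedler thinning). The intensity form below is an illustrative assumption, not the paper's fitted model; the random baseline makes the counts over-dispersed relative to a plain Poisson process.

```python
import math
import random

random.seed(4)

def thinned_counts(T=10.0):
    # doubly stochastic: the baseline level is itself random per realisation
    a = random.uniform(5.0, 15.0)
    lam_max = 1.5 * a  # bound on lam(t) = a * (1 + 0.5*sin), needed for thinning
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam_max)  # candidate events at rate lam_max
        if t > T:
            return n
        lam_t = a * (1.0 + 0.5 * math.sin(2.0 * math.pi * t))
        if random.random() * lam_max <= lam_t:  # accept with prob lam(t)/lam_max
            n += 1

counts = [thinned_counts() for _ in range(500)]
mean_c = sum(counts) / len(counts)
var_c = sum((c - mean_c) ** 2 for c in counts) / len(counts)
```

For a homogeneous Poisson process the variance-to-mean ratio of counts would be near 1; randomizing the intensity inflates it, which is the feature these models exploit for rainfall.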
Modeling spiking behavior of neurons with time-dependent Poisson processes.
Shinomoto, S; Tsubo, Y
2001-10-01
Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
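The three interval statistics can be computed directly from a sequence of inter-spike intervals. As a sanity check on illustrative synthetic data (not recorded neural data), a homogeneous Poisson process has exponential intervals, so the coefficient of variation ≈ 1, the skewness coefficient ≈ 2, and the lag-1 correlation ≈ 0:

```python
import math
import random

random.seed(5)

def interval_stats(isis):
    # coefficient of variation, skewness, and lag-1 serial correlation of intervals
    n = len(isis)
    m = sum(isis) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in isis) / n)
    cv = sd / m
    skew = sum((x - m) ** 3 for x in isis) / (n * sd ** 3)
    num = sum((isis[i] - m) * (isis[i + 1] - m) for i in range(n - 1))
    corr = num / (n * sd ** 2)
    return cv, skew, corr

isis = [random.expovariate(1.0) for _ in range(20000)]
cv, skew, corr = interval_stats(isis)
```

Time-dependent Poisson processes shift these coefficients away from the (1, 2, 0) baseline, which is what allows them to be matched against cortical data.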
Arnold, J.; Kosson, D.S.; Garrabrants, A.; Meeussen, J.C.L.; Sloot, H.A. van der
2013-02-15
A robust numerical solution of the nonlinear Poisson-Boltzmann equation for asymmetric polyelectrolyte solutions in discrete pore geometries is presented. Comparisons to the linearized approximation of the Poisson-Boltzmann equation reveal that the assumptions leading to linearization may not be appropriate for the electrochemical regime in many cementitious materials. Implications of the electric double layer on both partitioning of species and on diffusive release are discussed. The influence of the electric double layer on anion diffusion relative to cation diffusion is examined.
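For the planar symmetric-electrolyte case there is a classical closed form (the Gouy-Chapman solution) that makes the linearization issue concrete: in dimensionless units the nonlinear profile is y(x) = 2 ln[(1 + γe^{-κx})/(1 − γe^{-κx})] with γ = tanh(y₀/4), while the linearized (Debye-Hückel) profile is y₀e^{-κx}. The sketch below uses illustrative surface potentials, not the electrochemical regime of any particular cementitious material, to show that the two agree at low potential and diverge at high potential.

```python
import math

def pb_exact(y0, kx):
    # Gouy-Chapman solution of the nonlinear planar Poisson-Boltzmann equation
    # (symmetric electrolyte; potential in units of kT/e, distance in Debye lengths)
    g = math.tanh(y0 / 4.0)
    e = math.exp(-kx)
    return 2.0 * math.log((1.0 + g * e) / (1.0 - g * e))

def pb_linear(y0, kx):
    # linearized (Debye-Hueckel) profile
    return y0 * math.exp(-kx)

low = (pb_exact(0.1, 1.0), pb_linear(0.1, 1.0))   # low potential: nearly identical
high = (pb_exact(4.0, 1.0), pb_linear(4.0, 1.0))  # high potential: linear overshoots
```

The linearized profile always overestimates the nonlinear one at one Debye length for large surface potentials, illustrating why linearization can misrepresent double-layer partitioning.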
Influence of the Poisson Ratio on the Natural Frequencies of Stepped-Thickness Circular Plate
NASA Astrophysics Data System (ADS)
AL-JUMAILY, A. M.; JAMEEL, K.
2000-07-01
The natural frequencies of simply supported and clamped, stepped-thickness plates are determined using classical plate solutions with exact continuity conditions at the step. The effect of incorporating the Poisson ratio in the continuity conditions on the natural frequencies for nodal diameters 0, 1 and nodal interior circle numbers 0, 1, 2 is thoroughly investigated. For engineering applications, a design criterion is proposed for simply supported and clamped plates based on an approximate linear model for the natural frequencies. The literature lacks experimental results on this type of plate. Hence, this paper presents experimental results for four models with two Poisson ratios, which prove consistent with the proposed criterion.
A critical comparison of the numerical solution of the 1D filtered Vlasov-Poisson equation
NASA Astrophysics Data System (ADS)
Viñas, A. F.; Klimas, A. J.
2003-04-01
We present a comparison of the numerical solution of the filtered Vlasov-Poisson system of equations using the Fourier-Fourier and the Flux-Balance-MacCormack methods in the electrostatic, non-relativistic case. We show that the splitting method combined with the Flux-Balance-MacCormack scheme provides an efficient and accurate scheme for integrating the filtered Vlasov-Poisson system in their self-consistent field. Finally we present various typical problems of interest in plasma physics research which can be studied with the scheme presented here.
NASA Astrophysics Data System (ADS)
Watanabe, Hirofumi; Okiyama, Yoshio; Nakano, Tatsuya; Tanaka, Shigenori
2010-11-01
We developed the FMO-PB method, which incorporates solvation effects into the fragment molecular orbital calculation via the Poisson-Boltzmann equation. This method retains good accuracy in energy calculations with reduced computational time. We calculated the solvation free energies for polyalanines, the Alpha-1 peptide, the tryptophan cage, and the complex of the estrogen receptor with 17β-estradiol to show the applicability of this method to practical systems. From the calculated results, it was confirmed that the FMO-PB method is useful for large biomolecules in solution. We also discuss the electric charges used in solving the Poisson-Boltzmann equation.
iAPBS: a programming interface to Adaptive Poisson-Boltzmann Solver (APBS).
Konecny, Robert; Baker, Nathan A; McCammon, J Andrew
2012-07-26
The Adaptive Poisson-Boltzmann Solver (APBS) is a state-of-the-art suite for performing Poisson-Boltzmann electrostatic calculations on biomolecules. The iAPBS package provides a modular programmatic interface to the APBS library of electrostatic calculation routines. The iAPBS interface library can be linked with a FORTRAN or C/C++ program, thus making all of the APBS functionality available from within the application. Several application modules for popular molecular dynamics simulation packages (Amber, NAMD, and CHARMM) are distributed with iAPBS, allowing users of these packages to perform implicit solvent electrostatic calculations with APBS. PMID:22905037
The Vlasov-Poisson Dynamics as the Mean Field Limit of Extended Charges
NASA Astrophysics Data System (ADS)
Lazarovici, Dustin
2016-10-01
The paper treats the validity problem of the nonrelativistic Vlasov-Poisson equation in d ≥ 2 dimensions. It is shown that the Vlasov-Poisson dynamics can be derived as a combined mean field and point-particle limit of an N-particle Coulomb system of extended charges. This requires a sufficiently fast convergence of the initial empirical distributions. If the electron radius decreases slower than N^(-1/(d(d+2))), the corresponding initial configurations are typical. This result entails propagation of molecular chaos for the respective dynamics.
On Poisson's ratio for metal matrix composite laminates. [aluminum boron composites
NASA Technical Reports Server (NTRS)
Herakovich, C. T.; Shuart, M. J.
1978-01-01
The definition of Poisson's ratio for nonlinear behavior of metal matrix composite laminates is discussed and experimental results for tensile and compressive loading of five different boron-aluminum laminates are presented. It is shown that there may be considerable difference in the value of Poisson's ratio as defined by a total strain or an incremental strain definition. It is argued that the incremental definition is more appropriate for nonlinear material behavior. Results from a (0) laminate indicate that the incremental definition provides a precursor to failure which is not evident if the total strain definition is used.
A spatial Poisson Point Process to classify coconut fields on Ikonos pansharpened images
NASA Astrophysics Data System (ADS)
Teina, R.; Béréziat, D.; Stoll, B.
2008-12-01
The goal of this study is to classify coconut fields, observed on remote sensing images, according to their spatial distribution. For that purpose, we use a technique of point pattern analysis to characterize a set of points spatially. These points are obtained after a coconut-tree segmentation process on Ikonos images. Coconut fields whose trees do not follow a Poisson point process are identified as maintained; the remaining fields are characterized as wild. A spatial analysis is then used to establish the Poisson intensity locally and thereby to characterize the degree of wildness.
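A standard first check of whether a point pattern is consistent with a homogeneous Poisson process is the quadrat index of dispersion (the variance-to-mean ratio of quadrat counts), which is ≈ 1 under complete spatial randomness and well above 1 for clustered patterns. A sketch with synthetic points (not the actual Ikonos segmentation output):

```python
import random

random.seed(6)

def dispersion_index(points, grid=5):
    # variance-to-mean ratio of counts over a grid x grid quadrat partition
    counts = [0] * (grid * grid)
    for x, y in points:
        counts[min(int(x * grid), grid - 1) * grid
               + min(int(y * grid), grid - 1)] += 1
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    return v / m

# complete spatial randomness: 400 independent uniform points in the unit square
csr = [(random.random(), random.random()) for _ in range(400)]

# clustered pattern: 20 tight clumps of 20 points each
clustered = []
for _ in range(20):
    cx, cy = random.random(), random.random()
    for _ in range(20):
        clustered.append((min(max(cx + random.gauss(0, 0.02), 0.0), 0.999),
                          min(max(cy + random.gauss(0, 0.02), 0.0), 0.999)))

i_csr = dispersion_index(csr)              # expected near 1
i_clustered = dispersion_index(clustered)  # well above 1
```

In the maintained/wild classification above, a pattern whose index departs strongly from 1 would be flagged as not following a Poisson point process.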
Reentrant Origami-Based Metamaterials with Negative Poisson's Ratio and Bistability
NASA Astrophysics Data System (ADS)
Yasuda, H.; Yang, J.
2015-05-01
We investigate the unique mechanical properties of reentrant 3D origami structures based on the Tachi-Miura polyhedron (TMP). We explore the potential usage as mechanical metamaterials that exhibit tunable negative Poisson's ratio and structural bistability simultaneously. We show analytically and experimentally that the Poisson's ratio changes from positive to negative and vice versa during its folding motion. In addition, we verify the bistable mechanism of the reentrant 3D TMP under rigid origami configurations without relying on the buckling motions of planar origami surfaces. This study forms a foundation in designing and constructing TMP-based metamaterials in the form of bellowslike structures for engineering applications.
Improved blowup theorems for the Euler-Poisson equations with attractive forces
NASA Astrophysics Data System (ADS)
Li, Rui; Lin, Xing; Ma, Zongwei
2016-07-01
Our discussion here mainly focuses on the formation of singularities for solutions to the N-dimensional Euler-Poisson equations with attractive forces, in radial symmetry. Motivated by the integration method of Yuen, we prove two blow-up results under the conditions that the solutions have compact radius R(t) or have no compact support restriction, which generalize the ones Yuen obtained in 2011 [M. W. Yuen, "Blowup for the C1 solution of the Euler-Poisson equations of gaseous stars in RN," J. Math. Anal. Appl. 383, 627-633 (2011)].
The noncommutative Poisson bracket and the deformation of the family algebras
Wei, Zhaoting
2015-07-15
The family algebras were introduced by Kirillov in 2000. In this paper, we study the noncommutative Poisson bracket P on the classical family algebra C_τ(g). We show that P controls the first-order 1-parameter formal deformation from C_τ(g) to Q_τ(g), where the latter is the quantum family algebra. Moreover, we prove that the noncommutative Poisson bracket is in fact a Hochschild 2-coboundary, and therefore the deformation is infinitesimally trivial. In the last part of the paper, we discuss the relation between Mackey's analogue and the quantization problem of the family algebras.
Fully Regressive Melanoma: A Case Without Metastasis.
Ehrsam, Eric; Kallini, Joseph R; Lebas, Damien; Khachemoune, Amor; Modiano, Philippe; Cotten, Hervé
2016-08-01
Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of host immune response. Although 10 to 35 percent of cases of cutaneous melanoma may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis in a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418
A Negative Binomial Regression Model for Accuracy Tests
ERIC Educational Resources Information Center
Hung, Lai-Fa
2012-01-01
Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
Delay Adjusted Incidence Infographic
This Infographic shows the National Cancer Institute SEER Incidence Trends. The graphs show the Average Annual Percent Change (AAPC) 2002-2011. For Men, Thyroid: 5.3*, Liver & IBD: 3.6*, Melanoma: 2.3*, Kidney: 2.0*, Myeloma: 1.9*, Pancreas: 1.2*, Leukemia: 0.9*, Oral Cavity: 0.5, Non-Hodgkin Lymphoma: 0.3*, Esophagus: -0.1, Brain & ONS: -0.2*, Bladder: -0.6*, All Sites: -1.1*, Stomach: -1.7*, Larynx: -1.9*, Prostate: -2.1*, Lung & Bronchus: -2.4*, and Colon & Rectum: -3.0*. For Women, Thyroid: 5.8*, Liver & IBD: 2.9*, Myeloma: 1.8*, Kidney: 1.6*, Melanoma: 1.5, Corpus & Uterus: 1.3*, Pancreas: 1.1*, Leukemia: 0.6*, Brain & ONS: 0, Non-Hodgkin Lymphoma: -0.1, All Sites: -0.1, Breast: -0.3, Stomach: -0.7*, Oral Cavity: -0.7*, Bladder: -0.9*, Ovary: -0.9*, Lung & Bronchus: -1.0*, Cervix: -2.4*, and Colon & Rectum: -2.7*. * AAPC is significantly different from zero (p<.05). Rates were adjusted for reporting delay in the registry. www.cancer.gov Source: Special section of the Annual Report to the Nation on the Status of Cancer, 1975-2011.
Developmental regression in autism spectrum disorder
Al Backer, Nouf Backer
2015-01-01
The occurrence of developmental regression in autism spectrum disorder (ASD) is one of the most puzzling phenomena of this disorder. Little is known about the nature and mechanism of developmental regression in ASD. About one-third of young children with ASD lose some skills during the preschool period, usually speech, but sometimes also nonverbal communication, social, or play skills. There is considerable evidence suggesting that most children who demonstrate regression also had previous, subtle developmental differences. It is difficult to predict the prognosis of autistic children with developmental regression. It seems that earlier development of social, language, and attachment behaviors followed by regression does not predict the later recovery of skills or better developmental outcomes. The underlying mechanisms that lead to regression in autism are unknown. The role of subclinical epilepsy in the developmental regression of children with autism remains unclear. PMID:27493417
Racial identity and reflected appraisals as influences on Asian Americans' racial adjustment.
Alvarez, A N; Helms, J E
2001-08-01
J. E. Helms's (1990) racial identity psychodiagnostic model was used to examine the contribution of racial identity schemas and reflected appraisals to the development of healthy racial adjustment of Asian American university students (N = 188). Racial adjustment was operationally defined as collective self-esteem and awareness of anti-Asian racism. Multiple regression analyses suggested that racial identity schemas and reflected appraisals were significantly predictive of Asian Americans' racial adjustment. Implications for counseling and future research are discussed.
A Portrait of Poisson: A Fish out of Water Who Found His Calling.
ERIC Educational Resources Information Center
Geller, B.; Bruk, Y.
1991-01-01
Presents a brief historical sketch of the life and work of one of the founders of modern mathematical physics. Discusses three problem-solving applications of the Poisson distribution with examples from elementary probability theory. Provides background on two of his noteworthy results from the physics of oscillations and the deformation of rigid…
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments
ERIC Educational Resources Information Center
Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.
2008-01-01
Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…
Limit theorems for dilute quantum systems leading to quantum poisson processes
NASA Astrophysics Data System (ADS)
Alicki, Robert; Rudnicki, Sławomir; Sadowski, Sławomir
1993-12-01
The limit theorems for sums of independent or correlated operators representing observables of dilute quantum systems and leading to quantum Poisson processes are proved. Examples of systems of unstable particles and a Fermi lattice gas are discussed. For the latter, relations between low density limit and central limit are given.
Mixtures of compound Poisson processes as models of tick-by-tick financial data
NASA Astrophysics Data System (ADS)
Scalas, Enrico
2007-10-01
A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.
Dynamic Young's Moduli, Poisson's Ratios and Damping Ratios of Antrim Oil Shale
Somogyi, F.
1980-01-01
Dynamic Young's Modulus, Poisson's Ratio and Damping Ratio in the horizontal and vertical directions have been measured on approximately 30 samples recovered from Well No. 201 and three samples from Well No. 107. Young's Moduli and Damping Ratios were determined by means of a longitudinal resonant frequency technique. Impact of strain gaged specimens was utilized for measurement of Poisson's Ratios. Longitudinal resonant frequency in the direction perpendicular to the bedding planes (i.e., vertical) is extremely sensitive to discontinuities in the test specimen. For samples from Well No. 201, vertical Young's Modulus varied from 2.5 to 26 GN/m²; Damping Ratio from approximately 1 to 4%; and Poisson's Ratio, in general, from 0.2 to 0.3. In the horizontal direction, Young's Modulus tends to increase with depth, generally ranging in value from 28 to 42 GN/m²; Damping Ratio varies randomly from 1 to 3%, and Poisson's Ratio remains approximately constant at 0.3.
Measurement of Young's modulus and Poisson's ratio of human hair using optical techniques
NASA Astrophysics Data System (ADS)
Hu, Zhenxing; Li, Gaosheng; Xie, Huimin; Hua, Tao; Chen, Pengwan; Huang, Fenglei
2009-12-01
Human hair is a complex nanocomposite fiber whose physical appearance and mechanical strength are governed by a variety of factors, such as ethnicity, cleaning, grooming, chemical treatments and environment. Characterization of the mechanical properties of hair is essential for developing better cosmetic products and advancing biological and cosmetic science, so the behavior of hair under tension is of interest to beauty care science. Human hair fibers experience tensile forces as they are groomed and styled. Previous research on tensile testing of human hair has focused mainly on the longitudinal direction, measuring properties such as elastic modulus, yield strength, breaking strength and strain at break after different treatments. In this research, an experiment for evaluating the mechanical properties of human hair, such as Young's modulus and Poisson's ratio, was designed and conducted. The principle of the experimental instrument is presented, and the testing system used to evaluate Young's modulus and Poisson's ratio is introduced. The range of Poisson's ratio of hair from the same person was evaluated. Experiments were also conducted to test the mechanical properties of human hair after treatment with acid, aqueous alkali and neutral solutions. Young's modulus and Poisson's ratio are explained based on these experimental results, which can be useful for hair treatment and cosmetic product development.
ERIC Educational Resources Information Center
Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David
2012-01-01
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…
Analysis of Large Data Logs: An Application of Poisson Sampling on Excite Web Queries.
ERIC Educational Resources Information Center
Ozmutlu, H. Cenk; Spink, Amanda; Ozmutlu, Seda
2002-01-01
Discusses the need for tools that allow effective analysis of search engine queries to provide a greater understanding of Web users' information seeking behavior and describes a study that developed an effective strategy for selecting samples from large-scale data sets. Reports on Poisson sampling with data logs from the Excite search engine.…
Positive bound state solutions for some Schrödinger-Poisson systems
NASA Astrophysics Data System (ADS)
Cerami, Giovanna; Molle, Riccardo
2016-10-01
The paper deals with a class of Schrödinger-Poisson systems, where the coupling term and the other coefficients do not have any symmetry property. Moreover, the setting we consider does not allow the existence of ground state solutions. Under suitable assumptions on the decay rate of the coefficients, we prove existence of a bound state, finite energy solution.
ERIC Educational Resources Information Center
Sichel, H. S.
1992-01-01
Discusses the use of the generalized inverse Gaussian-Poisson (GIGP) distribution in bibliometric studies. The main types of size-frequency distributions are described; bibliometric distributions in logarithms are examined; parameter estimation is discussed; and goodness-of-fit tests are considered. Examples of applications are included. (17…
ERIC Educational Resources Information Center
Kayser, Brian D.
The fit of educational aspirations of Illinois rural high school youths to 3 related one-parameter mathematical models was investigated. The models used were the continuous-time Markov chain model, the discrete-time Markov chain, and the Poisson distribution. The sample of 635 students responded to questionnaires from 1966 to 1969 as part of an…
Birth and Death Process Modeling Leads to the Poisson Distribution: A Journey Worth Taking
ERIC Educational Resources Information Center
Rash, Agnes M.; Winkel, Brian J.
2009-01-01
This paper describes details of development of the general birth and death process from which we can extract the Poisson process as a special case. This general process is appropriate for a number of courses and units in courses and can enrich the study of mathematics for students as it touches and uses a diverse set of mathematical topics, e.g.,…
Updating a Classic: "The Poisson Distribution and the Supreme Court" Revisited
ERIC Educational Resources Information Center
Cole, Julio H.
2010-01-01
W. A. Wallis studied vacancies in the US Supreme Court over a 96-year period (1837-1932) and found that the distribution of the number of vacancies per year could be characterized by a Poisson model. This note updates this classic study.
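The kind of check described in this abstract is easy to reproduce in miniature: compare observed yearly counts against Poisson expectations. The sketch below uses an assumed, illustrative rate of 0.5 vacancies per year, not a figure taken from either study.

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of observing exactly k events under a Poisson(lam) model."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Illustrative rate (an assumption for this sketch): 0.5 vacancies per year.
lam = 0.5

# Expected number of years, out of a 96-year span, with k vacancies;
# Wallis's test amounted to comparing such expectations with observed counts.
expected_years = {k: 96 * poisson_pmf(k, lam) for k in range(4)}
```

With a rate below 1, years with zero vacancies dominate, and each additional vacancy count becomes sharply rarer, which is the qualitative pattern a Poisson fit must reproduce.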
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
About solvability of some boundary value problems for Poisson equation in a ball
NASA Astrophysics Data System (ADS)
Koshanova, Maira D.; Usmanov, Kairat I.; Turmetov, Batirkhan Kh.
2016-08-01
In the present paper, we study properties of some integro-differential operators of fractional order. As an application of the properties of these operators for Poisson equation we examine questions on solvability of a fractional analogue of the Neumann problem and analogues of periodic boundary value problems for circular domains. The exact conditions for solvability of these problems are found.
NASA Astrophysics Data System (ADS)
Li, Hua; Hirakawa, Kazuhiko; Cao, Jun-Cheng
2013-08-01
We have investigated the importance of the Poisson potential induced by intentional doping on the band structures of two-well scattering-injection terahertz quantum-cascade lasers, using a self-consistent Schrödinger-Poisson method. The calculated results show that an increase in doping density leads to a dramatic increase in the Poisson potential: every 10^10 cm^-2 increase in sheet density adds about 0.58 meV of Poisson potential. As the doping is increased from 3.6×10^10 to 3.0×10^11 cm^-2, the calculated optical transition energy shows a significant shift (27% increase). Taking into account the free-carrier absorption loss and the scattering injection efficiency, a narrow doping region in the wide GaAs well is recommended to minimize the influence of the Poisson potential on the band structures.
Life Events, Sibling Warmth, and Youths' Adjustment.
Waite, Evelyn B; Shanahan, Lilly; Calkins, Susan D; Keane, Susan P; O'Brien, Marion
2011-10-01
Sibling warmth has been identified as a protective factor from life events, but stressor-support match-mismatch and social domains perspectives suggest that sibling warmth may not efficiently protect youths from all types of life events. We tested whether sibling warmth moderated the association between each of family-wide, youths' personal, and siblings' personal life events and both depressive symptoms and risk-taking behaviors. Participants were 187 youths aged 9-18 (M = 11.80 years old, SD = 2.05). Multiple regression models revealed that sibling warmth was a protective factor from depressive symptoms for family-wide events, but not for youths' personal and siblings' personal life events. Findings highlight the importance of contextualizing protective functions of sibling warmth by taking into account the domains of stressors and adjustment. PMID:22241934
LRGS: Linear Regression by Gibbs Sampling
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2016-02-01
LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
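The Gibbs-sampling idea behind LRGS can be sketched far more simply than its full multivariate machinery: for a one-parameter regression through the origin with a flat prior, the sampler alternates between drawing the slope given the noise variance and the variance given the slope. Everything below (the synthetic data, priors, and iteration counts) is illustrative and is not LRGS's actual algorithm.

```python
import random
import statistics

random.seed(42)

# Synthetic data for a regression through the origin: y = b*x + noise
true_b = 2.0
x = [i / 10 for i in range(1, 101)]
y = [true_b * xi + random.gauss(0.0, 0.5) for xi in x]

n = len(x)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
b_hat = sxy / sxx  # OLS slope, which is also the conditional posterior mean

b, sigma2 = b_hat, 1.0
draws = []
for it in range(2000):
    # Draw slope given variance:  b | sigma2, y ~ Normal(b_hat, sigma2/sxx)
    b = random.gauss(b_hat, (sigma2 / sxx) ** 0.5)
    # Draw variance given slope:  sigma2 | b, y ~ Inverse-Gamma(n/2, SSE/2)
    sse = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    sigma2 = 1.0 / random.gammavariate(n / 2, 2.0 / sse)
    if it >= 500:  # discard burn-in draws
        draws.append(b)

posterior_mean_b = statistics.fmean(draws)
```

Kelly's algorithm and LRGS extend this same alternation to multiple covariates, measurement errors in all quantities, and intrinsic scatter, but each conditional draw plays the same role as the two above.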
Quantile regression applied to spectral distance decay
Rocchini, D.; Cade, B.S.
2008-01-01
Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
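The mechanical difference from OLS is the loss function: quantile regression minimizes the asymmetric "pinball" (check) loss rather than squared error. A minimal sketch with a zero-inflated sample (the numbers are made up, loosely mimicking data with many zeros) shows why upper quantiles see structure the median or mean misses:

```python
def pinball_loss(residual: float, tau: float) -> float:
    """Check (pinball) loss; minimizing it yields the tau-th quantile."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

# Zero-inflated sample: many exact zeros plus a few large values.
data = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 5.0, 9.0]

def total_loss(c: float, tau: float) -> float:
    return sum(pinball_loss(y - c, tau) for y in data)

# Coarse grid search for the constant fit minimizing each loss: the tau = 0.5
# fit collapses onto the (zero) median, while tau = 0.9 tracks the upper tail.
grid = [c / 10 for c in range(101)]
best_median = min(grid, key=lambda c: total_loss(c, 0.5))
best_q90 = min(grid, key=lambda c: total_loss(c, 0.9))
```

Replacing the constant with a line in spectral distance gives quantile regression proper; the asymmetric loss is unchanged.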
Regression Calibration with Heteroscedastic Error Variance
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187
Process modeling with the regression network.
van der Walt, T; Barnard, E; van Deventer, J
1995-01-01
A new connectionist network topology called the regression network is proposed. The structural and underlying mathematical features of the regression network are investigated. Emphasis is placed on the intricacies of the optimization process for the regression network and some measures to alleviate these difficulties of optimization are proposed and investigated. The ability of the regression network algorithm to perform either nonparametric or parametric optimization, as well as a combination of both, is also highlighted. It is further shown how the regression network can be used to model systems which are poorly understood on the basis of sparse data. A semi-empirical regression network model is developed for a metallurgical processing operation (a hydrocyclone classifier) by building mechanistic knowledge into the connectionist structure of the regression network model. Poorly understood aspects of the process are provided for by use of nonparametric regions within the structure of the semi-empirical connectionist model. The performance of the regression network model is compared to the corresponding generalization performance results obtained by some other nonparametric regression techniques.
Hybrid fuzzy regression with trapezoidal fuzzy data
NASA Astrophysics Data System (ADS)
Razzaghnia, T.; Danesh, S.; Maleki, A.
2011-12-01
This research deals with a method for hybrid fuzzy least-squares regression. The extension of symmetric triangular fuzzy coefficients to asymmetric trapezoidal fuzzy coefficients is considered an effective measure for removing unnecessary fuzziness from the linear fuzzy model. First, a trapezoidal fuzzy variable is applied to derive a bivariate regression model. Then, normal equations are formulated to solve for the four parts of the hybrid regression coefficients, and the model is extended to multiple regression analysis. Finally, the method is compared with Y.-H.O. Chang's model.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we assume that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R(2)) indicates the importance of the independent variables in the outcome.
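The quantities named in this abstract (slope b, intercept a, and the coefficient of determination R²) follow directly from the least-squares formulas. The dose-response numbers below are made up purely for illustration:

```python
# Made-up illustrative data
x = [1, 2, 3, 4, 5]             # independent variable (e.g. dose)
y = [2.1, 4.0, 6.2, 7.9, 10.1]  # dependent variable (e.g. response)

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope b and intercept a of the least-squares line Y = a + b*x
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / sum(
    (xi - mean_x) ** 2 for xi in x
)
a = mean_y - b * mean_x  # the value of Y when X equals 0

# Coefficient of determination R^2: share of variance explained by the line
ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
```

With these numbers the slope is about 1.99 response units per dose unit, and R² is above 0.99, i.e. the line explains nearly all the variation.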
Geodesic least squares regression on information manifolds
Verdoolaege, Geert
2014-12-05
We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.
Mood Adjustment via Mass Communication.
ERIC Educational Resources Information Center
Knobloch, Silvia
2003-01-01
Proposes and experimentally tests mood adjustment approach, complementing mood management theory. Discusses how results regarding self-exposure across time show that patterns of popular music listening among a group of undergraduate students differ with initial mood and anticipation, lending support to mood adjustment hypotheses. Describes how…
Spousal Adjustment to Myocardial Infarction.
ERIC Educational Resources Information Center
Ziglar, Elisa J.
This paper reviews the literature on the stresses and coping strategies of spouses of patients with myocardial infarction (MI). It attempts to identify specific problem areas of adjustment for the spouse and to explore the effects of spousal adjustment on patient recovery. Chapter one provides an overview of the importance in examining the…
Data correction for seven activity trackers based on regression models.
Andalibi, Vafa; Honko, Harri; Christophe, Francois; Viik, Jari
2015-08-01
Using an activity tracker to measure activity-related parameters, e.g. steps and energy expenditure (EE), can be very helpful in assisting a person's fitness improvement. Unlike counting steps, accurate EE estimation requires additional personal information as well as an accurate velocity of movement, which is hard to achieve due to sensor inaccuracy. In this paper, we evaluated regression-based models to improve the precision of both step and EE estimation. For this purpose, data from seven activity trackers and two reference devices were collected from 20 young adult volunteers wearing all devices at once in three different tests, namely 60-minute office work, 6-hour overall activity and 60-minute walking. The reference data were used to create regression models for each device, and the relative percentage errors of the adjusted values were then statistically compared to those of the original values. The effectiveness of the regression models was determined based on the result of a statistical test. During the walking period, EE measurement was improved in all devices, and step measurement was also improved in five of them. The results show that improvement of EE estimation is possible with only a low-cost implementation of a fitted model over the collected data, e.g. in the app or in the corresponding service back-end. PMID:26736578
ERIC Educational Resources Information Center
Bulcock, J. W.
The problem of model estimation when the data are collinear was examined. Though the ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
Parental Divorce and Children's Adjustment.
Lansford, Jennifer E
2009-03-01
This article reviews the research literature on links between parental divorce and children's short-term and long-term adjustment. First, I consider evidence regarding how divorce relates to children's externalizing behaviors, internalizing problems, academic achievement, and social relationships. Second, I examine timing of the divorce, demographic characteristics, children's adjustment prior to the divorce, and stigmatization as moderators of the links between divorce and children's adjustment. Third, I examine income, interparental conflict, parenting, and parents' well-being as mediators of relations between divorce and children's adjustment. Fourth, I note the caveats and limitations of the research literature. Finally, I consider notable policies related to grounds for divorce, child support, and child custody in light of how they might affect children's adjustment to their parents' divorce.
Adjustment versus no adjustment when using adjustable sutures in strabismus surgery
Liebermann, Laura; Hatt, Sarah R.; Leske, David A.; Holmes, Jonathan M.
2013-01-01
Purpose To compare long-term postoperative outcomes when performing an adjustment to achieve a desired immediate postoperative alignment versus simply tying off at the desired immediate postoperative alignment when using adjustable sutures for strabismus surgery. Methods We retrospectively identified 89 consecutive patients who underwent a reoperation for horizontal strabismus using adjustable sutures and also had a 6-week and 1-year outcome examination. In each case, the intent of the surgeon was to tie off and only to adjust if the patient was not within the intended immediate postoperative range. Postoperative success was predefined based on angle of misalignment and diplopia at distance and near. Results Of the 89 patients, 53 (60%) were adjusted and 36 (40%) were tied off. Success rates were similar between patients who were simply tied off immediately after surgery and those who were adjusted. At 6 weeks, the success rate was 64% for the nonadjusted group versus 81% for the adjusted group (P = 0.09; difference of 17%; 95% CI, −2% to 36%). At 1 year, the success rate was 67% for the nonadjusted group versus 77% for the adjusted group (P = 0.3; difference of 11%; 95% CI, −8% to 30%). Conclusions Performing an adjustment to obtain a desired immediate postoperative alignment did not yield inferior long-term outcomes to those obtained by tying off to obtain that initial alignment. If patients who were outside the desired immediate postoperative range had not been adjusted, it is possible that their long-term outcomes would have been worse; therefore, overall, an adjustable approach may be superior to a nonadjustable approach. PMID:23415035
Risk-adjusted monitoring of survival times.
Sego, Landon H; Reynolds, Marion R; Woodall, William H
2009-04-30
We consider the monitoring of surgical outcomes, where each patient has a different risk of post-operative mortality due to risk factors that exist prior to the surgery. We propose a risk-adjusted (RA) survival time CUSUM chart (RAST CUSUM) for monitoring a continuous, time-to-event variable that may be right-censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart with the RA Bernoulli CUSUM chart using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden increase in the odds of mortality than the RA Bernoulli CUSUM chart, especially when the fraction of censored observations is relatively low or when a small increase in the odds of mortality occurs. We also discuss the impact of the amount of training data used to estimate chart parameters as well as the implementation of the RAST CUSUM chart during prospective monitoring.
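The RA Bernoulli CUSUM comparator in this abstract can be sketched in a few lines: each patient contributes a log-likelihood-ratio score built from their individual pre-operative risk, and the chart signals when the cumulative sum crosses a threshold. The odds ratio and threshold below are illustrative choices, not the paper's design values.

```python
import math

def ra_bernoulli_cusum(outcomes, risks, odds_ratio=2.0, threshold=4.5):
    """One-sided risk-adjusted Bernoulli CUSUM; returns the first signal time.

    outcomes: True if the patient died, False otherwise.
    risks: each patient's pre-operative mortality probability.
    odds_ratio, threshold: illustrative design choices for this sketch.
    """
    w = 0.0
    for t, (died, p) in enumerate(zip(outcomes, risks), start=1):
        # Mortality probability under the out-of-control odds ratio
        p_alt = odds_ratio * p / (1 - p + odds_ratio * p)
        # Log-likelihood-ratio score for this patient's observed outcome
        score = math.log(p_alt / p) if died else math.log((1 - p_alt) / (1 - p))
        w = max(0.0, w + score)  # one-sided chart: never drops below zero
        if w >= threshold:
            return t
    return None  # no signal within the monitored sequence
```

The survival-time (RAST) chart studied in the paper replaces the Bernoulli likelihood with an accelerated failure time likelihood for possibly right-censored times, which is what makes it more efficient when censoring is low.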
Fitts' Law in early postural adjustments.
Bertucco, M; Cesari, P; Latash, M L
2013-02-12
We tested a hypothesis that the classical relation between movement time and index of difficulty (ID) in quick pointing action (Fitts' Law) reflects processes at the level of motor planning. Healthy subjects stood on a force platform and performed quick and accurate hand movements into targets of different size located at two distances. The movements were associated with early postural adjustments that are assumed to reflect motor planning processes. The short distance did not require trunk rotation, while the long distance did. As a result, movements over the long distance were associated with substantial Coriolis forces. Movement kinematics and contact forces and moments recorded by the platform were studied. Movement time scaled with ID for both movements. However, the data could not be fitted with a single regression: Movements over the long distance had a larger intercept corresponding to movement times about 140 ms longer than movements over the shorter distance. The magnitude of postural adjustments prior to movement initiation scaled with ID for both short and long distances. Our results provide strong support for the hypothesis that Fitts' Law emerges at the level of motor planning, not at the level of corrections of ongoing movements. They show that, during natural movements, changes in movement distance may lead to changes in the relation between movement time and ID, for example when the contribution of different body segments to the movement varies and when the action of Coriolis force may require an additional correction of the movement trajectory. PMID:23211560
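The relationship tested in this abstract can be written down directly: MT = a + b·ID, with a distance-specific intercept. The sketch below uses the classic Fitts formulation ID = log2(2D/W) (the paper's exact ID definition may differ), and all coefficient values are made up, chosen only so the two intercepts differ by the ~140 ms the abstract reports.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Classic Fitts ID = log2(2D/W), in bits."""
    return math.log2(2 * distance / width)

# Illustrative coefficients (made up): separate intercepts for the short- and
# long-distance conditions, differing by ~140 ms, with a shared slope b.
a_short, a_long, b = 0.25, 0.39, 0.10  # seconds, seconds, seconds per bit

def movement_time(distance: float, width: float, intercept: float) -> float:
    """Fitts' Law prediction MT = a + b * ID for one distance condition."""
    return intercept + b * index_of_difficulty(distance, width)
```

The finding that a single regression cannot fit both distances corresponds here to needing two intercepts: at equal ID, the long-distance movement is predicted to take about 0.14 s longer.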
Suppression Situations in Multiple Linear Regression
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…
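For a two-predictor regression, suppression can be seen directly from the standard formula for the squared multiple correlation in terms of the pairwise correlations. A small numeric sketch (textbook formula, illustrative values, not taken from the article):

```python
def r2_two_predictors(r1y, r2y, r12):
    """R^2 for a standardized two-predictor regression, from the
    pairwise correlations r(x1,y), r(x2,y), r(x1,x2)."""
    return (r1y**2 + r2y**2 - 2 * r1y * r2y * r12) / (1 - r12**2)

# Classical suppression: x2 is uncorrelated with y (r2y = 0), yet adding
# it raises R^2 above what x1 achieves alone.
print(round(r2_two_predictors(0.5, 0.0, 0.5), 3))  # → 0.333
print(0.5 ** 2)                                    # → 0.25 (x1 alone)
```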
Deriving the Regression Equation without Using Calculus
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
2004-01-01
Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…
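The calculus-free derivation leads to the familiar closed-form expressions for the slope and intercept, which students can compute directly. A minimal sketch of those formulas (standard results, not specific to this article):

```python
def least_squares(xs, ys):
    """Closed-form least-squares line y = a + b*x.

    The slope Sxy/Sxx can be derived algebraically (e.g. by completing
    the square) without calculus."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = ybar - b * xbar      # the line passes through (xbar, ybar)
    return a, b

a, b = least_squares([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # → 1.0 2.0 (the data lie exactly on y = 1 + 2x)
```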
A Practical Guide to Regression Discontinuity
ERIC Educational Resources Information Center
Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard
2012-01-01
Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…
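In the sharp RD design described above, the impact estimate is the jump in the regression line at the cut-point: fit separate lines on either side of the cutoff and difference their intercepts. A minimal sketch on hypothetical data with a built-in jump of 5 (illustrative only; a real analysis would restrict to a bandwidth around the cutoff and use robust inference):

```python
import numpy as np

def rd_estimate(rating, outcome, cutoff):
    """Sharp RD: fit lines below/above the cutoff on the centered rating;
    the impact estimate is the difference in intercepts at the cutoff."""
    x = np.asarray(rating, float) - cutoff
    y = np.asarray(outcome, float)
    below, above = x < 0, x >= 0
    b0 = np.polyfit(x[below], y[below], 1)[1]  # intercept just left
    b1 = np.polyfit(x[above], y[above], 1)[1]  # intercept just right
    return b1 - b0

# Hypothetical ratings with a true treatment jump of 5 at cutoff 50
x = np.arange(30, 71)
y = 0.5 * x + 5.0 * (x >= 50)
print(round(rd_estimate(x, y, 50), 6))  # → 5.0
```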
Dealing with Outliers: Robust, Resistant Regression
ERIC Educational Resources Information Center
Glasser, Leslie
2007-01-01
Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…
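One well-known median-selection method of the kind alluded to above is the Theil–Sen estimator: take the median of all pairwise slopes, then a median-based intercept. A minimal sketch (the abstract does not name a specific estimator; Theil–Sen is used here as a representative robust, resistant method, with illustrative data):

```python
from itertools import combinations
from statistics import median

def theil_sen(xs, ys):
    """Theil-Sen line: slope = median of all pairwise slopes,
    intercept = median of (y - slope*x). Resistant to outliers."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i, j in combinations(range(len(xs)), 2)
              if xs[j] != xs[i]]
    b = median(slopes)
    a = median(y - b * x for x, y in zip(xs, ys))
    return a, b

# One wild outlier barely moves the fit, unlike least squares
xs = [1, 2, 3, 4, 5, 6]
ys = [2, 4, 6, 8, 10, 60]   # last point is an outlier off y = 2x
a, b = theil_sen(xs, ys)
print(a, b)  # → 0.0 2.0
```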
Cross-Validation, Shrinkage, and Multiple Regression.
ERIC Educational Resources Information Center
Hynes, Kevin
One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation--is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…
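A common formula-based shrinkage correction of the kind discussed in this literature is the Wherry-type adjusted R², which deflates the sample R² for the number of predictors. A minimal sketch (one standard adjustment; the paper may discuss other estimators):

```python
def adjusted_r2(r2, n, k):
    """Wherry-type shrinkage estimate of the population R^2:
    corrects the sample R^2 for n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Small sample, many predictors: the sample R^2 shrinks noticeably
print(round(adjusted_r2(0.50, n=30, k=5), 3))  # → 0.396
```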
Regression Analysis: Legal Applications in Institutional Research
ERIC Educational Resources Information Center
Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.
2008-01-01
This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…
A Simulation Investigation of Principal Component Regression.
ERIC Educational Resources Information Center
Allen, David E.
Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
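Principal component regression addresses such multicollinearity by regressing the response on the leading principal component scores of the predictors rather than on the correlated predictors themselves. A minimal sketch on hypothetical, nearly collinear data (illustrative only; component selection in practice is itself a design choice):

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: project centered predictors onto
    the top-k principal components, regress y on the component scores,
    then map the coefficients back to the original predictors."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = PCs
    scores = Xc @ Vt[:k].T
    gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    return Vt[:k].T @ gamma

# Two nearly collinear predictors; OLS coefficients would be unstable,
# but PCR with k=1 recovers the shared effect cleanly.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 1e-6 * rng.normal(size=200)   # almost a copy of x1
X = np.column_stack([x1, x2])
y = 3 * x1 + 3 * x2
beta = pcr_fit(X, y, k=1)
print(np.round(beta, 3))  # both coefficients near 3
```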
Incremental Net Effects in Multiple Regression
ERIC Educational Resources Information Center
Lipovetsky, Stan; Conklin, Michael
2005-01-01
A regular problem in regression analysis is estimating the comparative importance of the predictors in the model. This work considers the 'net effects', or shares of the predictors in the coefficient of multiple determination, which is a widely used characteristic of the quality of a regression model. Estimation of the net effects can be a…
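In the standardized regression, the net effect of predictor j is its standardized coefficient times its correlation with the response, and these shares sum exactly to R². A minimal sketch of that decomposition on simulated data (the identity is standard; the article's own estimation refinements are not reproduced here):

```python
import numpy as np

def net_effects(X, y):
    """Net effect of predictor j = (standardized beta_j) * corr(x_j, y).
    The net effects sum to the R^2 of the multiple regression."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    r = Xs.T @ ys / len(ys)   # correlation of each predictor with y
    return beta * r           # shares of R^2, one per predictor

rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = rng.normal(size=500)
y = 2 * x1 + x2 + rng.normal(size=500)
effects = net_effects(np.column_stack([x1, x2]), y)
print(effects.sum())  # equals the model's R^2 (about 5/6 here)
```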