#### Sample records for adjusted Poisson regression

1. Understanding Poisson regression.

PubMed

Hayat, Matthew J; Higgins, Melinda

2014-04-01

Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have approached count data either as if they were continuous and normally distributed or by dichotomizing the counts into the categories of occurred or did not occur. These outdated methods have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes.

2. Estimation of count data using mixed Poisson, generalized Poisson and finite Poisson mixture regression models

Zamani, Hossein; Faroughi, Pouya; Ismail, Noriszura

2014-06-01

This study relates the Poisson, mixed Poisson (MP), generalized Poisson (GP) and finite Poisson mixture (FPM) regression models through the mean-variance relationship and suggests the application of these models to overdispersed count data. As an illustration, the regression models are fitted to the US skin care count data. The results indicate that the FPM regression model is the best model, since it provides the largest log likelihood and the smallest AIC, followed by the Poisson-inverse Gaussian (PIG), GP and negative binomial (NB) regression models. The results also show that the NB, PIG and GP regression models provide similar results.

3. Modelling of filariasis in East Java with Poisson regression and generalized Poisson regression models

Darnah

2016-04-01

Poisson regression is used when the response variable is count data based on the Poisson distribution. The Poisson distribution assumes equidispersion. In practice, count data are often over- or underdispersed, making Poisson regression inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, and consequently give misleading inference about them. This paper suggests the generalized Poisson regression model to handle overdispersion and underdispersion in the Poisson regression model. Both the Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Based on the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. Since the Poisson regression model exhibits overdispersion, generalized Poisson regression is used instead. The best generalized Poisson regression model shows that the influential factor for filariasis is the percentage of families who do not have a healthy house. The interpretation of the model is that each additional 1 percent of families without a healthy house adds one filariasis patient.

4. Background stratified Poisson regression analysis of cohort data.

PubMed

Richardson, David B; Langholz, Bryan

2012-03-01

Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.

5. Poisson Mixture Regression Models for Heart Disease Prediction.

PubMed

Mufudza, Chipo; Erol, Hamza

2016-01-01

Early heart disease control can be achieved through efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion (BIC) value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model.

6. Poisson Regression Analysis of Illness and Injury Surveillance Data

SciTech Connect

Frome, E. L.; Watkins, J. P.; Ellis, E. D.

2012-12-12

The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational database and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in tabular and graphical form, and interpretation of the model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine whether interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate; in the second, the score test indicates considerable over-dispersion, and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.

7. Testing approaches for overdispersion in Poisson regression versus the generalized Poisson model.

PubMed

Yang, Zhao; Hardin, James W; Addy, Cheryl L; Vuong, Quang H

2007-08-01

Overdispersion is a common phenomenon in Poisson modeling, and the negative binomial (NB) model is frequently used to account for overdispersion. Testing approaches (Wald test, likelihood ratio test (LRT), and score test) for overdispersion in the Poisson regression versus the NB model are available. Because the generalized Poisson (GP) model is similar to the NB model, we consider the former as an alternate model for overdispersed count data. The score test has an advantage over the LRT and the Wald test in that the score test only requires that the parameter of interest be estimated under the null hypothesis. This paper proposes a score test for overdispersion based on the GP model and compares the power of the test with the LRT and Wald tests. A simulation study indicates that the score test based on the asymptotic standard normal distribution is more appropriate in practical applications because of its higher empirical power; however, it underestimates the nominal significance level, especially in small-sample situations. Examples illustrate the results of comparing the candidate tests between the Poisson and GP models. A bootstrap test is also proposed to adjust for the underestimation of the nominal level in the score statistic when the sample size is small. The simulation study indicates that the bootstrap test has a significance level closer to the nominal size and has uniformly greater power than the score test based on the asymptotic standard normal distribution. From a practical perspective, we suggest that if the score test gives even a weak indication that the Poisson model is inappropriate, say at the 0.10 significance level, the more accurate bootstrap procedure is a better test for assessing whether the GP model is more appropriate than the Poisson model. Finally, the Vuong test is illustrated to choose between the GP and NB2 models for the same dataset.

8. Regression models for mixed Poisson and continuous longitudinal data.

PubMed

Yang, Ying; Kang, Jian; Mao, Kai; Zhang, Jie

2007-09-10

In this article we develop regression models that are flexible in two respects: they evaluate the influence of covariates on mixed Poisson and continuous responses, and they evaluate how the correlation between the Poisson response and the continuous response changes over time. A scenario is proposed for dealing with regression models of mixed continuous and Poisson responses when heterogeneous variance and correlation that change over time are present. Our general approach is first to build a joint marginal model and to check whether the variance and correlation change over time via a likelihood ratio test. If they do, we apply a suitable data transformation to properly evaluate the influence of the covariates on the mixed responses. The proposed methods are applied to the Interstitial Cystitis Data Base (ICDB) cohort study, where we find that the positive correlations change significantly over time, which suggests that heterogeneous variances should not be ignored in modelling and inference.

9. Reducing Poisson noise and baseline drift in X-ray spectral images with bootstrap Poisson regression and robust nonparametric regression.

PubMed

Zhu, Feng; Qin, Binjie; Feng, Weiyue; Wang, Huajian; Huang, Shaosen; Lv, Yisong; Chen, Yong

2013-03-21

X-ray spectral imaging provides quantitative imaging of trace elements in a biological sample with high sensitivity. We propose a novel algorithm to improve the signal-to-noise ratio (SNR) of X-ray spectral images that have low photon counts. First, we estimate the image data area that belongs to the homogeneous parts through confidence interval testing. Then, we apply Poisson regression, through its maximum likelihood estimation, on this area to estimate the true photon counts from the Poisson-noise-corrupted data. Unlike other denoising methods based on regression analysis, we use the bootstrap resampling method to ensure the accuracy of the regression estimation. Finally, we use a robust local nonparametric regression method to estimate the baseline and subsequently subtract it from the X-ray spectral data to further improve the SNR. Experiments on several real samples show that the proposed method performs better than some state-of-the-art approaches in ensuring accuracy and precision for quantitative analysis of the different trace elements in a standard reference biological sample.

10. Modeling the number of car thefts using Poisson regression

Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

2016-10-01

Regression analysis is one of the most popular statistical methods for expressing the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model, focusing on car thefts that occurred in the districts of Peninsular Malaysia. Two groups of factors are considered, namely district descriptive factors and socio-demographic factors. The results show that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged 25 to 64, number of employed persons and number of unemployed persons are the most influential factors affecting car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners in reducing and limiting car theft cases in Peninsular Malaysia.

11. Extension of the modified Poisson regression model to prospective studies with correlated binary data.

PubMed

Zou, G Y; Donner, Allan

2013-12-01

The Poisson regression model using a sandwich variance estimator has become a viable alternative to the logistic regression model for the analysis of prospective studies with independent binary outcomes. The primary advantage of this approach is that it readily provides covariate-adjusted risk ratios and associated standard errors. In this article, the model is extended to studies with correlated binary outcomes as arise in longitudinal or cluster randomization studies. The key step involves a cluster-level grouping strategy for the computation of the middle term in the sandwich estimator. For a single binary exposure variable without covariate adjustment, this approach results in risk ratio estimates and standard errors that are identical to those found in the survey sampling literature. Simulation results suggest that it is reliable for studies with correlated binary data, provided the total number of clusters is at least 50. Data from observational and cluster randomized studies are used to illustrate the methods.

12. Mixed-effects Poisson regression analysis of adverse event reports

PubMed Central

Gibbons, Robert D.; Segawa, Eisuke; Karabatsos, George; Amatya, Anup K.; Bhaumik, Dulal K.; Brown, C. Hendricks; Kapur, Kush; Marcus, Sue M.; Hur, Kwan; Mann, J. John

2008-01-01

A new statistical methodology is developed for the analysis of spontaneous adverse event (AE) reports from post-marketing drug surveillance data. The method involves both empirical Bayes (EB) and fully Bayes estimation of rate multipliers for each drug within a class of drugs, for a particular AE, based on a mixed-effects Poisson regression model. Both parametric and semiparametric models for the random-effect distribution are examined. The method is applied to data from the Food and Drug Administration (FDA)'s Adverse Event Reporting System (AERS) on the relationship between antidepressants and suicide. We obtain point estimates and 95 per cent confidence (posterior) intervals for the rate multiplier for each drug (e.g. antidepressants), which can be used to determine whether a particular drug has an increased risk of association with a particular AE (e.g. suicide). Confidence (posterior) intervals that do not include 1.0 provide evidence for either significant protective or harmful associations of the drug and the adverse effect. We also examine EB, parametric Bayes, and semiparametric Bayes estimators of the rate multipliers and associated confidence (posterior) intervals. Results of our analysis of the FDA AERS data revealed that newer antidepressants are associated with lower rates of suicide adverse event reports compared with older antidepressants. We recommend improvements to the existing AERS system, which are likely to improve its public health value as an early warning system.

13. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

PubMed

Yelland, Lisa N; Salter, Amy B; Ryan, Philip

2011-10-15

Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.

14. On regression adjustment for the propensity score.

PubMed

Vansteelandt, S; Daniel, R M

2014-10-15

Propensity scores are widely adopted in observational research because they enable adjustment for high-dimensional confounders without requiring models for their association with the outcome of interest. The results of statistical analyses based on stratification, matching or inverse weighting by the propensity score are therefore less susceptible to model extrapolation than those based solely on outcome regression models. This is attractive because extrapolation in outcome regression models may be alarming, yet difficult to diagnose, when the exposed and unexposed individuals have very different covariate distributions. Standard regression adjustment for the propensity score forms an alternative to the aforementioned propensity score methods, but the benefits of this are less clear because it still involves modelling the outcome in addition to the propensity score. In this article, we develop novel insights into the properties of this adjustment method. We demonstrate that standard tests of the null hypothesis of no exposure effect (based on robust variance estimators), as well as particular standardised effects obtained from such adjusted regression models, are robust against misspecification of the outcome model when a propensity score model is correctly specified; they are thus not vulnerable to the aforementioned problem of extrapolation. We moreover propose efficient estimators for these standardised effects, which retain a useful causal interpretation even when the propensity score model is misspecified, provided the outcome regression model is correctly specified.

15. Poisson regression for modeling count and frequency outcomes in trauma research.

PubMed

Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T

2008-10-01

The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.

16. Fuzzy classifier based support vector regression framework for Poisson ratio determination

Asoodeh, Mojtaba; Bagheripour, Parisa

2013-09-01

Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio that produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and Poisson ratio. Although satisfactory results were obtained from an individual SVR model, it had flaws of overestimation at low Poisson ratios and underestimation at high Poisson ratios. These errors were eliminated through implementation of a fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved the accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicate that SVR-predicted Poisson ratio values are in good agreement with measured values.

17. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

PubMed

Chatzis, Sotirios P; Andreou, Andreas S

2015-11-01

Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using publicly available benchmark data sets.

18. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

PubMed

Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

2011-01-01

Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that previous studies found significant discrepancies between them. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of the bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model can not only model paired data with correlation, but also handle under- or over-dispersed data sets. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ₁, λ₂ and λ₃). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVCs and carcass removals. It is found that increases in some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs.

19. Effect of air pollution on lung cancer: A Poisson regression model based on vital statistics

SciTech Connect

Tango, Toshiro

1994-11-01

This article describes a Poisson regression model for time trends of mortality to detect the long-term effects of common levels of air pollution on lung cancer, in which adjustment for cigarette smoking is not always necessary. The main hypothesis to be tested in the model is that if long-term, common-level air pollution had an effect on lung cancer, the death rate from lung cancer could be expected to increase gradually at a higher rate in the region with relatively high levels of air pollution than in the region with low levels, and that this trend would not be expected for other control diseases in which cigarette smoking is a risk factor. Using this approach, we analyzed the trend of mortality in females aged 40 to 79, from lung cancer and two control diseases, ischemic heart disease and cerebrovascular disease, based on vital statistics in 23 wards of the Tokyo metropolitan area for 1972 to 1988. Ward-specific mean levels per day of SO₂ and NO₂ from 1974 through 1976, estimated by Makino (1978), were used as the ward-specific exposure measure of air pollution. No data on tobacco consumption in each ward are available. Our analysis supported the existence of long-term effects of air pollution on lung cancer.

20. A marginalized zero-inflated Poisson regression model with overall exposure effects.

PubMed

Long, D Leann; Preisser, John S; Herring, Amy H; Golin, Carol E

2014-12-20

The zero-inflated Poisson (ZIP) regression model is often employed in public health research to examine the relationships between exposures of interest and a count outcome exhibiting many zeros, in excess of the amount expected under sampling from a Poisson distribution. The regression coefficients of the ZIP model have latent class interpretations, which correspond to a susceptible subpopulation at risk for the condition with counts generated from a Poisson distribution and a non-susceptible subpopulation that provides the extra or excess zeros. The ZIP model parameters, however, are not well suited for inference targeted at marginal means, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. We develop a marginalized ZIP model approach for independent responses to model the population mean count directly, allowing straightforward inference for overall exposure effects and empirical robust variance estimation for overall log-incidence density ratios. Through simulation studies, the performance of maximum likelihood estimation of the marginalized ZIP model is assessed and compared with other methods of estimating overall exposure effects. The marginalized ZIP model is applied to a recent study of a motivational interviewing-based safer sex counseling intervention, designed to reduce unprotected sexual act counts.

21. A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.

PubMed

Li, Chin-Shang; Tu, Wanzhu

2007-05-01

In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed model performs well and that a misspecified parametric model has much reduced power. An example is given.

22. Poisson regression analysis of mortality among male workers at a thorium-processing plant

SciTech Connect

Liu, Zhiyuan; Lee, Tze-San; Kotek, T.J.

1991-12-31

Analyses of mortality among a cohort of 3119 male workers employed between 1915 and 1973 at a thorium-processing plant were updated to the end of 1982. Of the whole group, 761 men were deceased and 2161 men were still alive, while 197 men were lost to follow-up. A total of 250 deaths was added to the 511 deaths observed in the previous study. The standardized mortality ratio (SMR) for all causes of death was 1.12 with 95% confidence interval (CI) of 1.05-1.21. The SMRs were also significantly increased for all malignant neoplasms (SMR = 1.23, 95% CI = 1.04-1.43) and lung cancer (SMR = 1.36, 95% CI = 1.02-1.78). Poisson regression analysis was employed to evaluate the joint effects of job classification, duration of employment, time since first employment, age and year at first employment on mortality of all malignant neoplasms and lung cancer. A comparison of internal and external analyses with the Poisson regression model was also conducted and showed no obvious difference in fitting the data on lung cancer mortality of the thorium workers. The results of the multivariate analysis showed that there was no significant effect of all the study factors on mortality due to all malignant neoplasms and lung cancer. Therefore, further study is needed for the former thorium workers.
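The external-comparison summary used in this study, an SMR with an approximate 95% CI from the normal approximation on the log scale, can be reproduced from the numbers in the abstract:

```python
# SMR and approximate 95% CI from the abstract's reported numbers.
import numpy as np

observed = 761                      # all-cause deaths in the cohort
smr = 1.12                          # reported SMR, so expected = observed / smr
expected = observed / smr
se_log = 1.0 / np.sqrt(observed)    # approximate SE of log(SMR)
lo = smr * np.exp(-1.96 * se_log)
hi = smr * np.exp(1.96 * se_log)
# lo and hi roughly reproduce the reported 95% CI of 1.05-1.21.
```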

3. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep.

PubMed

Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

2008-01-01

Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age-related processes seem to drive the number of dark spots in this breed of sheep.
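The excess-zero mechanism compared in this record can be illustrated with a minimal sketch in plain Python (parameter values are hypothetical, not taken from the sheep data): a ZIP distribution mixes a point mass at zero with an ordinary Poisson, so it always assigns at least as much probability to a zero count as a Poisson with the same rate.

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Ordinary Poisson probability mass function."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def zip_pmf(k: int, lam: float, pi: float) -> float:
    """Zero-inflated Poisson: with probability pi the count is a
    'structural' zero; otherwise it is drawn from Poisson(lam)."""
    if k == 0:
        return pi + (1.0 - pi) * poisson_pmf(0, lam)
    return (1.0 - pi) * poisson_pmf(k, lam)

lam, pi = 2.0, 0.3              # illustrative rate and zero-inflation share
p0_poisson = poisson_pmf(0, lam)
p0_zip = zip_pmf(0, lam, pi)    # strictly larger than p0_poisson when pi > 0
```

Because the ZIP mean is (1 - pi) * lam, the inflation component shifts probability mass from positive counts to zero, which is exactly the pattern an excess-zeros dataset exhibits.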

4. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

PubMed Central

Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

2008-01-01

Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age-related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072

5. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate Poisson regression

Winahju, W. S.; Mukarromah, A.; Putri, S.

2015-03-01

Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy is an important public health problem in Indonesia because its morbidity is quite high. According to WHO data published in 2014, Indonesia had the highest number of new leprosy patients in 2012 after India and Brazil, contributing 18,994 cases (8.7% of the world total). This places Indonesia as the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: pausibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of pausibacillary leprosy. This paper discusses modeling the numbers of multibacillary and pausibacillary leprosy patients as response variables. Because these responses are count variables, modeling is conducted using the bivariate Poisson regression method. The units of observation are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have significant effects.

6. What's the Risk? A Simple Approach for Estimating Adjusted Risk Measures from Nonlinear Models Including Logistic Regression

PubMed Central

Kleinman, Lawrence C; Norton, Edward C

2009-01-01

Objective To develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common. Study Design Regression risk analysis estimates were compared with internal standards as well as with Mantel–Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR. Data Collection Data sets produced using Monte Carlo simulations. Principal Findings Regression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases. Conclusions Regression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case–control studies, particularly when outcomes are common or effect size is large. PMID:18793213
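The core of the regression risk analysis idea described above can be sketched via marginal standardization: average the fitted model's predicted risks over the observed covariate distribution with treatment set on, then off, and compare. The logistic coefficients and covariate values below are hypothetical, not from the paper's simulations.

```python
import math

def inv_logit(x: float) -> float:
    """Inverse logit: convert a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# hypothetical fitted logistic model: logit(p) = b0 + b1*treated + b2*age
b0, b1, b2 = -2.0, 0.8, 0.03
ages = [30, 40, 50, 55, 60, 70]          # observed confounder values

# standardize: predict everyone's risk as treated, then as untreated
risk1 = sum(inv_logit(b0 + b1 + b2 * a) for a in ages) / len(ages)
risk0 = sum(inv_logit(b0 + b2 * a) for a in ages) / len(ages)

adjusted_rr = risk1 / risk0              # adjusted risk ratio (ARR)
adjusted_rd = risk1 - risk0              # adjusted risk difference
crude_or = math.exp(b1)                  # the AOR, which exceeds the ARR here
```

With a common outcome, as in this toy setup, the AOR noticeably overstates the adjusted risk ratio, which is the divergence the abstract warns about.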

7. Analyzing Seasonal Variations in Suicide With Fourier Poisson Time-Series Regression: A Registry-Based Study From Norway, 1969-2007.

PubMed

Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo

2015-08-01

Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. Both the model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components.
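A minimal sketch of how Fourier covariates for such a seasonal Poisson regression are typically constructed (the period and number of harmonics here are illustrative choices, not the paper's):

```python
import math

def fourier_terms(day_of_year: int, period: float = 365.25, harmonics: int = 2):
    """Sin/cos covariate pairs for one observation; entering these as
    regressors lets a Poisson model capture a smooth seasonal pattern."""
    t = 2.0 * math.pi * day_of_year / period
    terms = []
    for h in range(1, harmonics + 1):
        terms.append(math.sin(h * t))
        terms.append(math.cos(h * t))
    return terms

row = fourier_terms(182)   # mid-year observation, 2 harmonics -> 4 covariates
```

A declining seasonality, as tested in the study, can then be modelled by interacting these terms with time, so the seasonal amplitude is allowed to shrink over the observation period.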

8. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia

PubMed Central

Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.

2012-01-01

A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public. PMID:23393408

9. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized poisson regression model

PubMed Central

2013-01-01

Background Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. Methods The data is extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample which is based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified to study the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother’s education, father’s education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions Consistencies of our findings in light of many other studies suggest that the GPR model is an ideal alternative to other statistical models to analyse the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
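The under-dispersion property that motivates the GPR model can be checked numerically from the generalized Poisson pmf (Consul's parameterization; the parameter values below are illustrative). A negative dispersion parameter yields variance smaller than the mean, which an ordinary Poisson model cannot represent.

```python
import math

def gp_pmf(y: int, theta: float, alpha: float) -> float:
    """Generalized Poisson pmf (Consul): valid while theta + alpha*y > 0;
    treated as zero past that point (negative alpha truncates the support)."""
    lam = theta + alpha * y
    if lam <= 0:
        return 0.0
    return theta * lam ** (y - 1) * math.exp(-lam) / math.factorial(y)

theta, alpha = 2.0, -0.2                 # alpha < 0 -> under-dispersion
probs = [gp_pmf(y, theta, alpha) for y in range(30)]
mean = sum(y * p for y, p in enumerate(probs))
var = sum(y * y * p for y, p in enumerate(probs)) - mean ** 2
```

For these values the mean is about theta/(1-alpha) = 1.67 while the variance is about theta/(1-alpha)**3 = 1.16, i.e. variance < mean, matching the abstract's rationale for preferring GPR over Poisson or negative binomial here.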

10. The analysis of incontinence episodes and other count data in patients with overactive bladder by Poisson and negative binomial regression.

PubMed

Martina, R; Kay, R; van Maanen, R; Ridder, A

2015-01-01

Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well.
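The overdispersion the abstract discusses is visible directly in the negative binomial's mean-variance relationship: for a given mean mu and dispersion k, the variance is mu + mu**2/k, strictly above the Poisson's variance of mu. A small deterministic check with illustrative parameters (computed in log space to avoid overflow):

```python
import math

def nb_pmf(y: int, mu: float, k: float) -> float:
    """Negative binomial pmf parameterized by mean mu and dispersion k,
    so that variance = mu + mu**2 / k."""
    log_p = (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
             + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))
    return math.exp(log_p)

mu, k = 5.0, 2.0
probs = [nb_pmf(y, mu, k) for y in range(200)]
mean = sum(y * p for y, p in enumerate(probs))
var = sum(y * y * p for y, p in enumerate(probs)) - mean ** 2
```

In a regression setting, exponentiated coefficients from such a model are interpreted as rate ratios, which is the clinically interpretable treatment-effect scale the abstract highlights.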

11. Does attitude matter in computer use in Australian general practice? A zero-inflated Poisson regression analysis.

PubMed

2011-01-01

The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. This study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using zero-inflated Poisson (ZIP) regression modelling. About 17% of GPs were not using a computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief about the usefulness of computers was positively associated with effective computer use. Being a female GP or working in a partnership or group practice increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.

12. Assessing Longitudinal Change: Adjustment for Regression to the Mean Effects

ERIC Educational Resources Information Center

Rocconi, Louis M.; Ethington, Corinna A.

2009-01-01

Pascarella (J Coll Stud Dev 47:508-520, 2006) has called for an increase in use of longitudinal data with pretest-posttest design when studying effects on college students. However, such designs that use multiple measures to document change are vulnerable to an important threat to internal validity, regression to the mean. Herein, we discuss a…

13. Coercively Adjusted Auto Regression Model for Forecasting in Epilepsy EEG

PubMed Central

Kim, Sun-Hee; Faloutsos, Christos; Yang, Hyung-Jeong

2013-01-01

Recently, data with complex characteristics such as epilepsy electroencephalography (EEG) time series has emerged. Epilepsy EEG data has special characteristics including nonlinearity, nonnormality, and nonperiodicity. Therefore, it is important to find a suitable forecasting method that covers these special characteristics. In this paper, we propose a coercively adjusted autoregression (CA-AR) method that forecasts future values from a multivariable epilepsy EEG time series. We use the technique of random coefficients, which forcefully adjusts the coefficients with −1 and 1. The fractal dimension is used to determine the order of the CA-AR model. We applied the CA-AR method, reflecting the special characteristics of the data, to forecast the future value of epilepsy EEG data. Experimental results show that when compared to previous methods, the proposed method can forecast faster and more accurately. PMID:23710252

14. Modelling the influence of temperature and rainfall on malaria incidence in four endemic provinces of Zambia using semiparametric Poisson regression.

PubMed

Shimaponda-Mataa, Nzooma M; Tembo-Mwase, Enala; Gebreslasie, Michael; Achia, Thomas N O; Mukaratirwa, Samson

2017-02-01

Although malaria morbidity and mortality have been greatly reduced globally owing to intensive control efforts, the disease remains a major contributor to the burden of disease. In Zambia, all provinces are malaria endemic. However, the transmission intensities vary mainly depending on environmental factors as they interact with the vectors. Generally in Africa, possibly due to the varying perspectives and methods used, there is variation in the relative importance of malaria risk determinants. In Zambia, the role climatic factors play in malaria case rates has not been determined jointly over space and time using robust modelling methods. This is critical considering the reversal in malaria reduction after the year 2010 and the variation by transmission zones. Using a geoadditive or structured additive semiparametric Poisson regression model, we determined the influence of climatic factors on malaria incidence in four endemic provinces of Zambia. We demonstrate a strong positive association between malaria incidence and precipitation as well as minimum temperature. The risk of malaria was 95% lower in Lusaka (ARR=0.05, 95% CI=0.04-0.06) and 68% lower in the Western Province (ARR=0.31, 95% CI=0.25-0.41) compared to Luapula Province. North-western Province did not vary from Luapula Province. The effects of geographical region are clearly demonstrated by the unique behaviour and effects of minimum and maximum temperatures in the four provinces. Environmental factors such as landscape in urbanised places may also be playing a role.

15. Longitudinal Poisson regression to evaluate the epidemiology of Cryptosporidium, Giardia, and fecal indicator bacteria in coastal California wetlands.

PubMed

Hogan, Jennifer N; Daniels, Miles E; Watson, Fred G; Conrad, Patricia A; Oates, Stori C; Miller, Melissa A; Hardin, Dane; Byrne, Barbara A; Dominik, Clare; Melli, Ann; Jessup, David A; Miller, Woutrina A

2012-05-01

Fecal pathogen contamination of watersheds worldwide is increasingly recognized, and natural wetlands may have an important role in mitigating fecal pathogen pollution flowing downstream. Given that waterborne protozoa, such as Cryptosporidium and Giardia, are transported within surface waters, this study evaluated associations between fecal protozoa and various wetland-specific and environmental risk factors. This study focused on three distinct coastal California wetlands: (i) a tidally influenced slough bordered by urban and agricultural areas, (ii) a seasonal wetland adjacent to a dairy, and (iii) a constructed wetland that receives agricultural runoff. Wetland type, seasonality, rainfall, and various water quality parameters were evaluated using longitudinal Poisson regression to model effects on concentrations of protozoa and indicator bacteria (Escherichia coli and total coliform). Among wetland types, the dairy wetland exhibited the highest protozoal and bacterial concentrations, and despite significant reductions in microbe concentrations, the wetland could still be seen to influence water quality in the downstream tidal wetland. Additionally, recent rainfall events were associated with higher protozoal and bacterial counts in wetland water samples across all wetland types. Notably, detection of E. coli concentrations greater than 400 most probable number (MPN) per 100 ml was associated with higher Cryptosporidium oocyst and Giardia cyst concentrations. These findings show that natural wetlands draining agricultural and livestock operation runoff into human-utilized waterways should be considered potential sources of pathogens and that wetlands can be instrumental in reducing pathogen loads to downstream waters.

16. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

PubMed

Grøn, Randi; Gerds, Thomas A; Andersen, Per K

2016-03-30

Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population.
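The bootstrap idea behind such robust standard errors can be sketched in miniature: resample subjects, re-estimate the quantity of interest each time, and take the spread of the replicates as the standard error. The toy data (event-count/person-years pairs) and the plain nonparametric resampling scheme below are purely illustrative, not the paper's semi-parametric procedure.

```python
import random

# illustrative registry-style data: (event count, person-years) per subject
data = [(0, 2.0), (1, 3.5), (0, 1.0), (2, 4.0), (1, 2.5),
        (0, 3.0), (1, 5.0), (0, 2.2), (1, 1.8), (0, 4.5)]

def rate(sample):
    """Crude incidence rate: total events over total person-time."""
    events = sum(e for e, _ in sample)
    pyears = sum(t for _, t in sample)
    return events / pyears

random.seed(42)                              # reproducible resampling
reps = []
for _ in range(1000):
    boot = [random.choice(data) for _ in data]   # resample subjects
    reps.append(rate(boot))

m = sum(reps) / len(reps)
boot_se = (sum((r - m) ** 2 for r in reps) / (len(reps) - 1)) ** 0.5
```

Resampling whole subjects (rather than residuals or events) keeps within-subject dependence intact, which is the same motivation behind robust-variance procedures for registry data.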

17. Adjustment of regional regression equations for urban storm-runoff quality using at-site data

USGS Publications Warehouse

Barks, C.S.

1996-01-01

18. An Investigation of Nonlinear Controls and Regression-Adjusted Estimators for Variance Reduction in Computer Simulation

DTIC Science & Technology

1991-03-01

An Investigation of Nonlinear Controls and Regression-Adjusted Estimators for Variance Reduction in Computer Simulation, by Richard L. Ressler, March 1991. Dissertation advisor: Peter A.W. Lewis. This dissertation develops new techniques for variance reduction in computer simulation. It demonstrates that

19. Comparison of the Properties of Regression and Categorical Risk-Adjustment Models

PubMed Central

Averill, Richard F.; Muldoon, John H.; Hughes, John S.

2016-01-01

Clinical risk-adjustment, the ability to standardize the comparison of individuals with different health needs, is based upon 2 main alternative approaches: regression models and clinical categorical models. In this article, we examine the impact of the differences in the way these models are constructed on end user applications. PMID:26945302

20. Using Wherry's Adjusted R Squared and Mallow's C (p) for Model Selection from All Possible Regressions.

ERIC Educational Resources Information Center

Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

2000-01-01

Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…

1. Regularized Regression Versus the High-Dimensional Propensity Score for Confounding Adjustment in Secondary Database Analyses.

PubMed

Franklin, Jessica M; Eddings, Wesley; Glynn, Robert J; Schneeweiss, Sebastian

2015-10-01

Selection and measurement of confounders is critical for successful adjustment in nonrandomized studies. Although the principles behind confounder selection are now well established, variable selection for confounder adjustment remains a difficult problem in practice, particularly in secondary analyses of databases. We present a simulation study that compares the high-dimensional propensity score algorithm for variable selection with approaches that utilize direct adjustment for all potential confounders via regularized regression, including ridge regression and lasso regression. Simulations were based on 2 previously published pharmacoepidemiologic cohorts and used the plasmode simulation framework to create realistic simulated data sets with thousands of potential confounders. Performance of methods was evaluated with respect to bias and mean squared error of the estimated effects of a binary treatment. Simulation scenarios varied the true underlying outcome model, treatment effect, prevalence of exposure and outcome, and presence of unmeasured confounding. Across scenarios, high-dimensional propensity score approaches generally performed better than regularized regression approaches. However, including the variables selected by lasso regression in a regular propensity score model also performed well and may provide a promising alternative variable selection method.

2. Adjusting for Cell Type Composition in DNA Methylation Data Using a Regression-Based Approach.

PubMed

Jones, Meaghan J; Islam, Sumaiya A; Edgar, Rachel D; Kobor, Michael S

2017-01-01

Analysis of DNA methylation in a population context has the potential to uncover novel gene and environment interactions as well as markers of health and disease. In order to find such associations it is important to control for factors which may mask or alter DNA methylation signatures. Since tissue of origin and coinciding cell type composition are major contributors to DNA methylation patterns, and can easily confound important findings, it is vital to adjust DNA methylation data for such differences across individuals. Here we describe the use of a regression method to adjust for cell type composition in DNA methylation data. We specifically discuss what information is required to adjust for cell type composition and then provide detailed instructions on how to perform cell type adjustment on high dimensional DNA methylation data. This method has been applied mainly to Illumina 450K data, but can also be adapted to pyrosequencing or genome-wide bisulfite sequencing data.
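The adjustment described here amounts to regressing each methylation measure on the estimated cell-type proportions and carrying the residuals forward as the adjusted values. A one-covariate sketch with toy numbers (the methylation values and monocyte proportions are invented for illustration):

```python
def residualize(y, x):
    """Adjust y for x via simple least squares: return the residuals
    of y regressed on x (with an intercept)."""
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]

# toy example: methylation beta-values vs. estimated monocyte proportion
meth = [0.20, 0.35, 0.30, 0.50, 0.45, 0.60]
mono = [0.05, 0.10, 0.12, 0.20, 0.18, 0.25]
adj = residualize(meth, mono)
```

By construction the residuals are uncorrelated with the cell-type covariate, so downstream association tests on `adj` are no longer confounded by it; with real 450K data the same regression is run per probe with all estimated cell proportions as covariates.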

3. Procedures for adjusting regional regression models of urban-runoff quality using local data

USGS Publications Warehouse

Hoos, A.B.; Sisolak, J.K.

1993-01-01

Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
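The simplest of the four procedures, single-factor regression against the regional prediction (MAP-1F-P in the study), can be sketched as a no-intercept least-squares fit of local observations on the regional model's predictions; the fitted factor then rescales future regional predictions at local sites. The storm-load numbers below are illustrative, not from the study's data bases.

```python
def fit_single_factor(observed, predicted):
    """Least-squares slope through the origin: the single adjustment
    factor applied to regional-model predictions at local sites."""
    num = sum(o * p for o, p in zip(observed, predicted))
    den = sum(p * p for p in predicted)
    return num / den

# illustrative storm loads: local observations vs. regional-model predictions
obs  = [12.0, 8.5, 15.0, 6.0, 10.0]
pred = [10.0, 9.0, 12.0, 7.0,  9.5]
factor = fit_single_factor(obs, pred)
adjusted = [factor * p for p in pred]   # adjusted regional predictions
```

The richer procedures extend this same calibration idea: adding an intercept (regression against P), adding local covariates (P plus additional variables), or weighting the regional and local predictions together.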

4. An evaluation of bias in propensity score-adjusted non-linear regression models.

PubMed

Wan, Fei; Mitra, Nandita

2016-04-19

Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.

5. Regression Trees Identify Relevant Interactions: Can This Improve the Predictive Performance of Risk Adjustment?

PubMed

Buchner, Florian; Wasem, Jürgen; Schillo, Sonja

2017-01-01

Risk equalization formulas have been refined since their introduction about two decades ago. Because of the complexity and the abundance of possible interactions between the variables used, hardly any interactions are considered. A regression tree is used to systematically search for interactions, a methodologically new approach in risk equalization. Analyses are based on a data set of nearly 2.9 million individuals from a major German social health insurer. A two-step approach is applied: In the first step a regression tree is built on the basis of the learning data set. Terminal nodes characterized by more than one morbidity-group split represent interaction effects of different morbidity groups. In the second step the 'traditional' weighted least squares regression equation is expanded by adding interaction terms for all interactions detected by the tree, and regression coefficients are recalculated. The resulting risk adjustment formula shows an improvement in the adjusted R(2) from 25.43% to 25.81% on the evaluation data set. Predictive ratios are calculated for subgroups affected by the interactions. The R(2) improvement detected is only marginal. According to the sample-level performance measures used, omitting a considerable number of morbidity interactions entails no relevant loss in accuracy. Copyright © 2015 John Wiley & Sons, Ltd.

6. Using Quantile and Asymmetric Least Squares Regression for Optimal Risk Adjustment.

PubMed

Lorenz, Normann

2016-06-13

In this paper, we analyze optimal risk adjustment for direct risk selection (DRS). Integrating insurers' activities for risk selection into a discrete choice model of individuals' health insurance choice shows that DRS has the structure of a contest. For the contest success function (csf) used in most of the contest literature (the Tullock-csf), optimal transfers for a risk adjustment scheme have to be determined by means of a restricted quantile regression, irrespective of whether insurers are primarily engaged in positive DRS (attracting low risks) or negative DRS (repelling high risks). This is at odds with the common practice of determining transfers by means of a least squares regression. However, this common practice can be rationalized for a new csf, but only if positive and negative DRSs are equally important; if they are not, optimal transfers have to be calculated by means of a restricted asymmetric least squares regression. Using data from German and Swiss health insurers, we find considerable differences between the three types of regressions. Optimal transfers therefore critically depend on which csf represents insurers' incentives for DRS and, if it is not the Tullock-csf, whether insurers are primarily engaged in positive or negative DRS. Copyright © 2016 John Wiley & Sons, Ltd.

7. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables

PubMed Central

Abad, Cesar C. C.; Barros, Ronaldo V.; Bertuzzi, Romulo; Gagliardi, João F. L.; Lima-Silva, Adriano E.; Lambert, Mike I.

2016-01-01

8. Moment Adjusted Imputation for Multivariate Measurement Error Data with Applications to Logistic Regression

PubMed Central

Thomas, Laine; Stefanski, Leonard A.; Davidian, Marie

2013-01-01

In clinical studies, covariates are often measured with error due to biological fluctuations, device error and other sources. Summary statistics and regression models that are based on mismeasured data will differ from the corresponding analysis based on the “true” covariate. Statistical analysis can be adjusted for measurement error; however, various methods exhibit a trade-off between convenience and performance. Moment Adjusted Imputation (MAI) is a method for measurement error in a scalar latent variable that is easy to implement and performs well in a variety of settings. In practice, multiple covariates may be similarly influenced by biological fluctuations, inducing correlated multivariate measurement error. The extension of MAI to the setting of multivariate latent variables involves unique challenges. Alternative strategies are described, including a computationally feasible option that is shown to perform well. PMID:24072947

9. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

PubMed

Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

2015-12-01

Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable for certain reasons: First, the elastic net estimator is biased in selecting genes. Second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly consistent in selecting genes compared to the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification.
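The basic mechanism (an elastic-net penalty on logistic regression zeroing out uninformative genes) can be sketched with scikit-learn's generic elastic-net solver. This is not the AAElastic method of the paper, and the data are simulated; it only shows how the L1 part of the penalty performs gene selection while coefficients are estimated.

```python
# Sketch of regularized logistic regression with an elastic-net penalty for
# gene selection. Uses scikit-learn's generic estimator (not AAElastic);
# data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 200, 100                       # many genes, few samples
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                        # only the first 5 genes are informative
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.1, max_iter=5000)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_[0] != 0)  # genes kept by the L1 part
print(len(selected), "genes selected")
```

Lowering `C` (stronger penalty) or raising `l1_ratio` (more L1) makes the selected gene set sparser; the adaptive variants discussed above additionally reweight the penalty per gene.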

10. Validation data-based adjustments for outcome misclassification in logistic regression: an illustration.

PubMed

Lyles, Robert H; Tang, Li; Superak, Hillary M; King, Caroline C; Celentano, David D; Lo, Yungtai; Sobel, Jack D

2011-07-01

Misclassification of binary outcome variables is a known source of potentially serious bias when estimating adjusted odds ratios. Although researchers have described frequentist and Bayesian methods for dealing with the problem, these methods have seldom fully bridged the gap between statistical research and epidemiologic practice. In particular, there have been few real-world applications of readily grasped and computationally accessible methods that make direct use of internal validation data to adjust for differential outcome misclassification in logistic regression. In this paper, we illustrate likelihood-based methods for this purpose that can be implemented using standard statistical software. Using main study and internal validation data from the HIV Epidemiology Research Study, we demonstrate how misclassification rates can depend on the values of subject-specific covariates, and we illustrate the importance of accounting for this dependence. Simulation studies confirm the effectiveness of the maximum likelihood approach. We emphasize clear exposition of the likelihood function itself, to permit the reader to easily assimilate appended computer code that facilitates sensitivity analyses as well as the efficient handling of main/external and main/internal validation-study data. These methods are readily applicable under random cross-sectional sampling, and we discuss the extent to which the main/internal analysis remains appropriate under outcome-dependent (case-control) sampling.
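The core of such a likelihood-based correction can be written down compactly when the misclassification rates are treated as known (in practice they would be estimated from the internal validation data, possibly as functions of covariates). The sketch below is simulated and simplified (scalar covariate, covariate-independent sensitivity and specificity), not the appended code from the paper: the observed-outcome probability is a mixture of the true-outcome probability through sensitivity and specificity, and that mixture likelihood is maximized directly.

```python
# Sketch of a likelihood-based correction for outcome misclassification in
# logistic regression, with sensitivity (se) and specificity (sp) assumed
# known. Simulated data; illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
p_true = expit(-1.0 + 1.5 * x)            # true-outcome model
y_true = rng.binomial(1, p_true)

se, sp = 0.9, 0.95                         # assumed misclassification rates
y_obs = np.where(y_true == 1,
                 rng.binomial(1, se, n),       # true cases flagged with prob se
                 rng.binomial(1, 1 - sp, n))   # false positives with prob 1-sp

def negloglik(beta):
    p = expit(beta[0] + beta[1] * x)
    p_obs = se * p + (1 - sp) * (1 - p)    # P(Y* = 1 | x)
    return -np.sum(y_obs * np.log(p_obs) + (1 - y_obs) * np.log(1 - p_obs))

fit = minimize(negloglik, x0=[0.0, 0.0], method="BFGS")
print(fit.x)  # roughly recovers (-1.0, 1.5) despite the misclassified outcomes
```

A naive logistic fit of `y_obs` on `x` would attenuate the slope; maximizing the mixture likelihood removes that bias at the cost of requiring (validation-based) values for `se` and `sp`.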

11. [Applying temporally-adjusted land use regression models to estimate ambient air pollution exposure during pregnancy].

PubMed

Zhang, Y J; Xue, F X; Bai, Z P

2017-03-06

The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have been increasingly carried out using this model. In China, research applying LUR models has mostly been at the model construction stage, and findings from related epidemiological studies have rarely been reported. In this paper, the sources of heterogeneity and the research progress of meta-analyses on the associations between air pollution and adverse pregnancy outcomes were analyzed. The characteristics of temporally-adjusted LUR models and the methods used to construct them were introduced. The current epidemiological studies on adverse pregnancy outcomes that applied this model were systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage the implementation of more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.

12. Short-Term Effects of Climatic Variables on Hand, Foot, and Mouth Disease in Mainland China, 2008–2013: A Multilevel Spatial Poisson Regression Model Accounting for Overdispersion

PubMed Central

Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying

2016-01-01

Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the south and southwest provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period of 2008–2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after considering cluster effects, the provincial correlated structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous across provinces, and the scale measurement of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-V-shaped and V-shaped relationships, respectively, with HFMD incidence. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks. Highly spatially correlated HFMD incidence was detected in the northern, central and southern provinces. Temperature explained most of the variation in HFMD incidence in the southern and northeastern provinces. After adjustment for temperature, the eastern and northern provinces still had high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic

13. Effect of Nutritional Habits on Dental Caries in Permanent Dentition among Schoolchildren Aged 10–12 Years: A Zero-Inflated Generalized Poisson Regression Model Approach

PubMed Central

2016-01-01

Background: The aim of this study was to assess the associations between nutrition and dental caries in permanent dentition among schoolchildren. Methods: A cross-sectional survey was undertaken on 698 schoolchildren aged 10 to 12 yr from a random sample of primary schools in Kermanshah, western Iran, in 2014. The study was based on the data obtained from a questionnaire containing information on nutritional habits and the outcome of the decayed/missing/filled teeth (DMFT) index. The association between predictors and dental caries was modeled using the Zero Inflated Generalized Poisson (ZIGP) regression model. Results: Fourteen percent of the children were caries free. The model showed that in female children, the odds of being in the caries-susceptible subgroup were 1.23 (95% CI: 1.08–1.51) times higher than in boys (P=0.041). Additionally, the mean caries count in children who consumed fizzy soft beverages and sweet biscuits more than once daily was 1.41 (95% CI: 1.19–1.63) and 1.27 (95% CI: 1.18–1.37) times that of children in the category of less than 3 times a week or never, respectively. Conclusions: Girls were at a higher risk of caries than boys. Since our study showed that nutritional status may have a significant effect on caries in permanent teeth, we recommend that health promotion activities in school emphasize healthful eating practices, especially limiting beverages containing sugar to only occasionally between meals. PMID:27141498

14. Poisson Coordinates.

PubMed

Li, Xian-Ying; Hu, Shi-Min

2013-02-01

Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.

15. A Proportional Hazards Regression Model for the Sub-distribution with Covariates Adjusted Censoring Weight for Competing Risks Data

PubMed Central

HE, PENG; ERIKSSON, FRANK; SCHEIKE, THOMAS H.; ZHANG, MEI-JIE

2015-01-01

With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research (CIBMTR). Here cancer relapse and death in complete remission are two competing risks. PMID:27034534

16. Verification and adjustment of regional regression models for urban storm-runoff quality using data collected in Little Rock, Arkansas

USGS Publications Warehouse

Barks, C.S.

1995-01-01

Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent, and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor the MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of
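The adjustment idea behind a MAP-style procedure (regressing observed values O against regional-model predictions Pu, then using the fitted line to correct future predictions) can be sketched in a few lines. The data and coefficients below are simulated and illustrative, not the Little Rock values.

```python
# Sketch of a model-adjustment procedure: regress observed loads (O) on
# regional-model predictions (Pu) and use the fitted line to correct Pu.
# Simulated values; illustrative only.
import numpy as np

rng = np.random.default_rng(4)
Pu = rng.uniform(10, 200, 60)              # regional-model predicted loads
O = 0.6 * Pu + rng.normal(0, 8, 60)        # regional model consistently overestimates

slope, intercept = np.polyfit(Pu, O, 1)    # O regressed against Pu
adjusted = intercept + slope * Pu          # corrected estimates
print(slope, intercept)
```

The corrected estimates track the observed loads much more closely than the raw regional predictions when, as here, the regional model consistently overestimates.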

17. High Adherence to Iron/Folic Acid Supplementation during Pregnancy Time among Antenatal and Postnatal Care Attendant Mothers in Governmental Health Centers in Akaki Kality Sub City, Addis Ababa, Ethiopia: Hierarchical Negative Binomial Poisson Regression

PubMed Central

2017-01-01

18. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

PubMed

Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

2016-08-15

Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.

19. Simulation study comparing exposure matching with regression adjustment in an observational safety setting with group sequential monitoring.

PubMed

Stratton, Kelly G; Cook, Andrea J; Jackson, Lisa A; Nelson, Jennifer C

2015-03-30

Sequential methods are well established for randomized clinical trials (RCTs), and their use in observational settings has increased with the development of national vaccine and drug safety surveillance systems that monitor large healthcare databases. Observational safety monitoring requires that sequential testing methods be better equipped to incorporate confounder adjustment and accommodate rare adverse events. New methods designed specifically for observational surveillance include a group sequential likelihood ratio test that uses exposure matching and generalized estimating equations approach that involves regression adjustment. However, little is known about the statistical performance of these methods or how they compare to RCT methods in both observational and rare outcome settings. We conducted a simulation study to determine the type I error, power and time-to-surveillance-end of group sequential likelihood ratio test, generalized estimating equations and RCT methods that construct group sequential Lan-DeMets boundaries using data from a matched (group sequential Lan-DeMets-matching) or unmatched regression (group sequential Lan-DeMets-regression) setting. We also compared the methods using data from a multisite vaccine safety study. All methods had acceptable type I error, but regression methods were more powerful, faster at detecting true safety signals and less prone to implementation difficulties with rare events than exposure matching methods. Method performance also depended on the distribution of information and extent of confounding by site. Our results suggest that choice of sequential method, especially the confounder control strategy, is critical in rare event observational settings. These findings provide guidance for choosing methods in this context and, in particular, suggest caution when conducting exposure matching.

20. Integrated analysis of transcriptomic and proteomic data of Desulfovibrio vulgaris: Zero-Inflated Poisson regression models to predict abundance of undetected proteins

SciTech Connect

Nie, Lei; Wu, Gang; Brockman, Fred J.; Zhang, Weiwen

2006-05-04

Advances in DNA microarray and proteomics technologies have enabled high-throughput measurement of mRNA expression and protein abundance. Parallel profiling of mRNA and protein on a global scale and integrative analysis of these two data types could provide additional insight into the metabolic mechanisms underlying complex biological systems. However, because protein abundance and mRNA expression are affected by many cellular and physical processes, there have been conflicting results on the correlation of these two measurements. In addition, as current proteomic methods can detect only a small fraction of proteins present in cells, no correlation study of these two data types has been done thus far at the whole-genome level. In this study, we describe a novel data-driven statistical model to integrate whole-genome microarray and proteomic data collected from Desulfovibrio vulgaris grown under three different conditions. Based on the Poisson distribution pattern of proteomic data and the fact that a large number of proteins were undetected (excess zeros), Zero-inflated Poisson models were used to define the correlation pattern of mRNA and protein abundance. The models assumed that there is a probability mass at zero representing some of the undetected proteins because of technical limitations. The models thus use abundance measurements of transcripts and proteins experimentally detected as input to generate predictions of protein abundances as output for all genes in the genome. We demonstrated the statistical models by comparatively analyzing D. vulgaris grown on lactate-based versus formate-based media. The increased expressions of Ech hydrogenase and alcohol dehydrogenase (Adh)-periplasmic Fe-only hydrogenase (Hyd) pathway for ATP synthesis were predicted for D. vulgaris grown on formate.

1. Alternatives for logistic regression in cross-sectional studies: an empirical comparison of models that directly estimate the prevalence ratio

PubMed Central

Barros, Aluísio JD; Hirakata, Vânia N

2003-01-01

Background Cross-sectional studies with binary outcomes analyzed by logistic regression are frequent in the epidemiological literature. However, the odds ratio can importantly overestimate the prevalence ratio, the measure of choice in these studies. Also, controlling for confounding is not equivalent for the two measures. In this paper we explore alternatives for modeling data of such studies with techniques that directly estimate the prevalence ratio. Methods We compared Cox regression with constant time at risk, Poisson regression and log-binomial regression against the standard Mantel-Haenszel estimators. Models with robust variance estimators in Cox and Poisson regressions and variance corrected by the scale parameter in Poisson regression were also evaluated. Results Three outcomes, from a cross-sectional study carried out in Pelotas, Brazil, with different levels of prevalence were explored: weight-for-age deficit (4%), asthma (31%) and mother in a paid job (52%). Unadjusted Cox/Poisson regression and Poisson regression with scale parameter adjusted by deviance performed worst in terms of interval estimates. Poisson regression with scale parameter adjusted by χ2 showed variable performance depending on the outcome prevalence. Cox/Poisson regression with robust variance, and log-binomial regression performed equally well when the model was correctly specified. Conclusions Cox or Poisson regression with robust variance and log-binomial regression provide correct estimates and are a better alternative for the analysis of cross-sectional studies with binary outcomes than logistic regression, since the prevalence ratio is more interpretable and easier to communicate to non-specialists than the odds ratio. However, precautions are needed to avoid estimation problems in specific situations. PMID:14567763

2. Adjustments to de Leva-anthropometric regression data for the changes in body proportions in elderly humans.

PubMed

Ho Hoang, Khai-Long; Mombaur, Katja

2015-10-15

Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body in terms of a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter set exists for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used for realistically simulating the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments as they are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis representing the most proximal segment.
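The general form of such segment-parameter regressions is simple: a segment property is a fraction of a whole-body quantity, scaled by an adjustment factor for the population of interest. The sketch below is purely illustrative; the fraction (0.14) and the 5% age adjustment are placeholder values, not de Leva's published coefficients or the paper's derived equations.

```python
# Illustrative sketch of a segment-mass regression with an age adjustment.
# The segment fraction and adjustment factor are hypothetical placeholders,
# NOT de Leva's published values.
def segment_mass(body_mass_kg, fraction, age_adjustment=1.0):
    """Segment mass = body mass x segment fraction x adjustment factor."""
    return body_mass_kg * fraction * age_adjustment

# e.g., a hypothetical thigh fraction of 0.14 for a 75 kg subject, scaled
# down 5% for an elderly subject whose proportions have shifted:
young = segment_mass(75.0, 0.14)
old = segment_mass(75.0, 0.14, 0.95)
print(young, old)
```

The study's actual contribution is deriving the adjustment factors themselves as regression equations in age and sex, rather than using a single constant as here.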

3. Mapping Lifetime Brain Volumetry with Covariate-Adjusted Restricted Cubic Spline Regression from Cross-sectional Multi-site MRI.

PubMed

Huo, Yuankai; Aboud, Katherine; Kang, Hakmook; Cutting, Laurie E; Landman, Bennett A

2016-10-01

4. The performance of automated case-mix adjustment regression model building methods in a health outcome prediction setting.

PubMed

Jen, Min-Hua; Bottle, Alex; Kirkwood, Graham; Johnston, Ron; Aylin, Paul

2011-09-01

We have previously described a system for monitoring a number of healthcare outcomes using case-mix adjustment models. It is desirable to automate the model fitting process in such a system if monitoring covers a large number of outcome measures or subgroup analyses. Our aim was to compare the performance of three different variable selection strategies: "manual", "automated" backward elimination and re-categorisation, and including all variables at once, irrespective of their apparent importance, with automated re-categorisation. Logistic regression models for predicting in-hospital mortality and emergency readmission within 28 days were fitted to an administrative database for 78 diagnosis groups and 126 procedures from 1996 to 2006 for National Health Service hospital trusts in England. The performance of the models was assessed with Receiver Operating Characteristic (ROC) c statistics (measuring discrimination) and the Brier score (assessing average predictive accuracy). Overall, discrimination was similar for diagnoses and procedures and consistently better for mortality than for emergency readmission. Brier scores were generally low overall (showing higher accuracy) and were lower for procedures than diagnoses, with a few exceptions for emergency readmission within 28 days. Among the three variable selection strategies, the automated procedure had similar performance to the manual method in almost all cases except low-risk groups with few outcome events. For the rapid generation of multiple case-mix models we suggest applying automated modelling to reduce the time required, in particular when examining different outcomes of large numbers of procedures and diseases in routinely collected administrative health data.

5. Introduction to the use of regression models in epidemiology.

PubMed

Bender, Ralf

2009-01-01

Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed depending on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrative examples from cancer research.

6. Predicting Hospital Admissions With Poisson Regression Analysis

DTIC Science & Technology

2009-06-01

East and Four West. Four East is where bariatric, general, neurologic, otolaryngology (ENT), ophthalmologic, orthopedic, and plastic surgery ...where care is provided for cardiovascular, thoracic, and vascular surgery patients. Figure 1 shows a bar graph for each unit, giving the proportion of...provided at NMCSD, or a study could be conducted on the amount of time that patients generally wait for elective surgeries. There is also the

7. Investigation of the association between the test day milk fat-protein ratio and clinical mastitis using a Poisson regression approach for analysis of time-to-event data.

PubMed

Zoche-Golob, V; Heuwieser, W; Krömker, V

2015-09-01

The objective of the present study was to investigate the association between the milk fat-protein ratio and the incidence rate of clinical mastitis, including repeated cases of clinical mastitis, to determine the usefulness of this association for monitoring metabolic disorders as risk factors for udder health. Herd records from 10 dairy herds of Holstein cows in Saxony, Germany, from September 2005-2011 (36,827 lactations of 17,657 cows) were used for statistical analysis. A mixed Poisson regression model with the weekly incidence rate of clinical mastitis as the outcome variable was fitted. The model included repeated events of the outcome, time-varying covariates and multilevel clustering. Because the recording of clinical mastitis might have been imperfect, a probabilistic bias analysis was conducted to assess the impact of the misclassification of clinical mastitis on the conventional results. The lactational incidence of clinical mastitis was 38.2%. In 36.2% and 34.9% of the lactations, there was at least one dairy herd test day with a fat-protein ratio of <1.0 or >1.5, respectively. Misclassification of clinical mastitis was assumed to have resulted in bias towards the null. A clinical mastitis case increased the incidence rate of following cases in the same cow. Fat-protein ratios of <1.0 and >1.5 were associated with higher incidence rates of clinical mastitis depending on week in milk. The effect of a fat-protein ratio >1.5 on the incidence rate of clinical mastitis increased considerably over the course of lactation, whereas the effect of a fat-protein ratio <1.0 decreased. Fat-protein ratios <1.0 or >1.5 on the preceding test days of all cows irrespective of their time in milk seemed to be better predictors for clinical mastitis than the first test day results per lactation.

8. Small-Sample Adjustments for Tests of Moderators and Model Fit in Robust Variance Estimation in Meta-Regression

ERIC Educational Resources Information Center

Tipton, Elizabeth; Pustejovsky, James E.

2015-01-01

Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…

9. Bivariate Poisson models with varying offsets: an application to the paired mitochondrial DNA dataset.

PubMed

Su, Pei-Fang; Mau, Yu-Lin; Guo, Yan; Li, Chung-I; Liu, Qi; Boice, John D; Shyr, Yu

2017-03-01

To assess the effect of chemotherapy on mitochondrial genome mutations in cancer survivors and their offspring, a study sequenced the full mitochondrial genome and determined the mitochondrial DNA (mtDNA) heteroplasmic mutation rate. To build a model for counts of heteroplasmic mutations in mothers and their offspring, bivariate Poisson regression was used to examine the relationship between mutation count and clinical information while accounting for the paired correlation. However, if the sequencing depth is not adequate, a limited fraction of the mtDNA will be available for variant calling. The classical bivariate Poisson regression model treats the offset term as equal within pairs; thus, it cannot be applied directly. In this research, we propose an extended bivariate Poisson regression model that has a more general offset term to adjust the length of the accessible genome for each observation. We evaluate the performance of the proposed method with comprehensive simulations, and the results show that the regression model provides unbiased parameter estimations. The use of the model is also demonstrated using the paired mtDNA dataset.

10. Using an Adjusted Serfling Regression Model to Improve the Early Warning at the Arrival of Peak Timing of Influenza in Beijing

PubMed Central

Wang, Xiaoli; Wu, Shuangsheng; MacIntyre, C. Raina; Zhang, Hongbin; Shi, Weixian; Peng, Xiaomin; Duan, Wei; Yang, Peng; Zhang, Yi; Wang, Quanyi

2015-01-01

11. A zero-augmented generalized gamma regression calibration to adjust for covariate measurement error: A case of an episodically consumed dietary intake.

PubMed

Agogo, George O

2017-01-01

Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for this bias, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as the 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR- and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method.
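In the linear, univariate case, standard regression calibration reduces to dividing the naive coefficient by the calibration slope of the reference instrument on the error-prone one. A simulation sketch of that standard method (all variable names and parameter values below are hypothetical, and the errors are Gaussian rather than the zero-augmented, skewed intakes the paper addresses):

```python
import random

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(11)
n, beta = 5000, 2.0
true_intake = [rng.gauss(0.0, 1.0) for _ in range(n)]
ffq = [0.5 + t + rng.gauss(0.0, 0.5) for t in true_intake]   # biased, noisy
hr24 = [t + rng.gauss(0.0, 0.4) for t in true_intake]        # unbiased reference
outcome = [beta * t + rng.gauss(0.0, 0.3) for t in true_intake]

naive = ols_slope(ffq, outcome)    # attenuated by the FFQ measurement error
lam = ols_slope(ffq, hr24)         # calibration slope: 24HR regressed on FFQ
corrected = naive / lam            # regression-calibration estimate
```

Here the naive slope is pulled down toward roughly 1.6 by the error in the FFQ, while the calibrated estimate is close to the true beta = 2.0.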

12. Cumulative Poisson Distribution Program

NASA Technical Reports Server (NTRS)

Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

1990-01-01

Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
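The underflow/overflow problem CUMPOIS addresses can be sketched by summing the Poisson pmf in log space (a generic technique, not the program's actual algorithm):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated in log space so that
    individual terms never underflow or overflow for extreme lam."""
    # log pmf(i) = i*log(lam) - lam - log(i!)
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1)
                 for i in range(k + 1)]
    m = max(log_terms)  # log-sum-exp trick: factor out the largest term
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))
```

For example, poisson_cdf(0, 2.0) equals exp(-2), and poisson_cdf(1000, 1000.0) evaluates without trouble even though 1000! and e^1000 are far outside floating-point range.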

13. Data for and adjusted regional regression models of volume and quality of urban storm-water runoff in Boise and Garden City, Idaho, 1993-94

USGS Publications Warehouse

Kjelstrom, L.C.

1995-01-01

Previously developed U.S. Geological Survey regional regression models of runoff and 11 chemical constituents were evaluated to assess their suitability for use in urban areas in Boise and Garden City. Data collected in the study area were used to develop adjusted regional models of storm-runoff volumes and mean concentrations and loads of chemical oxygen demand, dissolved and suspended solids, total nitrogen and total ammonia plus organic nitrogen as nitrogen, total and dissolved phosphorus, and total recoverable cadmium, copper, lead, and zinc. Explanatory variables used in these models were drainage area, impervious area, land-use information, and precipitation data. Mean annual runoff volume and loads at the five outfalls were estimated from 904 individual storms during 1976 through 1993. Two methods were used to compute individual storm loads. The first method used adjusted regional models of storm loads and the second used adjusted regional models for mean concentration and runoff volume. For large storms, the first method seemed to produce excessively high loads for some constituents and the second method provided more reliable results for all constituents except suspended solids. The first method provided more reliable results for large storms for suspended solids.

14. Scaling the Poisson Distribution

ERIC Educational Resources Information Center

Farnsworth, David L.

2014-01-01

We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
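The additive property (if X ~ Poisson(a) and Y ~ Poisson(b) are independent, then X + Y ~ Poisson(a + b)) can be checked numerically by convolving the two mass functions; the rate values below are arbitrary:

```python
import math

def pois_pmf(k, lam):
    # exp(k*log(lam) - lam - log(k!)), stable for moderate k
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def sum_pmf(k, lam1, lam2):
    """P(X + Y = k) for independent Poissons, by direct convolution."""
    return sum(pois_pmf(j, lam1) * pois_pmf(k - j, lam2) for j in range(k + 1))
```

For every k, sum_pmf(k, 1.5, 2.5) agrees with pois_pmf(k, 4.0) to machine precision.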

15. Regression equations for estimation of annual peak-streamflow frequency for undeveloped watersheds in Texas using an L-moment-based, PRESS-minimized, residual-adjusted approach

USGS Publications Warehouse

Asquith, William H.; Roussel, Meghan C.

2009-01-01

Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station specific, peak-streamflow frequency. As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed

16. [Evaluation of chemotherapy for stage IV non-small cell lung cancer employing a regression tree type method for quality-adjusted survival analysis to determine prognostic factors].

PubMed

Fujita, A; Takabatake, H; Tagaki, S; Sohda, T; Sekine, K

1996-03-01

To evaluate the effect of chemotherapy on QOL, the survival period was categorized into 3 intervals: one in the hospital for chemotherapy (TOX), one on an outpatient basis (TWiST: Time without Symptom and Toxicity), and one in the hospital for conservative therapy (REL). Coefficients showing the QOL level were expressed as ut, uw and ur. If uw was 1 and ut and ur were set at less than 1, ut·TOX + uw·TWiST + ur·REL could be a quality-adjusted value relative to TWiST (Q-TWiST). One hundred five patients with stage IV non-small cell lung cancer were included. Sixty-five were given chemotherapy, and the other 40 were not. The observation period was 2 years. Q-TWiST values for age, sex, PS, histology and chemotherapy were calculated. Their quantification was performed employing a regression tree type method. Chemotherapy contributed to Q-TWiST when ut approached 1 (i.e., when no side effects were assumed). When ut was less than 0.5, PS and sex had an appreciable role.
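The quality-adjusted survival combination above is simply a utility-weighted sum of the time spent in each health state; a sketch with hypothetical durations (months) and utility coefficients, not the study's data:

```python
def q_twist(tox, twist, rel, u_t, u_r, u_w=1.0):
    """Quality-adjusted survival relative to TWiST:
    u_t*TOX + u_w*TWiST + u_r*REL, with u_w conventionally fixed at 1."""
    return u_t * tox + u_w * twist + u_r * rel

# Hypothetical patient: 2 months of toxicity, 10 symptom-free, 3 in relapse.
unweighted = q_twist(2.0, 10.0, 3.0, u_t=1.0, u_r=1.0)   # plain survival time
discounted = q_twist(2.0, 10.0, 3.0, u_t=0.5, u_r=0.5)   # TOX and REL downweighted
```

With u_t = u_r = 1 the measure reduces to total survival (15 months here); lowering the utilities shrinks the credit given to time spent in toxicity or relapse.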

17. Poisson Structures:. Towards a Classification

Grabowski, J.; Marmo, G.; Perelomov, A. M.

In the present note we give an explicit description of a certain class of Poisson structures. The methods lead to a classification of Poisson structures in low dimensions and suggest a possible approach for higher dimensions.

18. Branes in Poisson sigma models

SciTech Connect

Falceto, Fernando

2010-07-28

In this review we discuss possible boundary conditions (branes) for the Poisson sigma model. We show how to carry out the perturbative quantization in the presence of a general pre-Poisson brane and how this is related to the deformation quantization of Poisson structures. We conclude with an open problem: the perturbative quantization of the system when the boundary has several connected components and we use a different pre-Poisson brane in every component.

19. Poisson-Riemannian geometry

Beggs, Edwin J.; Majid, Shahn

2017-04-01

We study noncommutative bundles and Riemannian geometry at the semiclassical level of first order in a deformation parameter λ, using a functorial approach. This leads us to field equations of 'Poisson-Riemannian geometry' between the classical metric, the Poisson bracket and a certain Poisson-compatible connection needed as initial data for the quantisation of the differential structure. We use such data to define a functor Q to O(λ2) from the monoidal category of all classical vector bundles equipped with connections to the monoidal category of bimodules equipped with bimodule connections over the quantised algebra. This is used to 'semiquantise' the wedge product of the exterior algebra and in the Riemannian case, the metric and the Levi-Civita connection in the sense of constructing a noncommutative geometry to O(λ2) . We solve our field equations for the Schwarzschild black-hole metric under the assumption of spherical symmetry and classical dimension, finding a unique solution and the necessity of nonassociativity at order λ2, which is similar to previous results for quantum groups. The paper also includes a nonassociative hyperboloid, nonassociative fuzzy sphere and our previously algebraic bicrossproduct model.

20. The performance of functional methods for correcting non-Gaussian measurement error within Poisson regression: corrected excess risk of lung cancer mortality in relation to radon exposure among French uranium miners.

PubMed

Allodji, Rodrigue S; Thiébaut, Anne C M; Leuraud, Klervi; Rage, Estelle; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques

2012-12-30

A broad variety of methods for measurement error (ME) correction have been developed, but these methods have rarely been applied possibly because their ability to correct ME is poorly understood. We carried out a simulation study to assess the performance of three error-correction methods: two variants of regression calibration (the substitution method and the estimation calibration method) and the simulation extrapolation (SIMEX) method. Features of the simulated cohorts were borrowed from the French Uranium Miners' Cohort in which exposure to radon had been documented from 1946 to 1999. In the absence of ME correction, we observed a severe attenuation of the true effect of radon exposure, with a negative relative bias of the order of 60% on the excess relative risk of lung cancer death. In the main scenario considered, that is, when ME characteristics previously determined as most plausible from the French Uranium Miners' Cohort were used both to generate exposure data and to correct for ME at the analysis stage, all three error-correction methods showed a noticeable but partial reduction of the attenuation bias, with a slight advantage for the SIMEX method. However, the performance of the three correction methods highly depended on the accurate determination of the characteristics of ME. In particular, we encountered severe overestimation in some scenarios with the SIMEX method, and we observed lack of correction with the three methods in some other scenarios. For illustration, we also applied and compared the proposed methods on the real data set from the French Uranium Miners' Cohort study.
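The SIMEX idea is easy to sketch for a linear model with additive Gaussian error (the setup and parameter values below are hypothetical, not those of the miners' cohort): re-add measurement error at increasing multiples ζ of the known error variance, track how the slope attenuates, and extrapolate the trend back to ζ = -1, i.e. to zero error.

```python
import math, random

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

rng = random.Random(7)
n, beta, sd_u = 2000, 1.0, 0.5
x_true = [rng.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * xt + rng.gauss(0.0, 0.2) for xt in x_true]
w = [xt + rng.gauss(0.0, sd_u) for xt in x_true]   # error-prone exposure

naive = ols_slope(w, y)                            # attenuated toward 0

def mean_slope(zeta, B=50):
    """Average slope after adding extra error of variance zeta*sd_u^2."""
    if zeta == 0.0:
        return ols_slope(w, y)
    total = 0.0
    for _ in range(B):
        wz = [wi + rng.gauss(0.0, math.sqrt(zeta) * sd_u) for wi in w]
        total += ols_slope(wz, y)
    return total / B

s0, s1, s2 = mean_slope(0.0), mean_slope(1.0), mean_slope(2.0)
# Quadratic through zeta = 0, 1, 2, evaluated at zeta = -1 (Lagrange form):
simex = 3 * s0 - 3 * s1 + s2
```

With these values the attenuation factor at ζ is var(X)/(var(X) + (1+ζ)·sd_u²), so the naive slope sits near 0.8 and the quadratic extrapolation recovers roughly 0.97 of the true slope; as the abstract notes for the real study, the correction is noticeable but partial.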

1. Algorithm Calculates Cumulative Poisson Distribution

NASA Technical Reports Server (NTRS)

Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

1992-01-01

Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).

2. Poisson Spot with Magnetic Levitation

ERIC Educational Resources Information Center

Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

2010-01-01

In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

3. Climate variations and salmonellosis transmission in Adelaide, South Australia: a comparison between regression models

Zhang, Ying; Bi, Peng; Hiller, Janet

2008-01-01

This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggests that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.

4. How does Poisson kriging compare to the popular BYM model for mapping disease risks?

PubMed Central

Goovaerts, Pierre; Gebreab, Samson

2008-01-01

Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: 1) it is easier to implement and less CPU intensive, and 2) it accounts for the size and shape of geographical units, avoiding the limitations of conditional auto-regressive (CAR) models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. Both approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results Besag, York and Mollie's (BYM) model and Poisson kriging (point and area-to-area implementations) were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) state of Indiana that consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area) has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models). Differences between methods are particularly pronounced in the Western US dataset: BYM model yields smoother risk surface and prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. The benefit of area-to-area Poisson kriging increases as the county geography becomes more

5. A note on calculating asymptotic confidence intervals for the adjusted risk difference and number needed to treat in the Cox regression model.

PubMed

Laubender, Ruediger P; Bender, Ralf

2014-02-28

Recently, Laubender and Bender (Stat. Med. 2010; 29: 851-859) applied the average risk difference (RD) approach to estimate adjusted RD and corresponding number needed to treat measures in the Cox proportional hazards model. We calculated standard errors and confidence intervals by using bootstrap techniques. In this paper, we develop asymptotic variance estimates of the adjusted RD measures and corresponding asymptotic confidence intervals within counting process theory and evaluate them in a simulation study. We illustrate the use of the asymptotic confidence intervals by means of data from the Düsseldorf Obesity Mortality Study.

6. Newton/Poisson-Distribution Program

NASA Technical Reports Server (NTRS)

Bowerman, Paul N.; Scheuer, Ernest M.

1990-01-01

NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.

7. ``Regressed experts'' as a new state in teachers' professional development: lessons from Computer Science teachers' adjustments to substantial changes in the curriculum

Liberman, Neomi; Ben-David Kolikant, Yifat; Beeri, Catriel

2012-09-01

Due to a program reform in Israel, experienced CS high-school teachers faced the need to master and teach a new programming paradigm. This situation served as an opportunity to explore the relationship between teachers' content knowledge (CK) and their pedagogical content knowledge (PCK). This article focuses on three case studies, with emphasis on one of them. Using observations and interviews, we examine how the teachers we observed taught and what development of their teaching occurred as a result of their teaching experience, if at all. Our findings suggest that this situation creates a new hybrid state of teachers, which we term "regressed experts." These teachers incorporate in their professional practice some elements typical of novices and some typical of experts. We also found that these teachers' experience, although established when teaching different CK, serves as leverage to improve their knowledge and understanding of aspects of the new content.

8. Relaxed Poisson cure rate models.

PubMed

Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

2016-03-01

The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented.

9. Poisson's spot and Gouy phase

da Paz, I. G.; Soldati, Rodolfo; Cabral, L. A.; de Oliveira, J. G. G.; Sampaio, Marcos

2016-12-01

Recently there have been experimental results on Poisson spot matter-wave interferometry followed by theoretical models describing the relative importance of the wave and particle behaviors for the phenomenon. We propose an analytical theoretical model for Poisson's spot with matter waves based on the Babinet principle, in which we use the results for free propagation and single-slit diffraction. We take into account effects of loss of coherence and finite detection area using the propagator for a quantum particle interacting with an environment. We observe that the matter-wave Gouy phase plays a role in the existence of the central peak and thus corroborates the predominantly wavelike character of the Poisson's spot. Our model shows remarkable agreement with the experimental data for deuterium (D2) molecules.

10. NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM

NASA Technical Reports Server (NTRS)

Bowerman, P. N.

1994-01-01

The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
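Newton's iteration for the Poisson parameter can be sketched directly (a generic reimplementation of the idea, not NEWTPOIS's actual source): since d/dλ P(X ≤ n; λ) = -pmf(n; λ), the Newton update is λ ← λ + (CDF(n; λ) - p)/pmf(n; λ), with a simple step-halving guard added here to keep λ positive.

```python
import math

def pmf(k, lam):
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def cdf(n, lam):
    return sum(pmf(i, lam) for i in range(n + 1))

def newtpois(n, p, eps=1e-10):
    """Solve P(X <= n; lam) = p for lam by Newton's method,
    using d/dlam CDF(n; lam) = -pmf(n; lam)."""
    lam = float(n + 1)             # start near the bulk of the distribution
    for _ in range(200):
        f = cdf(n, lam) - p
        if abs(f) < eps:
            break
        step = f / pmf(n, lam)
        lam = lam / 2 if lam + step <= 0 else lam + step  # keep lam > 0
    return lam
```

For n = 0 the answer is available in closed form (λ = -ln p), which makes a convenient check of the iteration.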

11. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

PubMed

Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

2012-01-01

Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
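The COM-Poisson mass function underlying the GLM is P(Y = k) ∝ λ^k/(k!)^ν, where ν < 1 gives overdispersion, ν = 1 recovers the Poisson, and ν > 1 gives underdispersion. A sketch of the pmf with the infinite normalizing constant truncated numerically (the GLM fitting machinery itself is omitted; the truncation point jmax is an assumption):

```python
import math

def com_poisson_pmf(k, lam, nu, jmax=300):
    """COM-Poisson mass function; the normalizer
    Z = sum_j lam^j / (j!)^nu is truncated at jmax and
    computed in log space for stability."""
    logw = lambda j: j * math.log(lam) - nu * math.lgamma(j + 1)
    logs = [logw(j) for j in range(jmax + 1)]
    m = max(logs)
    log_z = m + math.log(sum(math.exp(t - m) for t in logs))
    return math.exp(logw(k) - log_z)
```

At ν = 1 this reduces exactly to the Poisson pmf, and at ν = 0.5 the numerically computed variance exceeds the mean, illustrating the over-/under-/equidispersion flexibility the article characterizes.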

12. Graded geometry and Poisson reduction

SciTech Connect

Cattaneo, A. S.; Zambon, M.

2009-02-02

The main result extends the Marsden-Ratiu reduction theorem in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof. Further, we provide an alternative algebraic proof for the main result.

13. Sparse Poisson noisy image deblurring.

PubMed

Carlavan, Mikael; Blanc-Féraud, Laure

2012-04-01

Deblurring noisy Poisson images has recently been a subject of an increasing amount of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with a very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper, we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is to set the value of the parameter, which weights the tradeoff between the data term and the regularizing term. Only few works have been devoted to the research of an automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators, which takes advantage of confocal images. Following these estimators, we secondly propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we specially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
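A classical baseline for the Poisson data term discussed here is Richardson-Lucy deconvolution, i.e. the unregularized maximum-likelihood iteration; the paper's methods add an explicit prior on top of it. The 1-D toy below is only illustrative, not the paper's algorithm:

```python
def convolve(x, kernel):
    """Same-size 1-D convolution with zero padding (kernel assumed
    symmetric, so it is also its own adjoint)."""
    h, n = len(kernel) // 2, len(x)
    return [sum(kernel[j] * x[i + j - h]
                for j in range(len(kernel)) if 0 <= i + j - h < n)
            for i in range(n)]

def richardson_lucy(y, kernel, iters=50):
    """Multiplicative ML iteration for Poisson deblurring:
    x <- x * H^T( y / (H x) )."""
    x = [max(v, 1e-12) for v in y]          # start from the blurred data
    for _ in range(iters):
        hx = convolve(x, kernel)
        ratio = [yi / max(hi, 1e-12) for yi, hi in zip(y, hx)]
        corr = convolve(ratio, kernel)
        x = [xi * ci for xi, ci in zip(x, corr)]
    return x

# Toy example: a single spike blurred by a 3-point kernel.
truth = [0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
kernel = [0.25, 0.5, 0.25]
blurred = convolve(truth, kernel)
restored = richardson_lucy(blurred, kernel)
```

On this noiseless toy the iteration re-concentrates mass at the spike; with real Poisson noise it eventually amplifies noise, which is exactly why the regularized formulations in the paper, and the automatic choice of the regularization weight, are needed.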

14. Calculation of the Poisson cumulative distribution function

NASA Technical Reports Server (NTRS)

Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

1990-01-01

A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.

15. Hyperbolically Patterned 3D Graphene Metamaterial with Negative Poisson's Ratio and Superelasticity.

PubMed

Zhang, Qiangqiang; Xu, Xiang; Lin, Dong; Chen, Wenli; Xiong, Guoping; Yu, Yikang; Fisher, Timothy S; Li, Hui

2016-03-16

A hyperbolically patterned 3D graphene metamaterial (GM) with negative Poisson's ratio and superelasticity is highlighted. It is synthesized by a modified hydrothermal approach and subsequent oriented freeze-casting strategy. GM presents a tunable Poisson's ratio by adjusting the structural porosity, macroscopic aspect ratio (L/D), and freeze-casting conditions. Such a GM suggests promising applications as soft actuators, sensors, robust shock absorbers, and environmental remediation.

16. Phase space reduction and Poisson structure

1999-07-01

Let (P,π,B,G) be a G-principal fiber bundle. The action of G on the cotangent bundle T*P is free and Hamiltonian. By Libermann and Marle [Symplectic Geometry and Analytical Mechanics (Reidel, Dordrecht, 1987)] and Marsden and Ratiu [Lett. Math. Phys. 11, 161 (1981)] the quotient space T*P/G is a Poisson manifold. We will determine the Poisson bracket on the reduced Poisson manifold T*P/G, and its symplectic leaves.

17. Nonlinear Poisson equation for heterogeneous media.

PubMed

Hu, Langhua; Wei, Guo-Wei

2012-08-22

The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects.

18. Nonlinear Poisson Equation for Heterogeneous Media

PubMed Central

Hu, Langhua; Wei, Guo-Wei

2012-01-01

The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937

19. Logistic Regression

Grégoire, G.

2014-12-01

Logistic regression is originally intended to explain the relationship between the probability of an event and a set of covariates. The model's coefficients can be interpreted via the odds and odds ratio, which are presented in the introduction of the chapter. When the observations are obtained individually, we speak of binary logistic regression; when they are grouped, the regression is said to be binomial. In our presentation we mainly focus on the binary case. For statistical inference the main tool is the maximum likelihood methodology: we present the Wald, Rao, and likelihood ratio results and their use to compare nested models. The problems we intend to deal with are essentially the same as in multiple linear regression: testing a global effect, testing individual effects, selecting variables to build a model, measuring the fit of the model, and predicting new values. The methods are demonstrated on data sets using R. Finally we briefly consider the binomial case and the situation where we are interested in several events, that is, the polytomous (multinomial) logistic regression and the particular case of ordinal logistic regression.
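The maximum-likelihood machinery the chapter relies on can be sketched compactly. Below is a minimal illustration of binary logistic regression fitted by Newton-Raphson; the single-covariate restriction and the toy data are assumptions made for brevity, not part of the chapter.

```python
import math

def fit_logistic(x, y, iters=25):
    """Binary logistic regression with one covariate, fitted by
    Newton-Raphson on the (concave) log-likelihood -- a minimal sketch."""
    b0 = b1 = 0.0
    for _ in range(iters):
        # Gradient and Hessian of the log-likelihood at the current beta.
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))  # fitted probability
            w = p * (1.0 - p)                            # IRLS weight
            g0 += yi - p
            g1 += (yi - p) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        # Newton step: beta <- beta + H^{-1} g (2x2 solve by hand).
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Toy data: the event becomes more likely as x grows.
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(x, y)
print(b0, b1)   # b1 > 0: the odds ratio per unit of x is exp(b1) > 1
```

Grouped (binomial) observations can be handled by the same update, weighting each row by its group size.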

20. A regularization corrected score method for nonlinear regression models with covariate error.

PubMed

Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

2013-03-01

Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer.

1. Sub-Poisson-binomial light

Lee, Changhyoup; Ferrari, Simone; Pernice, Wolfram H. P.; Rockstuhl, Carsten

2016-11-01

We introduce a general parameter QPB that provides an experimentally accessible nonclassicality measure for light. The parameter is quantified by the click statistics obtained from on-off detectors in a general multiplexing detection setup. Sub-Poisson-binomial statistics, observed when QPB < 0, indicate that a given state of light is nonclassical. Our parameter replaces the binomial parameter QB for more general cases, where any imbalance among the multiplexed modes is allowed, thus enabling the use of arbitrary multiplexing schemes. The significance of the parameter QPB is theoretically examined in a measurement setup that only consists of a ring resonator and a single on-off detector. The proposed setup exploits minimal experimental resources and is geared towards a fully integrated quantum nanophotonic circuit. The results show that nonclassical features remain noticeable even in the presence of significant losses, rendering our nonclassicality test more practical and sufficiently flexible to be used in various nanophotonic platforms.

2. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

NASA Technical Reports Server (NTRS)

Bowerman, P. N.

1994-01-01

The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was
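The rescaling idea behind CUMPOIS can be sketched in a few lines. The version below works in log space and factors out the largest term, so it illustrates the same underflow/overflow problem; it is not a port of the NASA program.

```python
import math

def cum_poisson(n, lam):
    """P(X <= n) for X ~ Poisson(lam), summed in log space so that
    neither exp(-lam) nor lam**i / i! under- or overflows."""
    # Log of each summand: -lam + i*log(lam) - log(i!).
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(n + 1)]
    m = max(log_terms)   # factor out the largest term before exponentiating
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

print(cum_poisson(2, 2.0))        # 5*exp(-2) ≈ 0.6767
print(cum_poisson(1500, 1500.0))  # ≈ 0.5; a naive term-by-term sum underflows
```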

3. On Quantization of Quadratic Poisson Structures

Manchon, D.; Masmoudi, M.; Roux, A.

Any classical r-matrix on the Lie algebra of linear operators on a real vector space V gives rise to a quadratic Poisson structure on V which admits a deformation quantization stemming from the construction of V. Drinfel'd [Dr], [Gr]. We exhibit in this article an example of quadratic Poisson structure which does not arise this way.

4. Alternative Derivations for the Poisson Integral Formula

ERIC Educational Resources Information Center

Chen, J. T.; Wu, C. S.

2006-01-01

Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…

5. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach

PubMed Central

2016-01-01

Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables “number of blood donation” and “number of blood deferral”: as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zero frequency, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimation of the model parameters, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493

6. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

PubMed

Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

2016-01-01

Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donation" and "number of blood deferral": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zero frequency, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimation of the model parameters, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
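As a simpler illustration of the excess-zero mechanism described above, here is the univariate zero-inflated Poisson pmf (the paper itself uses a bivariate Bayesian model; the parameter values below are invented for illustration):

```python
import math

def zip_pmf(k, lam, pi):
    """P(X = k) for a zero-inflated Poisson: with probability pi the
    count is a structural zero, otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# A 30% structural-zero component inflates P(X = 0) well above the
# plain Poisson value exp(-2) ≈ 0.135.
print(zip_pmf(0, lam=2.0, pi=0.3))                   # 0.3 + 0.7*exp(-2) ≈ 0.3947
print(sum(zip_pmf(k, 2.0, 0.3) for k in range(60)))  # ≈ 1.0
```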

7. Poisson's ratio of individual metal nanowires.

PubMed

McCarthy, Eoin K; Bellew, Allen T; Sader, John E; Boland, John J

2014-07-07

The measurement of Poisson's ratio of nanomaterials is extremely challenging. Here we report a lateral atomic force microscope experimental method to electromechanically measure the Poisson's ratio and gauge factor of individual nanowires. Under elastic loading conditions we monitor the four-point resistance of individual metallic nanowires as a function of strain and different levels of electrical stress. We determine the gauge factor of individual wires and directly measure the Poisson's ratio using a model that is independently validated for macroscopic wires. For macroscopic wires and nickel nanowires we find Poisson's ratios that closely correspond to bulk values, whereas for silver nanowires significant deviations from the bulk silver value are observed. Moreover, repeated measurements on individual silver nanowires at different levels of mechanical and electrical stress yield a small spread in Poisson ratio, with a range of mean values for different wires, all of which are distinct from the bulk value.

8. Almost efficient estimation of relative risk regression

PubMed Central

Fitzmaurice, Garrett M.; Lipsitz, Stuart R.; Arriaga, Alex; Sinha, Debajyoti; Greenberg, Caprice; Gawande, Atul A.

2014-01-01

Relative risks (RRs) are often considered the preferred measures of association in prospective studies, especially when the binary outcome of interest is common. In particular, many researchers regard RRs to be more intuitively interpretable than odds ratios. Although RR regression is a special case of generalized linear models, specifically with a log link function for the binomial (or Bernoulli) outcome, the resulting log-binomial regression does not respect the natural parameter constraints. Because log-binomial regression does not ensure that predicted probabilities are mapped to the [0,1] range, maximum likelihood (ML) estimation is often subject to numerical instability that leads to convergence problems. To circumvent these problems, a number of alternative approaches for estimating RR regression parameters have been proposed. One approach that has been widely studied is the use of Poisson regression estimating equations. The estimating equations for Poisson regression yield consistent, albeit inefficient, estimators of the RR regression parameters. We consider the relative efficiency of the Poisson regression estimator and develop an alternative, almost efficient estimator for the RR regression parameters. The proposed method uses near-optimal weights based on a Maclaurin series (Taylor series expanded around zero) approximation to the true Bernoulli or binomial weight function. This yields an almost efficient estimator while avoiding convergence problems. We examine the asymptotic relative efficiency of the proposed estimator for an increase in the number of terms in the series. Using simulations, we demonstrate the potential for convergence problems with standard ML estimation of the log-binomial regression model and illustrate how this is overcome using the proposed estimator. We apply the proposed estimator to a study of predictors of pre-operative use of beta blockers among patients undergoing colorectal surgery after diagnosis of colon cancer. PMID
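For the special case of a single binary exposure, the Poisson estimating equations mentioned above have a closed-form solution: the log relative risk is the log ratio of the two group event rates. A toy sketch with invented data:

```python
import math

def log_rr_poisson(x, y):
    """Solve the Poisson score equations for log mu = b0 + b1*x with a
    binary exposure x and binary outcome y.  The solution for b1 is the
    log of the ratio of event rates, i.e. the log relative risk."""
    n1 = sum(1 for xi in x if xi == 1)
    n0 = len(x) - n1
    e1 = sum(yi for xi, yi in zip(x, y) if xi == 1)  # events, exposed
    e0 = sum(yi for xi, yi in zip(x, y) if xi == 0)  # events, unexposed
    return math.log((e1 / n1) / (e0 / n0))

# Exposed group: 6/10 events; unexposed group: 2/10 events -> RR = 3.
x = [1] * 10 + [0] * 10
y = [1] * 6 + [0] * 4 + [1] * 2 + [0] * 8
print(math.exp(log_rr_poisson(x, y)))   # ≈ 3.0
```

Unlike log-binomial ML, this estimator needs no constraint on predicted probabilities, which is exactly why the Poisson working model avoids the convergence problems discussed in the abstract.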

9. Supervised Gamma Process Poisson Factorization

SciTech Connect

Anderson, Dylan Zachary

2015-05-01

This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.

10. Negative Poisson's ratio in rippled graphene.

PubMed

Qin, Huasong; Sun, Yu; Liu, Jefferson Zhe; Li, Mengjie; Liu, Yilun

2017-03-10

In this work, we perform molecular dynamics (MD) simulations to study the effect of rippling on the Poisson's ratio of graphene. Due to the atomic scale thickness of graphene, out-of-plane ripples are generated in free standing graphene with topological defects (e.g. heptagons and pentagons) to release the in-plane deformation energy. Through MD simulations, we have found that the Poisson's ratio of rippled graphene decreases upon increasing its aspect ratio η (amplitude over wavelength). For the rippled graphene sheet η = 0.188, a negative Poisson's ratio of -0.38 is observed for a tensile strain up to 8%, while the Poisson's ratio for η = 0.066 is almost zero. During uniaxial tension, the ripples gradually become flat, thus the Poisson's ratio of rippled graphene is determined by the competing factors of the intrinsic positive Poisson's ratio of graphene and the negative Poisson's ratio due to the de-wrinkling effect. Besides, the rippled graphene exhibits excellent fracture strength and toughness. With the combination of its auxetic and excellent mechanical properties, rippled graphene may possess potential for application in nano-devices and nanomaterials.

11. Negative Poisson's ratio materials via isotropic interactions.

PubMed

Rechtsman, Mikael C; Stillinger, Frank H; Torquato, Salvatore

2008-08-22

We show that under tension a classical many-body system with only isotropic pair interactions in a crystalline state can, counterintuitively, have a negative Poisson's ratio, or auxetic behavior. We derive the conditions under which the triangular lattice in two dimensions and lattices with cubic symmetry in three dimensions exhibit a negative Poisson's ratio. In the former case, the simple Lennard-Jones potential can give rise to auxetic behavior. In the latter case, a negative Poisson's ratio can be exhibited even when the material is constrained to be elastically isotropic.

12. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

ERIC Educational Resources Information Center

Baschera, Gian-Marco; Gross, Markus

2010-01-01

We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

13. Time series regression model for infectious disease and weather.

PubMed

Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

2015-10-01

Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context.

14. Negative Poisson's Ratio in Modern Functional Materials.

PubMed

Huang, Chuanwei; Chen, Lang

2016-10-01

Materials with negative Poisson's ratio attract considerable attention due to their underlying intriguing physical properties and numerous promising applications, particularly in stringent environments such as aerospace and defense areas, because of their unconventional mechanical enhancements. Recent progress in materials with a negative Poisson's ratio are reviewed here, with the current state of research regarding both theory and experiment. The inter-relationship between the underlying structure and a negative Poisson's ratio is discussed in functional materials, including macroscopic bulk, low-dimensional nanoscale particles, films, sheets, or tubes. The coexistence and correlations with other negative indexes (such as negative compressibility and negative thermal expansion) are also addressed. Finally, open questions and future research opportunities are proposed for functional materials with negative Poisson's ratios.

15. Poisson's ratio of arterial wall - Inconsistency of constitutive models with experimental data.

PubMed

Skacel, Pavel; Bursa, Jiri

2016-02-01

Poisson's ratio of fibrous soft tissues is analyzed in this paper on the basis of constitutive models and experimental data. Three different up-to-date constitutive models accounting for the dispersion of fibre orientations are analyzed. Their predictions of the anisotropic Poisson's ratios are investigated under finite strain conditions together with the effects of specific orientation distribution functions and of other parameters. The applied constitutive models predict the tendency to lower (or even negative) out-of-plane Poisson's ratio. New experimental data of porcine arterial layer under uniaxial tension in orthogonal directions are also presented and compared with the theoretical predictions and other literature data. The results point out the typical features of recent constitutive models with fibres concentrated in circumferential-axial plane of arterial layers and their potential inconsistence with some experimental data. The volumetric (in)compressibility of arterial tissues is also discussed as an eventual and significant factor influencing this inconsistency.

16. Evaluating the double Poisson generalized linear model.

PubMed

Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

2013-10-01

The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.

17. Prediction of forest fires occurrences with area-level Poisson mixed models.

PubMed

Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

2015-05-01

The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest areas. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of fires predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia.

18. Generalized Poisson distribution: the property of mixture of Poisson and comparison with negative binomial distribution.

PubMed

Joe, Harry; Zhu, Rong

2005-04-01

We prove that the generalized Poisson distribution GP(θ, η) (η ≥ 0) is a mixture of Poisson distributions; this is a new property for a distribution which is the topic of the book by Consul (1989). Because we find that the fits to count data of the generalized Poisson and negative binomial distributions are often similar, to understand their differences, we compare the probability mass functions and skewnesses of the generalized Poisson and negative binomial distributions with the first two moments fixed. They have slight differences in many situations, but their zero-inflated distributions, with masses at zero, means, and variances fixed, can differ more. These probabilistic comparisons are helpful in selecting a better-fitting distribution for modelling count data with long right tails. Through a real example of count data with a large zero fraction, we illustrate how the generalized Poisson and negative binomial distributions, as well as their zero-inflated distributions, can be discriminated.
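The moment relations used in such comparisons can be checked numerically. The sketch below uses Consul's parameterization of the generalized Poisson pmf, whose mean is θ/(1-η) and variance θ/(1-η)³; the parameter values are illustrative assumptions, and the pmf is evaluated in log space to avoid overflow:

```python
import math

def gp_logpmf(x, theta, eta):
    """Log pmf of Consul's generalized Poisson GP(theta, eta), 0 <= eta < 1:
    log[ theta * (theta + eta*x)^(x-1) * exp(-theta - eta*x) / x! ]."""
    return (math.log(theta) + (x - 1) * math.log(theta + eta * x)
            - theta - eta * x - math.lgamma(x + 1))

theta, eta = 2.0, 0.3
probs = [math.exp(gp_logpmf(x, theta, eta)) for x in range(200)]
mean = sum(x * p for x, p in enumerate(probs))
var = sum((x - mean) ** 2 * p for x, p in enumerate(probs))
print(sum(probs))                    # ≈ 1: it is a proper pmf
print(mean, theta / (1 - eta))       # both ≈ 2.857
print(var, theta / (1 - eta) ** 3)   # ≈ 5.831 > mean: overdispersed
```

Matching a negative binomial to the same first two moments and comparing the two pmfs term by term reproduces the kind of comparison the abstract describes.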

19. Loop coproducts, Gaudin models and Poisson coalgebras

Musso, F.

2010-10-01

In this paper we show that if A is a Poisson algebra equipped with a set of maps Δ_λ^(i): A → A^⊗N satisfying suitable conditions, then the images of the Casimir functions of A under the maps Δ_λ^(i) (that we call 'loop coproducts') are in involution. Rational, trigonometric and elliptic Gaudin models can be recovered as particular cases of this construction, and we show that the same happens for the integrable (or partially integrable) models that can be obtained through the so-called coproduct method. On the other hand, we show that the loop coproduct approach provides a natural generalization of the Gaudin algebras from the Lie-Poisson to the generic Poisson algebra context and, hopefully, can lead to the definition of new integrable models.

20. Magnetostrictive contribution to Poisson ratio of galfenol

Paes, V. Z. C.; Mosca, D. H.

2013-09-01

In this work we present a detailed study on the magnetostrictive contribution to Poisson ratio for samples under applied mechanical stress. Magnetic contributions to strain and Poisson ratio for cubic materials were derived by accounting elastic and magneto-elastic anisotropy contributions. We apply our theoretical results for a material of interest in magnetomechanics, namely, galfenol (Fe1-xGax). Our results show that there is a non-negligible magnetic contribution in the linear portion of the curve of stress versus strain. The rotation of the magnetization towards [110] crystallographic direction upon application of mechanical stress leads to an auxetic behavior, i.e., exhibiting Poisson ratio with negative values. This magnetic contribution to auxetic behavior provides a novel insight for the discussion of theoretical and experimental developments of materials that display unusual mechanical properties.

1. The BRST complex of homological Poisson reduction

Müller-Lennert, Martin

2017-02-01

BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.

2. Extensions of Rasch's Multiplicative Poisson Model.

ERIC Educational Resources Information Center

Jansen, Margo G. H.; van Duijn, Marijtje A. J.

1992-01-01

A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)

3. Natural Poisson structures of nonlinear plasma dynamics

SciTech Connect

Kaufman, A.N.

1982-06-01

Hamiltonian field theories, for models of nonlinear plasma dynamics, require a Poisson bracket structure for functionals of the field variables. These are presented, applied, and derived for several sets of field variables: coherent waves, incoherent waves, particle distributions, and multifluid electrodynamics. Parametric coupling of waves and plasma yields concise expressions for ponderomotive effects (in kinetic and fluid models) and for induced scattering.

4. Measuring Poisson Ratios at Low Temperatures

NASA Technical Reports Server (NTRS)

Boozon, R. S.; Shepic, J. A.

1987-01-01

Simple extensometer ring measures bulges of specimens in compression. New method of measuring Poisson's ratio used on brittle ceramic materials at cryogenic temperatures. Extensometer ring encircles cylindrical specimen. Four strain gauges connected in fully active Wheatstone bridge self-temperature-compensating. Used at temperatures as low as liquid helium.

5. Evolutionary inference via the Poisson Indel Process.

PubMed

Bouchard-Côté, Alexandre; Jordan, Michael I

2013-01-22

We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

6. Easy Demonstration of the Poisson Spot

ERIC Educational Resources Information Center

Gluck, Paul

2010-01-01

Many physics teachers have a set of slides of single, double and multiple slits to show their students the phenomena of interference and diffraction. Thomas Young's historic experiments with double slits were indeed a milestone in proving the wave nature of light. But another experiment, namely the Poisson spot, was also important historically and…

7. A new bivariate negative binomial regression model

Faroughi, Pouya; Ismail, Noriszura

2014-12-01

This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check overdispersion and goodness-of-fit of the model. Application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicated that BNB-1 regression has a better fit than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.

8. The Zero-truncated Poisson with Right Censoring: an Application to Translational Breast Cancer Research.

PubMed

Yeh, Hung-Wen; Gajewski, Byron; Mukhopadhyay, Purna; Behbod, Fariba

2012-08-30

We propose to analyze positive count data with right censoring from Behbod et al. (2009) using the censored zero-truncated Poisson (CZTP) model. The comparison of truncated means across subgroups in each cell line is carried out through a log-linear model that links the un-truncated Poisson parameter and regression covariates. We also perform simulations to evaluate the performance of the CZTP model in finite and large sample sizes. In general, the CZTP model provides accurate and precise estimates. However, for data with small means and small sample sizes, it may be more appropriate to base inference on the mean counts rather than on the regression coefficients. For small sample sizes and moderate means, the likelihood ratio test is more reliable than the Wald test. We also demonstrate how power analysis can be used to justify and/or guide the choice of censoring thresholds in study design. A SAS macro is provided in the Appendix for readers' reference.
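The CZTP likelihood combines zero truncation (Y > 0) with right censoring: an uncensored count contributes the truncated pmf, a censored cell contributes the truncated tail probability. A minimal sketch in Python, assuming a single truncated Poisson mean with no covariates and a crude grid search in place of the authors' SAS-based estimation; the data and censoring threshold are invented for illustration:

```python
import math

def ztp_pmf(y, lam):
    """Zero-truncated Poisson pmf P(Y = y | Y > 0), y >= 1."""
    log_p = -lam + y * math.log(lam) - math.lgamma(y + 1)
    return math.exp(log_p) / (1.0 - math.exp(-lam))

def ztp_tail(c, lam):
    """P(Y >= c | Y > 0): the contribution of an observation censored at c."""
    return 1.0 - sum(ztp_pmf(y, lam) for y in range(1, c))

def neg_loglik(lam, exact, censored_at):
    """Negative log-likelihood: exact counts plus right-censored cells."""
    ll = sum(math.log(ztp_pmf(y, lam)) for y in exact)
    ll += sum(math.log(ztp_tail(c, lam)) for c in censored_at)
    return -ll

# Invented data: seven fully observed counts, two cells known only as ">= 5".
exact = [1, 2, 1, 3, 2, 1, 4]
censored_at = [5, 5]
grid = [0.1 * k for k in range(1, 100)]
lam_hat = min(grid, key=lambda lam: neg_loglik(lam, exact, censored_at))
```

With covariates, the single mean would be replaced by the log-linear predictor log λ = x'β and the grid search by Newton-type maximization of the same likelihood.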

9. Simulation on Poisson and negative binomial models of count road accident modeling

Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

2016-11-01

Accident count data have often been shown to exhibit overdispersion, and the data may also contain excess zeros. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the dependent variable of the generated data following a given distribution, namely the Poisson or negative binomial distribution, for sample sizes from n = 30 to n = 500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. Model validation was compared across models, and the simulation results show that, for each sample size, not every model fits the data well even when the data are generated from that model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes yield more zero accident counts in the dataset.
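A scaled-down version of such a simulation can be sketched in pure Python: draw Poisson counts with Knuth's algorithm at several sample sizes and inspect the dispersion index (variance/mean), the quantity that separates the candidate models above. The seed and rate are arbitrary choices for illustration:

```python
import math
import random
import statistics

def rpois(lam, rng):
    """Knuth's multiplicative method for one Poisson(lam) draw."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(42)
lam = 3.0
for n in (30, 100, 500):
    sample = [rpois(lam, rng) for _ in range(n)]
    m = statistics.mean(sample)
    # Dispersion index: ~1 under Poisson; substantially >1 flags overdispersion.
    ratio = statistics.variance(sample) / m
```

For equidispersed data like this, a dispersion index near 1 supports the plain Poisson model; simulated negative binomial counts would push the ratio well above 1.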

10. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

PubMed

Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

2016-01-07

The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
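As a much simpler illustration of the iterative idea (not the paper's preconditioned solver), an unpreconditioned conjugate gradient loop applied to the one-dimensional constant-coefficient discrete Poisson equation with Dirichlet boundary conditions looks like this; the grid size and right-hand side are chosen so the exact solution sin(πx) is known:

```python
import math

def apply_laplacian(u, h2):
    """Apply the 1-D discrete Laplacian with homogeneous Dirichlet BCs."""
    n = len(u)
    return [(2*u[i]
             - (u[i-1] if i > 0 else 0.0)
             - (u[i+1] if i < n - 1 else 0.0)) / h2
            for i in range(n)]

def conjugate_gradient(b, h2, tol=1e-12, maxit=500):
    """Unpreconditioned CG for the SPD system A u = b."""
    x = [0.0] * len(b)
    r = b[:]                      # residual, starting from x = 0
    p = r[:]                      # search direction
    rs = sum(ri*ri for ri in r)
    for _ in range(maxit):
        Ap = apply_laplacian(p, h2)
        alpha = rs / sum(pi*api for pi, api in zip(p, Ap))
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*api for ri, api in zip(r, Ap)]
        rs_new = sum(ri*ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new/rs)*pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

m = 63                            # interior grid points
h = 1.0 / (m + 1)
# -u'' = pi^2 sin(pi x) on (0, 1) has exact solution u(x) = sin(pi x).
b = [math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(m)]
u = conjugate_gradient(b, h*h)
err = max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(m))
```

Preconditioning, in the spirit of the abstract, would replace the plain residual in the direction update with the result of an inexpensive approximate Poisson solve, cutting the iteration count dramatically for variable-coefficient problems.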

11. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

SciTech Connect

Fisicaro, G.; Goedecker, S.; Genovese, L.; Andreussi, O.; Marzari, N.

2016-01-07

The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

12. The solution of large multi-dimensional Poisson problems

NASA Technical Reports Server (NTRS)

Stone, H. S.

1974-01-01

The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.

13. A Duflo Star Product for Poisson Groups

2016-09-01

Let G be a finite-dimensional Poisson algebraic, Lie, or formal group. We show that the center of the quantization of G provided by an Etingof-Kazhdan functor is isomorphic as an algebra to the Poisson center of the algebra of functions on G. This recovers and generalizes Duflo's theorem, which gives an isomorphism between the center of the enveloping algebra of a finite-dimensional Lie algebra a and the subalgebra of ad-invariants in the symmetric algebra of a. As our proof relies on the Etingof-Kazhdan construction, it ultimately depends on the existence of Drinfeld associators; otherwise, it is a fairly simple application of graphical calculus. This sheds some light on the Alekseev-Torossian proof of the Kashiwara-Vergne conjecture, and on the relation observed by Bar-Natan, Le, and Thurston between the Duflo isomorphism and the Kontsevich integral of the unknot.

14. Regression: A Bibliography.

ERIC Educational Resources Information Center

Pedrini, D. T.; Pedrini, Bonnie C.

Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

15. A New Echeloned Poisson Series Processor (EPSP)

Ivanova, Tamara

2001-07-01

A specialized Echeloned Poisson Series Processor (EPSP) is proposed. It is typical software for implementing analytical algorithms of celestial mechanics. EPSP is designed for manipulating long polynomial-trigonometric series with literal divisors. The coefficients of these echeloned series are rational or floating-point numbers. A Keplerian processor and an analytical generator of special celestial-mechanics functions based on the EPSP have also been developed.

16. Poisson filtering of laser ranging data

NASA Technical Reports Server (NTRS)

Ricklefs, Randall L.; Shelus, Peter J.

1993-01-01

The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described.

17. Path Selection in a Poisson field

Cohen, Yossi; Rothman, Daniel H.

2016-11-01

A criterion for path selection for channels growing in a Poisson field is presented. We invoke a generalization of the principle of local symmetry. We then use this criterion to grow channels in a confined geometry. The channel trajectories reveal a self-similar shape as they reach steady state. Analyzing their paths, we identify a cause for branching that may result in a ramified structure in which the golden ratio appears.

18. Computation of solar perturbations with Poisson series

NASA Technical Reports Server (NTRS)

Broucke, R.

1974-01-01

Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.

19. Modelling of nonlinear filtering Poisson time series

Bochkarev, Vladimir V.; Belashova, Inna A.

2016-08-01

In this article, algorithms for non-linear filtering of Poisson time series are tested using statistical modelling. The objective is to find a representation of a time series as a wavelet series with a small number of non-zero coefficients, which allows statistically significant details to be distinguished. There are well-known efficient algorithms for non-linear wavelet filtering in the case when the values of a time series have a normal distribution. However, if the distribution is not normal, good results can be expected from maximum likelihood estimation. Filtering is studied according to the maximum likelihood criterion using the example of Poisson time series. For direct optimisation of the likelihood function, different stochastic (genetic algorithms, simulated annealing) and deterministic optimisation algorithms are used. Testing of the algorithms on both simulated series and empirical data (series of rare-word frequencies from the Google Books Ngram data) showed that filtering based on the maximum likelihood criterion has a great advantage over the well-known algorithms in the case of Poisson series. The most promising optimisation methods for this problem were also identified.

20. First- and second-order Poisson spots

Kelly, William R.; Shirley, Eric L.; Migdall, Alan L.; Polyakov, Sergey V.; Hendrix, Kurt

2009-08-01

Although Thomas Young is generally given credit for being the first to provide evidence against Newton's corpuscular theory of light, it was Augustin Fresnel who first stated the modern theory of diffraction. We review the history surrounding Fresnel's 1818 paper and the role of the Poisson spot in the associated controversy. We next discuss the boundary-diffraction-wave approach to calculating diffraction effects and show how it can reduce the complexity of calculating diffraction patterns. We briefly discuss a generalization of this approach that reduces the dimensionality of integrals needed to calculate the complete diffraction pattern of any order diffraction effect. We repeat earlier demonstrations of the conventional Poisson spot and discuss an experimental setup for demonstrating an analogous phenomenon that we call a "second-order Poisson spot." Several features of the diffraction pattern can be explained simply by considering the path lengths of singly and doubly bent paths and distinguishing between first- and second-order diffraction effects related to such paths, respectively.

1. Poisson's ratio over two centuries: challenging hypotheses

PubMed Central

Greaves, G. Neville

2013-01-01

This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094

2. On the Singularity of the Vlasov-Poisson System

SciTech Connect

Qin, Hong; Zheng, Jian

2013-04-26

The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

3. On the singularity of the Vlasov-Poisson system

SciTech Connect

Zheng, Jian; Qin, Hong

2013-09-15

The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

4. Nonlocal Poisson-Fermi model for ionic solvent.

PubMed

Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

2016-07-01

We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

5. Nonlocal Poisson-Fermi model for ionic solvent

Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

2016-07-01

We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

6. Rank regression: an alternative regression approach for data with outliers.

PubMed

Chen, Tian; Tang, Wan; Lu, Ying; Tu, Xin

2014-10-01

Linear regression models are widely used in mental health and related health services research. However, the classic linear regression analysis assumes that the data are normally distributed, an assumption that is not met by the data obtained in many studies. One method of dealing with this problem is to use semi-parametric models, which do not require that the data be normally distributed. But semi-parametric models are quite sensitive to outlying observations, so the generated estimates are unreliable when study data includes outliers. In this situation, some researchers trim the extreme values prior to conducting the analysis, but the ad-hoc rules used for data trimming are based on subjective criteria so different methods of adjustment can yield different results. Rank regression provides a more objective approach to dealing with non-normal data that includes outliers. This paper uses simulated and real data to illustrate this useful regression approach for dealing with outliers and compares it to the results generated using classical regression models and semi-parametric regression models.
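One concrete rank-based robust estimator, the Theil-Sen slope (chosen here purely for illustration; the paper's rank regression method may differ), shows the contrast with ordinary least squares on data containing a single gross outlier:

```python
import statistics

def ols_slope(x, y):
    """Ordinary least-squares slope: cov(x, y) / var(x)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def theil_sen_slope(x, y):
    """Median of all pairwise slopes: robust to a minority of outliers."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    return statistics.median(slopes)

x = list(range(10))
y = [2.0 * xi + 1.0 for xi in x]   # true slope is 2
y[9] = 100.0                       # one gross outlier
```

The median of pairwise slopes ignores the outlier entirely and still returns 2, while the least-squares slope is pulled far from the true value.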

7. The Poisson model limits in NBA basketball: Complexity in team sports

Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa

2016-12-01

Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process, one that can be described by the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a main role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. Most numbers of baskets (n) over a time interval (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution covers most baskets in any game and in most game situations, but in close games in the last minute the numbers of events follow a power law. The number of events can be fitted by a mixture of the two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose using the Poisson model as a reference; the complex dynamics emerge at the limits of this model.


10. The transverse Poisson's ratio of composites.

NASA Technical Reports Server (NTRS)

Foye, R. L.

1972-01-01

An expression is developed that makes possible the prediction of Poisson's ratio for unidirectional composites with reference to any pair of orthogonal axes that are normal to the direction of the reinforcing fibers. This prediction appears to be a reasonable one in that it follows the trends of the finite element analysis and the bounding estimates, and has the correct limiting value for zero fiber content. It can only be expected to apply to composites containing stiff, circular, isotropic fibers bonded to a soft matrix material.

11. Testing the ratio of two poisson rates.

PubMed

Gu, Kangxia; Ng, Hon Keung Tony; Tang, Man Lai; Schucany, William R

2008-04-01

In this paper we compare the properties of four different general approaches for testing the ratio of two Poisson rates. Asymptotically normal tests, tests based on approximate p-values, exact conditional tests, and a likelihood ratio test are considered. The properties and power performance of these tests are studied by a Monte Carlo simulation experiment. Sample size calculation formulae are given for each of the test procedures and their validities are studied. Some recommendations favoring the likelihood ratio and certain asymptotic tests are made on the basis of these simulation results. Finally, all of the test procedures are illustrated with two real-life medical examples.
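The exact conditional test among the four approaches rests on a standard fact: given the total n = X1 + X2, X1 is Binomial(n, p0), with p0 determined by the hypothesized rate ratio and the exposure times. A minimal sketch follows; the two-sided p-value convention used (summing the probability of outcomes no more likely than the one observed) is one of several in use:

```python
import math

def binom_pmf(k, n, p):
    """Binomial(n, p) probability mass at k."""
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def poisson_ratio_test(x1, t1, x2, t2, rho0=1.0):
    """Exact conditional test of H0: lam1 / lam2 = rho0 with exposures t1, t2.
    Conditional on n = x1 + x2, X1 ~ Binomial(n, p0)."""
    n = x1 + x2
    p0 = rho0 * t1 / (rho0 * t1 + t2)
    obs = binom_pmf(x1, n, p0)
    # Two-sided p-value: total probability of outcomes no more likely than observed.
    pval = sum(binom_pmf(k, n, p0) for k in range(n + 1)
               if binom_pmf(k, n, p0) <= obs + 1e-12)
    return min(1.0, pval)

# 10 events vs 2 events, equal unit exposures:
p = poisson_ratio_test(10, 1.0, 2, 1.0)   # ~0.0386
```

Conditioning removes the nuisance parameter (the common baseline rate), which is exactly why this test is exact for any sample size.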

12. Ridge Regression: A Panacea?

ERIC Educational Resources Information Center

Walton, Joseph M.; And Others

1978-01-01

Ridge regression is an approach to the problem of large standard errors of regression estimates of intercorrelated regressors. The effect of ridge regression on the estimated squared multiple correlation coefficient is discussed and illustrated. (JKS)
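The effect described can be seen directly from the ridge normal equations (X'X + kI)β = X'y. A small sketch with two nearly collinear regressors and invented data: the unpenalized (k = 0) solution has large, opposite-signed coefficients, while a modest penalty yields a stable compromise:

```python
def ridge_2d(X, y, k):
    """Closed-form ridge for two centered regressors:
    solves (X'X + k I) beta = X'y via an explicit 2x2 solve."""
    a = sum(r[0] * r[0] for r in X) + k
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X) + k
    g0 = sum(r[0] * yi for r, yi in zip(X, y))
    g1 = sum(r[1] * yi for r, yi in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Invented data: x2 is x1 plus tiny perturbations, y ~ x1 + x2 with noise.
X = [(1.0, 1.01), (2.0, 1.99), (3.0, 3.02),
     (-1.0, -0.98), (-2.0, -2.01), (-3.0, -3.03)]
y = [2.0, 4.1, 5.9, -2.1, -3.9, -6.1]
ols = ridge_2d(X, y, 0.0)   # unstable: large, opposite-signed coefficients
rid = ridge_2d(X, y, 1.0)   # shrunk toward a stable (~1, ~1) solution
```

The penalty inflates the near-singular X'X diagonal, which is precisely the mechanism by which ridge trades a little bias for a large reduction in the standard errors discussed above.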

13. Trends in age-adjusted coronary heart disease mortality rates in Slovakia between 1993 and 2009.

PubMed

Psota, Marek; Pekarciková, Jarmila; O'Mullane, Monica; Rusnák, Martin

2013-06-01

Cardiovascular diseases (CVD) and especially coronary heart disease (CHD) are the main causes of death in the Slovak Republic (SR). The aim of this study is to explore trends in age-adjusted coronary heart disease mortality rates in the whole Slovak population and in the population of working age between the years 1993 and 2009. A related indicator, potential years of life lost (PYLL) due to CHD, was calculated for the same period for males and females. Crude CHD mortality rates were age-adjusted using the European standard population. Joinpoint Poisson regression was performed in order to find the annual percentage change in trends. The age-adjusted CHD mortality rates decreased in the Slovak population and also in the population of working age. The change was significant only within the working-age subgroup. We found that the partial diagnoses (myocardial infarction and chronic ischaemic heart disease) developed in a mirror-like manner. PYLL per 100,000 decreased during the observed period, and the decline was more prominent in males. For further research we recommend focusing on several other issues, namely, examining the validity of cause-of-death codes, examining the development of mortality rates in selected age groups, finding the cause of the differential development of mortality rates in the Slovak Republic in comparison with the Czech Republic and Poland, and explaining the causes of the decrease in age-adjusted CHD mortality rates in younger age groups in Slovakia.
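The age adjustment used here is direct standardization: weight each age-specific rate by a standard population so that populations with different age structures become comparable. A sketch with invented stratum counts and weights loosely patterned on a subset of the European standard population (all numbers are hypothetical):

```python
strata = [
    # (age group, deaths, population, standard-population weight) -- invented
    ("45-54",  120,  700_000, 70_000),
    ("55-64",  340,  550_000, 60_000),
    ("65-74",  760,  380_000, 50_000),
    ("75+",   1500,  210_000, 40_000),
]

# Crude rate per 100,000: total deaths over total population.
crude = (sum(d for _, d, _, _ in strata)
         / sum(p for _, _, p, _ in strata) * 100_000)

# Direct standardization: average age-specific rates with standard weights.
weighted = sum(d / p * w for _, d, p, w in strata)
age_adjusted = weighted / sum(w for _, _, _, w in strata) * 100_000
```

Because the standard weights put relatively more mass on the older, higher-rate strata than this invented population does, the adjusted rate comes out above the crude rate; comparing adjusted rates across years is what makes the trend analysis age-structure-free.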

14. Age-adjusted mortality and its association to variations in urban conditions in Shanghai.

PubMed

Takano, Takehito; Fu, Jia; Nakamura, Keiko; Uji, Kazuyuki; Fukuda, Yoshiharu; Watanabe, Masafumi; Nakajima, Hiroshi

2002-09-01

The objective of this study was to explore the association between health and urbanization in a megacity, Shanghai, by calculating age-adjusted mortality ratios for each ward-unit of Shanghai and by examining relationships between mortality and urban indicators. Crude mortality rates and age-adjusted mortality ratios by ward-unit were calculated. Demographic, residential-environment, healthcare, and socioeconomic indicators were compiled for each ward-unit for the period between 1995 and 1998. Correlation and Poisson regression analyses were performed to examine the association between urban indicators and mortality. The crude mortality rate by ward-unit in 1997 varied from 6.3 to 9.4 deaths per 1000 population. The age-adjusted mortality ratio in 1997 by ward-unit, referenced to the average mortality of urban China, varied from 57.8 to 113.3 within Shanghai. Age-adjusted mortality was inversely related to larger floor space of dwellings per population; a larger proportion of parks, gardens, and green areas in total land area; a greater number of health professionals per population; and a greater number of employees in retail business per population. Spacious living showed an independent association with a higher standard of community health in Shanghai (P < 0.05). Consequences for health policy and the development of urban infrastructural resources from the viewpoint of the Healthy Cities concept are discussed.

15. A Poisson model for random multigraphs

PubMed Central

Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth

2010-01-01

Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature-curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
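Under this model the edge count between nodes i and j is Poisson with mean p_i p_j, so the score equations require p_i Σ_{j≠i} p_j = Σ_{j≠i} x_ij. One plausible fixed-point scheme for the MLE (a sketch, not necessarily the authors' algorithm) simply alternates these updates; the edge-count matrix below is invented to be consistent with propensities (2, 1, 1, 3):

```python
def fit_propensities(x, iters=200):
    """x[i][j]: observed edge count between nodes i and j (symmetric, zero diagonal).
    Alternating MLE updates for X_ij ~ Poisson(p_i * p_j):
        p_i <- (sum_{j != i} x_ij) / (sum_{j != i} p_j)
    """
    n = len(x)
    p = [1.0] * n
    for _ in range(iters):
        for i in range(n):
            denom = sum(p[j] for j in range(n) if j != i)
            p[i] = sum(x[i][j] for j in range(n) if j != i) / denom
    return p

# Invented counts whose pairwise products match propensities (2, 1, 1, 3).
x = [[0, 2, 2, 6],
     [2, 0, 1, 3],
     [2, 1, 0, 3],
     [6, 3, 3, 0]]
p = fit_propensities(x)
expected = [[p[i] * p[j] for j in range(4)] for i in range(4)]
```

Once the propensities are fitted, comparing observed counts x_ij with the fitted means p_i p_j is what flags functionally connected node pairs with an excess of edges.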

16. An empirical Bayesian and Buhlmann approach with non-homogenous Poisson process

Noviyanti, Lienda

2015-12-01

All general insurance companies in Indonesia have to adjust their current premium rates to fall within the maximum and minimum limit rates in the new regulation established by the Financial Services Authority (Otoritas Jasa Keuangan / OJK). In this research, we estimated premium rates by means of the Bayesian and the Buhlmann approaches, using historical claim frequency and claim severity for five risk groups. We assumed a Poisson-distributed claim frequency and a normally distributed claim severity. In particular, we used a non-homogeneous Poisson process for estimating the parameters of claim frequency. We found that the estimated premium rates are higher than the actual current rates. With regard to the OJK upper and lower limit rates, the estimates vary across the five risk groups; some fall within the interval and some fall outside it.
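The Buhlmann side of such an analysis has a compact closed form: each group's premium is a credibility-weighted average of its own mean and the grand mean, Z·X̄_i + (1 − Z)·X̄, with Z = n/(n + s²/a), where s² is the expected within-group variance and a the variance of the group means. A sketch with invented claim-frequency data for five risk groups (note that the between-group variance estimator can go negative in general, which this sketch does not guard against):

```python
import statistics

# Hypothetical annual claim frequencies for five risk groups over 4 years.
groups = [
    [2, 3, 2, 3],
    [5, 4, 6, 5],
    [1, 1, 0, 2],
    [8, 7, 9, 8],
    [3, 4, 3, 2],
]
n = len(groups[0])                          # years of experience per group
means = [statistics.mean(g) for g in groups]
grand = statistics.mean(means)

# Standard empirical Buhlmann structure estimators:
s2 = statistics.mean(statistics.variance(g) for g in groups)  # within-group
a = statistics.variance(means) - s2 / n                       # between-group
Z = n / (n + s2 / a)                                          # credibility factor
premiums = [Z * m + (1 - Z) * grand for m in means]
```

With stable within-group experience, Z is close to 1 and each group's premium stays near its own mean; noisier experience shrinks every group harder toward the grand mean.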

17. Non-Poisson Processes: Regression to Equilibrium Versus Equilibrium Correlation Functions

DTIC Science & Technology

2007-11-02

…dichotomous fluctuations that generate super-diffusion. We adopt the Liouville perspective and with it a quantum-like approach based on splitting the density…

18. A Method of Poisson's Ratio Imaging Within a Material Part

NASA Technical Reports Server (NTRS)

Roth, Don J. (Inventor)

1994-01-01

The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.

19. Method of Poisson's ratio imaging within a material part

NASA Technical Reports Server (NTRS)

Roth, Don J. (Inventor)

1996-01-01

The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.

20. Study of non-Hodgkin's lymphoma mortality associated with industrial pollution in Spain, using Poisson models

PubMed Central

Ramis, Rebeca; Vidal, Enrique; García-Pérez, Javier; Lope, Virginia; Aragonés, Nuria; Pérez-Gómez, Beatriz; Pollán, Marina; López-Abente, Gonzalo

2009-01-01

Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson Regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (>2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of industrial pollution

DOEpatents

Harry, H.H.

1988-03-11

Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus. 3 figs.

DOEpatents

Harry, Herbert H.

1989-01-01

Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

3. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

PubMed

Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

2014-12-26

Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.

4. A Cartesian grid embedded boundary method for Poisson's equation on irregular domains

SciTech Connect

Johansen, H.; Colella, P.

1997-01-31

The authors present a numerical method for solving Poisson's equation, with variable coefficients and Dirichlet boundary conditions, on two-dimensional regions. The approach uses a finite-volume discretization, which embeds the domain in a regular Cartesian grid. They treat the solution as a cell-centered quantity, even when those centers are outside the domain. Cells that contain a portion of the domain boundary use conservation differencing of second-order accurate fluxes on each cell volume. The calculation of the boundary flux ensures that the conditioning of the matrix is relatively unaffected by small cell volumes. This allows them to use multi-grid iterations with a simple point relaxation strategy. They have combined this with an adaptive mesh refinement (AMR) procedure. They provide evidence that the algorithm is second-order accurate on various exact solutions, and compare the adaptive and non-adaptive calculations.
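
The embedded-boundary machinery is involved, but the underlying cell-centered discretization can be illustrated on a plain uniform grid. A minimal sketch (standard 5-point stencil with Jacobi iteration, no embedded boundaries or AMR), checked against a known exact solution:

```python
import numpy as np

# Solve the Poisson equation  laplacian(u) = f  on the unit square with
# homogeneous Dirichlet boundary conditions, using the 5-point stencil.
# Exact solution u = sin(pi x) sin(pi y), so f = -2 pi^2 u.
n = 33
h = 1.0 / (n - 1)
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
exact = np.sin(np.pi * xx) * np.sin(np.pi * yy)
f = -2.0 * np.pi**2 * exact

u = np.zeros((n, n))  # boundary rows/columns stay zero (Dirichlet BC)
for _ in range(4000):
    # Jacobi sweep: the right-hand side uses only the previous iterate,
    # since numpy evaluates it fully before assigning.
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2] - h**2 * f[1:-1, 1:-1])

err = np.max(np.abs(u - exact))
print(err)  # O(h^2) discretization error for this smooth solution
```

The paper's contribution is precisely what this sketch omits: handling cut cells at an irregular boundary without ruining the conditioning, and accelerating with multigrid and AMR.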

5. Poisson cohomology of scalar multidimensional Dubrovin-Novikov brackets

Carlet, Guido; Casati, Matteo; Shadrin, Sergey

2017-04-01

We compute the Poisson cohomology of a scalar Poisson bracket of Dubrovin-Novikov type with D independent variables. We find that the second and third cohomology groups are generically non-vanishing in D > 1. Hence, in contrast with the D = 1 case, the deformation theory in the multivariable case is non-trivial.

6. LETTER TO THE EDITOR: New generalized Poisson structures

de Azcárraga, J. A.; Perelomov, A. M.; Pérez Bueno, J. C.

1996-04-01

New generalized Poisson structures are introduced by using suitable skew-symmetric contravariant tensors of even order. The corresponding 'Jacobi identities' are provided by conditions on these tensors, which may be understood as cocycle conditions. As an example, we provide the linear generalized Poisson structures which can be constructed on the dual spaces of simple Lie algebras.

7. The Schouten-Nijenhuis bracket, cohomology and generalized Poisson structures

de Azcárraga, J. A.; Perelomov, A. M.; Pérez Bueno, J. C.

1996-12-01

Newly introduced generalized Poisson structures based on suitable skew-symmetric contravariant tensors of even order are discussed in terms of the Schouten-Nijenhuis bracket. The associated 'Jacobi identities' are expressed as conditions on these tensors, the cohomological content of which is given. In particular, we determine the linear generalized Poisson structures which can be constructed on the dual spaces of simple Lie algebras.

8. Low porosity metallic periodic structures with negative Poisson's ratio.

PubMed

Taylor, Michael; Francesconi, Luca; Gerendás, Miklós; Shanian, Ali; Carson, Carl; Bertoldi, Katia

2014-04-16

Auxetic behavior in low porosity metallic structures is demonstrated via a simple system of orthogonal elliptical voids. In this minimal 2D system, the Poisson's ratio can be effectively controlled by changing the aspect ratio of the voids. In this way, large negative values of Poisson's ratio can be achieved, indicating an effective strategy for designing auxetic structures with desired porosity.

9. Extreme values of the Poisson's ratio of cubic crystals

Epishin, A. I.; Lisovenko, D. S.

2016-10-01

The problem of determining the extrema of Poisson's ratio for cubic crystals is considered, and analytical expressions are derived to calculate its extreme values. It follows from the obtained solution that, apart from extreme values at standard orientations, extreme values of Poisson's ratio can also be detected at special orientations deviated from the standard ones. The derived analytical expressions are used to calculate the extreme values of Poisson's ratio for a large number of known cubic crystals. The extremely high values of Poisson's ratio are shown to be characteristic of metastable crystals, such as crystals with the shape memory effect caused by martensitic transformation. These crystals are mainly represented by metallic alloys. For some crystals, the absolute extrema of Poisson's ratio can exceed the standard values, which are -1 for a standard minimum and +2 for a standard maximum.

10. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

NASA Technical Reports Server (NTRS)

Lakes, R.

1991-01-01

Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

11. Regressive systemic sclerosis.

PubMed Central

Black, C; Dieppe, P; Huskisson, T; Hart, F D

1986-01-01

Systemic sclerosis is a disease which usually progresses or reaches a plateau with persistence of symptoms and signs. Regression is extremely unusual. Four cases of established scleroderma are described in which regression is well documented. The significance of this observation and possible mechanisms of disease regression are discussed. PMID:3718012

12. NCCS Regression Test Harness

SciTech Connect

Tharrington, Arnold N.

2015-09-09

The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.
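
The core pattern such a harness automates is running a job and diffing its output against a stored baseline. A minimal, hypothetical sketch of that pattern (not the NCCS package's actual API, which adds scheduling, Subversion-backed test storage, and HPC batch submission):

```python
import subprocess
import tempfile
from pathlib import Path

def run_regression_test(cmd, baseline_file):
    """Run a command and report whether its stdout matches a stored baseline."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout == Path(baseline_file).read_text()

# Record a baseline once; later runs detect any change in output.
baseline = Path(tempfile.mkdtemp()) / "hello.baseline"
baseline.write_text("hello\n")
print(run_regression_test(["echo", "hello"], baseline))  # True
```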

13. Unitary Response Regression Models

ERIC Educational Resources Information Center

Lipovetsky, S.

2007-01-01

The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

14. Poisson-Boltzmann-Nernst-Planck model

SciTech Connect

Zheng Qiong; Wei Guowei

2011-05-21

The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external
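
In standard textbook notation (not necessarily the authors' symbols), the reduction described above replaces per-species transport equations with equilibrium closures:

```latex
% Coupled PNP system: one Nernst-Planck equation per ion species \alpha
\nabla\cdot\bigl(\epsilon(\mathbf{r})\,\nabla\phi\bigr)
  = -\rho_{\mathrm{fixed}} - \sum_\alpha q_\alpha c_\alpha,
\qquad
\frac{\partial c_\alpha}{\partial t}
  = \nabla\cdot D_\alpha\!\left(\nabla c_\alpha
      + \frac{q_\alpha c_\alpha}{k_B T}\,\nabla\phi\right).

% PBNP closure: species treated as equilibrated follow a Boltzmann
% distribution, so their Nernst-Planck equations drop out:
c_\beta = c_\beta^{\infty}\exp\!\left(-\frac{q_\beta\phi}{k_B T}\right).
```

Only the species that must be tracked dynamically keep a Nernst-Planck equation, which is what reduces the equation count and the number of diffusion profiles needed.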

15. Poisson-Boltzmann-Nernst-Planck model

Zheng, Qiong; Wei, Guo-Wei

2011-05-01

The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

16. Weighted hurdle regression method for joint modeling of cardiovascular events likelihood and rate in the US dialysis population.

PubMed

Sentürk, Damla; Dalrymple, Lorien S; Mu, Yi; Nguyen, Danh V

2014-11-10

We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship/association between covariates and (i) the probability of cardiovascular events, a binary process, and (ii) the rate of events once the realization is positive-when the 'hurdle' is crossed-using a zero-truncated Poisson distribution. When the observation period or follow-up time, from the start of dialysis, varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, then the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation for the weighted hurdle regression model is a weighted likelihood approach, where standard maximum likelihood estimation can be utilized. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting.
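
Setting aside the weighting for differential follow-up, the basic hurdle structure separates "any event" from "how many, given at least one". A sketch with an intercept-only model and simulated data (all parameter values are hypothetical):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Simulate hurdle data: cross the hurdle with probability p, then draw
# a zero-truncated Poisson(lam) count (rejection sampling of zeros).
p_true, lam_true, n = 0.4, 2.5, 20000
crossed = rng.random(n) < p_true
y = np.zeros(n, dtype=int)
for i in np.where(crossed)[0]:
    d = 0
    while d == 0:
        d = rng.poisson(lam_true)
    y[i] = d

# MLE of the hurdle probability is the observed fraction of positives.
p_hat = np.mean(y > 0)

# For the zero-truncated part, the MLE of lam solves
#   lam / (1 - exp(-lam)) = mean(y | y > 0).
m = y[y > 0].mean()
lam_hat = brentq(lambda l: l / (1 - np.exp(-l)) - m, 1e-6, 50.0)
print(p_hat, lam_hat)
```

The proposed method generalizes this by adding covariates to both parts and weighting the likelihood to correct the bias from unequal follow-up times.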

17. Fully Regressive Melanoma

PubMed Central

Ehrsam, Eric; Kallini, Joseph R.; Lebas, Damien; Modiano, Philippe; Cotten, Hervé

2016-01-01

Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of host immune response. Although 10 to 35 percent of cases of cutaneous melanomas may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis in a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418

18. The Poisson-Helmholtz-Boltzmann model.

PubMed

Bohinc, K; Shrestha, A; May, S

2011-10-01

We present a mean-field model of a one-component electrolyte solution where the mobile ions interact not only via Coulomb interactions but also through a repulsive non-electrostatic Yukawa potential. Our choice of the Yukawa potential represents a simple model for solvent-mediated interactions between ions. We employ a local formulation of the mean-field free energy through the use of two auxiliary potentials, an electrostatic and a non-electrostatic potential. Functional minimization of the mean-field free energy leads to two coupled local differential equations, the Poisson-Boltzmann equation and the Helmholtz-Boltzmann equation. Their boundary conditions account for the sources of both the electrostatic and non-electrostatic interactions on the surface of all macroions that reside in the solution. We analyze a specific example, two like-charged planar surfaces with their mobile counterions forming the electrolyte solution. For this system we calculate the pressure between the two surfaces, and we analyze its dependence on the strength of the Yukawa potential and on the non-electrostatic interactions of the mobile ions with the planar macroion surfaces. In addition, we demonstrate that our mean-field model is consistent with the contact theorem, and we outline its generalization to arbitrary interaction potentials through the use of a Laplace transformation.

19. Generalized HPC method for the Poisson equation

Bardazzi, A.; Lugni, C.; Antuono, M.; Graziani, G.; Faltinsen, O. M.

2015-10-01

An efficient and innovative numerical algorithm based on the use of Harmonic Polynomials on each Cell of the computational domain (HPC method) has been recently proposed by Shao and Faltinsen (2014) [1], to solve boundary value problems governed by the Laplace equation. Here, we extend the HPC method for the solution of non-homogeneous elliptic boundary value problems. The homogeneous solution, i.e. the Laplace equation, is represented through a polynomial function with harmonic polynomials while the particular solution of the Poisson equation is provided by a bi-quadratic function. This scheme has been called generalized HPC method. The present algorithm, accurate up to the 4th order, proved to be efficient, i.e. easy to implement and with a low computational effort, for the solution of two-dimensional elliptic boundary value problems. Furthermore, it provides an analytical representation of the solution within each computational stencil, which allows its coupling with existing numerical algorithms within an efficient domain-decomposition strategy or within an adaptive mesh refinement algorithm.

20. Integer lattice dynamics for Vlasov-Poisson

Mocz, Philip; Succi, Sauro

2017-03-01

We revisit the integer lattice (IL) method to numerically solve the Vlasov-Poisson equations, and show that a slight variant of the method is a very easy, viable, and efficient numerical approach to study the dynamics of self-gravitating, collisionless systems. The distribution function lives in a discretized lattice phase-space, and each time-step in the simulation corresponds to a simple permutation of the lattice sites. Hence, the method is Lagrangian, conservative, and fully time-reversible. IL complements other existing methods, such as N-body/particle mesh (computationally efficient, but affected by Monte Carlo sampling noise and two-body relaxation) and finite volume (FV) direct integration schemes (expensive, accurate but diffusive). We also present improvements to the FV scheme, using a moving-mesh approach inspired by IL, to reduce numerical diffusion and the time-step criterion. Being a direct integration scheme like FV, IL is memory limited (memory requirement for a full 3D problem scales as N^6, where N is the resolution per linear phase-space dimension). However, we describe a new technique for achieving N^4 scaling. The method offers promise for investigating the full 6D phase-space of collisionless systems of stars and dark matter.

1. Causal Poisson bracket via deformation quantization

Berra-Montiel, Jasel; Molgado, Alberto; Palacios-García, César D.

2016-06-01

Starting with the well-defined product of quantum fields at two spacetime points, we explore an associated Poisson structure for classical field theories within the deformation quantization formalism. We realize that the induced star-product is naturally related to the standard Moyal product through an appropriate causal Green’s functions connecting points in the space of classical solutions to the equations of motion. Our results resemble the Peierls-DeWitt bracket that has been analyzed in the multisymplectic context. Once our star-product is defined, we are able to apply the Wigner-Weyl map in order to introduce a generalized version of Wick’s theorem. Finally, we include some examples to explicitly test our method: the real scalar field, the bosonic string and a physically motivated nonlinear particle model. For the field theoretic models, we have encountered causal generalizations of the creation/annihilation relations, and also a causal generalization of the Virasoro algebra for the bosonic string. For the nonlinear particle case, we use the approximate solution in terms of the Green’s function, in order to construct a well-behaved causal bracket.

2. Sign-tunable Poisson's ratio in semi-fluorinated graphene.

PubMed

Qin, Rui; Zheng, Jiaxin; Zhu, Wenjun

2017-01-07

Poisson's ratio is a fundamental property of a material which reflects the transverse strain response to the applied axial strain. Negative Poisson's ratio is allowed theoretically, but is rare in nature. Besides the discovery and tailoring of bulk auxetic materials, recent studies have also found a negative Poisson's ratio in nanomaterials, while their negative Poisson's ratio is mainly based on conventional rigid mechanical models, as in bulk auxetic materials. In this work, we report the existence of in-plane negative Poisson's ratio in a two-dimensional convex structure of newly synthesized semi-fluorinated graphene by using first-principles calculations. In addition, the sign of the Poisson's ratio can be tuned by the applied strain. Interestingly, we find that this unconventional negative Poisson's ratio cannot be explained by conventional rigid mechanical models but originates from the enhanced bond angle strain over the bond strain due to chemical functionalization. This new mechanism of auxetics extends the scope of auxetic nanomaterials and can serve as design principles for future discovery and design of new auxetic materials.
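
For reference, the sign convention behind these statements: Poisson's ratio is the negative ratio of transverse to axial strain, so lateral expansion under tension yields a negative value. A one-line illustration (strain values are invented):

```python
def poissons_ratio(axial_strain, transverse_strain):
    """nu = -eps_transverse / eps_axial."""
    return -transverse_strain / axial_strain

# Conventional material: contracts laterally under tension -> nu > 0.
print(poissons_ratio(0.010, -0.003))
# Auxetic material: expands laterally under tension -> nu < 0.
print(poissons_ratio(0.010, 0.003))
```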

3. Model building in nonproportional hazard regression.

PubMed

2013-12-30

Recent developments of statistical methods allow for a very flexible modeling of covariates affecting survival times via the hazard rate, including also the inspection of possible time-dependent associations. Despite their immediate appeal in terms of flexibility, these models typically introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen, that is, for building the most suitable model for given data. This is particularly true when potentially time-varying associations are given. We propose to conduct a piecewise exponential representation of the original survival data to link hazard regression with estimation schemes based on the Poisson likelihood to make recent advances for model building in exponential family regression accessible also in the nonproportional hazard regression context. A two-stage stepwise selection approach, an approach based on doubly penalized likelihood, and a componentwise functional gradient descent approach are adapted to the piecewise exponential regression problem. These three techniques were compared via an intensive simulation study. An application to prognosis after discharge for patients who suffered a myocardial infarction supplements the simulation to demonstrate the pros and cons of the approaches in real data analyses.

4. Minimum risk wavelet shrinkage operator for Poisson image denoising.

PubMed

Cheng, Wu; Hirakawa, Keigo

2015-05-01

The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients; the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.
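
The paper's shrinkage operators build on the Skellam analysis of Haar coefficients; a simpler, widely used alternative (not the authors' method) is the Anscombe variance-stabilizing transform, which maps Poisson counts to roughly unit variance so Gaussian denoisers can be applied. A quick check of the stabilization:

```python
import numpy as np

rng = np.random.default_rng(1)

def anscombe(x):
    # Variance-stabilizing transform: for Poisson(lam) counts with lam
    # not too small, the transformed values have variance close to 1.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

for lam in (5.0, 20.0, 100.0):
    counts = rng.poisson(lam, size=200000)
    # Raw variance grows with lam; transformed variance stays near 1.
    print(lam, counts.var(), anscombe(counts).var())
```

Skellam-based shrinkage avoids the approximation error this transform incurs at very low counts, which is the regime the paper targets.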

5. Poisson's Ratios and Volume Changes for Plastically Orthotropic Material

NASA Technical Reports Server (NTRS)

Stowell, Elbridge Z; Pride, Richard A

1956-01-01

Measurements of Poisson's ratios have been made in three orthogonal directions on aluminum alloy blocks in compression and on stainless-steel sheet in both tension and compression. These measurements, as well as those obtained by density determinations, show that there is no permanent plastic change in volume within the accuracy of observation. A method is suggested whereby a correlation may be effected between the measured individual values of the Poisson's ratios and the stress-strain curves for the material. Allowance must be made for the difference in the stress-strain in tension and compression; this difference, wherever it appears, is accompanied by significant changes in the Poisson's ratios.

6. Future-singularity-free accelerating expansion with modified Poisson brackets

SciTech Connect

Kim, Wontae; Son, Edwin J.

2007-01-15

We show that the second accelerating expansion of the universe appears smoothly from the decelerating phase, which follows the initial inflation, in the two-dimensional soluble semiclassical dilaton gravity along with the modified Poisson brackets with noncommutativity between the relevant fields. This is in contrast to the fact that the ordinary solution of the equations of motion following from the conventional Poisson algebra describes a permanent accelerating universe without any phase change. In this modified model, it turns out that the noncommutative Poisson algebra is responsible for the remarkable phase transition to the second accelerating expansion.

7. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

PubMed

Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

2009-11-01

G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
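
Where closed-form power formulas are unavailable, or as a sanity check on them, power for a Poisson regression test can also be approximated by Monte Carlo simulation. A sketch for a single binary predictor, where the Wald statistic has a closed form from the group totals (effect sizes and sample sizes are invented for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def power_poisson_wald(n_per_group=100, lam0=1.0, rate_ratio=np.exp(0.3),
                       alpha=0.05, n_sim=400):
    # Two-group Poisson regression: log lam = b0 + b1*x with binary x.
    # The MLE of b1 is the log of the ratio of group means, with
    # delta-method SE sqrt(1/s1 + 1/s0) from the group totals.
    z_crit = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sim):
        y0 = rng.poisson(lam0, n_per_group)
        y1 = rng.poisson(lam0 * rate_ratio, n_per_group)
        s0, s1 = y0.sum(), y1.sum()
        if s0 == 0 or s1 == 0:
            continue  # degenerate sample, no finite estimate
        b1 = np.log((s1 / n_per_group) / (s0 / n_per_group))
        se = np.sqrt(1.0 / s1 + 1.0 / s0)
        if abs(b1 / se) > z_crit:
            rejections += 1
    return rejections / n_sim

print(power_poisson_wald())  # roughly 0.6 under these settings
```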

8. Comparison of regression methods for modeling intensive care length of stay.

PubMed

Verburg, Ilona W M; de Keizer, Nicolette F; de Jonge, Evert; Peek, Niels

2014-01-01

Intensive care units (ICUs) are increasingly interested in assessing and improving their performance. ICU Length of Stay (LoS) could be seen as an indicator for efficiency of care. However, little consensus exists on which prognostic method should be used to adjust ICU LoS for case-mix factors. This study compared the performance of different regression models when predicting ICU LoS. We included data from 32,667 unplanned ICU admissions to ICUs participating in the Dutch National Intensive Care Evaluation (NICE) in the year 2011. We predicted ICU LoS using eight regression models: ordinary least squares regression on untransformed ICU LoS, LoS truncated at 30 days, and log-transformed LoS; a generalized linear model with a Gaussian distribution and a logarithmic link function; Poisson regression; negative binomial regression; Gamma regression with a logarithmic link function; and the original and recalibrated APACHE IV model, for all patients together and for survivors and non-survivors separately. We assessed the predictive performance of the models using bootstrapping and the squared Pearson correlation coefficient (R2), root mean squared prediction error (RMSPE), mean absolute prediction error (MAPE) and bias. The distribution of ICU LoS was skewed to the right with a median of 1.7 days (interquartile range 0.8 to 4.0) and a mean of 4.2 days (standard deviation 7.9). The predictive performance of the models was between 0.09 and 0.20 for R2, between 7.28 and 8.74 days for RMSPE, between 3.00 and 4.42 days for MAPE and between -2.99 and 1.64 days for bias. The predictive performance was slightly better for survivors than for non-survivors. We were disappointed in the predictive performance of the regression models and conclude that it is difficult to predict LoS of unplanned ICU admissions using patient characteristics at admission time only.

9. Poisson noise obscures hypometabolic lesions in PET.

PubMed

Kerr, Wesley T; Lau, Edward P

2012-12-01

The technology of fluoro-deoxyglucose positron emission tomography (PET) has drastically increased our ability to visualize the metabolic process of numerous neurological diseases. The relationship between the methodological noise sources inherent to PET technology and the resulting noise in the reconstructed image is complex. In this study, we use Monte Carlo simulations to examine the effect of Poisson noise in the PET signal on the noise in reconstructed space for two pervasive reconstruction algorithms: the historical filtered back-projection (FBP) and the more modern expectation maximization (EM). We confirm previous observations that the image reconstructed with the FBP biases all intensity values toward the mean, likely due to spatial spreading of high intensity voxels. However, we demonstrate that in both algorithms the variance from high intensity voxels spreads to low intensity voxels and obliterates their signal to noise ratio. This finding has profound impacts on the clinical interpretation of hypometabolic lesions. Our results suggest that PET is relatively insensitive when it comes to detecting and quantifying changes in hypometabolic tissue. Further, the images reconstructed with EM visually match the original images more closely, but more detailed analysis reveals as much as a 40 percent decrease in the signal to noise ratio for high intensity voxels relative to the FBP. This suggests that even though the apparent spatial resolution of EM outperforms FBP, the signal to noise ratio of the intensity of each voxel may be higher in the FBP. Therefore, EM may be most appropriate for manual visualization of pathology, but FBP should be used when analyzing quantitative markers of the PET signal. This suggestion that different reconstruction algorithms should be used for quantification versus visualization represents a major paradigm shift in the analysis and interpretation of PET images.
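
The core statistical point, that low-count (hypometabolic) voxels carry intrinsically poorer signal-to-noise, follows directly from the Poisson property mean = variance, so SNR grows only as the square root of intensity. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(3)

# For Poisson counts, mean = variance = lam, so SNR = mean/std = sqrt(lam):
# a voxel with 100x fewer counts has only 10x lower SNR ceiling, but in
# absolute terms its signal is far easier to obscure.
for lam in (4.0, 100.0, 400.0):
    counts = rng.poisson(lam, size=100000)
    snr = counts.mean() / counts.std()
    print(lam, snr, np.sqrt(lam))  # empirical SNR tracks sqrt(lam)
```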

10. Bayesian time-series analysis of a repeated-measures poisson outcome with excess zeroes.

PubMed

Murphy, Terrence E; Van Ness, Peter H; Araujo, Katy L B; Pisani, Margaret A

2011-12-01

In this article, the authors demonstrate a time-series analysis based on a hierarchical Bayesian model of a Poisson outcome with an excessive number of zeroes. The motivating example for this analysis comes from the intensive care unit (ICU) of an urban university teaching hospital (New Haven, Connecticut, 2002-2004). Studies of medication use among older patients in the ICU are complicated by statistical factors such as an excessive number of zero doses, periodicity, and within-person autocorrelation. Whereas time-series techniques adjust for autocorrelation and periodicity in outcome measurements, Bayesian analysis provides greater precision for small samples and the flexibility to conduct posterior predictive simulations. By applying elements of time-series analysis within both frequentist and Bayesian frameworks, the authors evaluate differences in shift-based dosing of medication in a medical ICU. From a small sample and with adjustment for excess zeroes, linear trend, autocorrelation, and clinical covariates, both frequentist and Bayesian models provide evidence of a significant association between a specific nursing shift and dosing level of a sedative medication. Furthermore, the posterior distributions from a Bayesian random-effects Poisson model permit posterior predictive simulations of related results that are potentially difficult to model.

11. Negative Poisson's ratios for extreme states of matter

PubMed

Baughman; Dantas; Stafstrom; Zakhidov; Mitchell; Dubin

2000-06-16

Negative Poisson's ratios are predicted for body-centered-cubic phases that likely exist in white dwarf cores and neutron star outer crusts, as well as those found for vacuumlike ion crystals, plasma dust crystals, and colloidal crystals (including certain virus crystals). The existence of this counterintuitive property, which means that a material laterally expands when stretched, is experimentally demonstrated for very low density crystals of trapped ions. At very high densities, the large predicted negative and positive Poisson's ratios might be important for understanding the asteroseismology of neutron stars and white dwarfs and the effect of stellar stresses on nuclear reaction rates. Giant Poisson's ratios are both predicted and observed for highly strained coulombic photonic crystals, suggesting possible applications of large, tunable Poisson's ratios for photonic crystal devices.

12. Information transmission using non-poisson regular firing.

PubMed

Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

2013-04-01

In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
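
The regularity effect can be made concrete with a gamma renewal process, a standard stand-in for non-Poisson spiking (the shape parameters below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# A gamma renewal process with shape k has interspike-interval
# CV = 1/sqrt(k): k = 1 recovers Poisson firing (CV = 1), while
# larger k gives the more regular firing considered in the paper.
cvs = {}
for k in (1.0, 4.0, 16.0):
    isi = rng.gamma(k, 1.0 / k, 100_000)   # mean ISI = 1
    cvs[k] = isi.std() / isi.mean()
```

Under the paper's result, the detection threshold for rate fluctuations shrinks in proportion to these CV values.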

13. Modeling laser velocimeter signals as triply stochastic Poisson processes

NASA Technical Reports Server (NTRS)

Mayo, W. T., Jr.

1976-01-01

Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
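
The nonhomogeneous Poisson layer of such a hierarchy can be sketched with Lewis-Shedler thinning; the sinusoidal rate below is an invented stand-in for the slower modulating processes, not the paper's LDV model:

```python
import numpy as np

rng = np.random.default_rng(2)

def inhomogeneous_poisson(rate_fn, rate_max, T):
    """Sample event times of a nonhomogeneous Poisson process on [0, T]
    by thinning a homogeneous process of rate rate_max."""
    n_cand = rng.poisson(rate_max * T)            # candidates from dominating process
    cand = np.sort(rng.uniform(0.0, T, n_cand))
    keep = rng.uniform(0.0, 1.0, n_cand) < rate_fn(cand) / rate_max
    return cand[keep]

# Illustrative sinusoidally modulated rate (a toy "slower" process)
rate = lambda t: 50.0 * (1.0 + 0.5 * np.sin(t))
events = inhomogeneous_poisson(rate, rate_max=75.0, T=20.0)
expected = 50.0 * 20.0 + 25.0 * (1.0 - np.cos(20.0))  # integral of the rate
```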

14. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

PubMed

Zhang, Jiachao; Hirakawa, Keigo

2017-04-01

This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.

15. Negative poisson's ratio in single-layer black phosphorus.

PubMed

Jiang, Jin-Wu; Park, Harold S

2014-08-18

The Poisson's ratio is a fundamental mechanical property that relates the resulting lateral strain to applied axial strain. Although this value can theoretically be negative, it is positive for nearly all materials, though negative values have been observed in so-called auxetic structures. However, nearly all auxetic materials are bulk materials whose microstructure has been specifically engineered to generate a negative Poisson's ratio. Here we report using first-principles calculations the existence of a negative Poisson's ratio in a single-layer, two-dimensional material, black phosphorus. In contrast to engineered bulk auxetics, this behaviour is intrinsic for single-layer black phosphorus, and originates from its puckered structure, where the pucker can be regarded as a re-entrant structure that is comprised of two coupled orthogonal hinges. As a result of this atomic structure, a negative Poisson's ratio is observed in the out-of-plane direction under uniaxial deformation in the direction parallel to the pucker.

16. Tuning the Poisson's Ratio of Biomaterials for Investigating Cellular Response

PubMed Central

Meggs, Kyle; Qu, Xin; Chen, Shaochen

2013-01-01

Cells sense and respond to mechanical forces, regardless of whether the source is from a normal tissue matrix, an adjacent cell or a synthetic substrate. In recent years, cell response to surface rigidity has been extensively studied by modulating the elastic modulus of poly(ethylene glycol) (PEG)-based hydrogels. In the context of biomaterials, Poisson's ratio, another fundamental material property parameter, has not been explored, primarily because of challenges involved in tuning the Poisson's ratio in biological scaffolds. Two-photon polymerization is used to fabricate suspended web structures that exhibit positive and negative Poisson's ratio (NPR), based on analytical models. NPR webs demonstrate biaxial expansion/compression behavior, as one or multiple cells apply local forces and move the structures. Unusual cell division on NPR structures is also demonstrated. This methodology can be used to tune the Poisson's ratio of several photocurable biomaterials and could have potential implications in the field of mechanobiology. PMID:24076754

17. Two-part zero-inflated negative binomial regression model for quantitative trait loci mapping with count trait.

PubMed

Moghimbeigi, Abbas

2015-05-07

Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, the zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Genetic variables added to the negative binomial part of the model may also affect the extra zeros. In this study, to overcome these challenges, I apply a two-part ZINB model. The EM algorithm, with a Newton-Raphson method in the M-step, is used to estimate the parameters. An application of the two-part ZINB model for QTL mapping is considered to detect associations between the formation of gallstones and the genotype of markers.
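
As a minimal sketch of the zero-inflated component these models share, the ZIP distribution mixes a structural point mass at zero with an ordinary Poisson count (the parameters below are chosen purely for illustration):

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a
    structural zero; otherwise it is Poisson(lam)."""
    base = poisson.pmf(k, lam)
    return np.where(k == 0, pi + (1.0 - pi) * base, (1.0 - pi) * base)

k = np.arange(0, 200)
p = zip_pmf(k, lam=3.0, pi=0.4)
total = p.sum()         # a valid pmf sums to 1
mean = (k * p).sum()    # E[K] = (1 - pi) * lam
```

The mean formula shows why zero inflation deflates the expected count relative to the Poisson part alone.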

18. Measurement of Poisson's ratio of dental composite restorative materials.

PubMed

Chung, Sew Meng; Yap, Adrian U Jin; Koh, Wee Kiat; Tsai, Kuo Tsing; Lim, Chwee Teck

2004-06-01

The aim of this study was to determine the Poisson ratio of resin-based dental composites using a static tensile test method. Materials used in this investigation were from the same manufacturer (3M ESPE) and included microfill (A110), minifill (Z100 and Filtek Z250), polyacid-modified (F2000), and flowable (Filtek Flowable [FF]) composites. The Poisson ratio of the materials was determined after 1 week of conditioning in water at 37 degrees C. The tensile test was performed using a uniaxial testing system at a crosshead speed of 0.5 mm/min. Data were analysed using one-way ANOVA/post-hoc Scheffe's test and Pearson's correlation test at a significance level of 0.05. Mean Poisson's ratio (n=8) ranged from 0.302 to 0.393. The Poisson ratio of FF was significantly higher than all other composites evaluated, and the Poisson ratio of A110 was higher than Z100, Z250 and F2000. The Poisson ratio is higher for materials with lower filler volume fraction.

19. Morse–Smale Regression

SciTech Connect

Gerber, Samuel; Rubel, Oliver; Bremer, Peer -Timo; Pascucci, Valerio; Whitaker, Ross T.

2012-01-19

This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.

20. Improved Regression Calibration

ERIC Educational Resources Information Center

Skrondal, Anders; Kuha, Jouni

2012-01-01

The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…

1. Morse-Smale Regression

PubMed Central

Gerber, Samuel; Rübel, Oliver; Bremer, Peer-Timo; Pascucci, Valerio; Whitaker, Ross T.

2012-01-01

This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse-Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this paper introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to over-fitting. The Morse-Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse-Smale regression. Supplementary materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse-Smale complex approximation and additional tables for the climate-simulation study. PMID:23687424

2. Boosted Beta Regression

PubMed Central

Schmid, Matthias; Wickler, Florian; Maloney, Kelly O.; Mitchell, Richard; Fenske, Nora; Mayr, Andreas

2013-01-01

Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures. PMID:23626706

3. Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra

Cho, Eun-Hee; Oh, Sei-Qwon

2016-07-01

We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition is obtained for A1 to be Poisson simple, and it is established that the Poisson endomorphisms of A1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.

4. Comparing regression methods for the two-stage clonal expansion model of carcinogenesis.

PubMed

Kaiser, J C; Heidenreich, W F

2004-11-15

In the statistical analysis of cohort data with risk estimation models, both Poisson and individual likelihood regressions are widely used methods of parameter estimation. In this paper, their performance has been tested with the biologically motivated two-stage clonal expansion (TSCE) model of carcinogenesis. To exclude inevitable uncertainties of existing data, cohorts with simple individual exposure history have been created by Monte Carlo simulation. To generate some similar properties of atomic bomb survivors and radon-exposed mine workers, both acute and protracted exposure patterns have been generated. Then the capacity of the two regression methods has been compared to retrieve a priori known model parameters from the simulated cohort data. For simple models with smooth hazard functions, the parameter estimates from both methods come close to their true values. However, for models with strongly discontinuous functions which are generated by the cell mutation process of transformation, the Poisson regression method fails to produce reliable estimates. This behaviour is explained by the construction of class averages during data stratification, whereby some indispensable information on the individual exposure history was destroyed. It could not be repaired by countermeasures such as the refinement of Poisson classes or a more adequate choice of Poisson groups. Although this choice might still exist, we were unable to discover it. In contrast to this, the individual likelihood regression technique was found to work reliably for all considered versions of the TSCE model.

5. George: Gaussian Process regression

Foreman-Mackey, Daniel

2015-11-01

George is a fast and flexible library, implemented in C++ with Python bindings, for Gaussian Process regression useful for accounting for correlated noise in astronomical datasets, including those for transiting exoplanet discovery and characterization and stellar population modeling.

6. Poisson image reconstruction with Hessian Schatten-norm regularization.

PubMed

Lefkimmiatis, Stamatios; Unser, Michael

2013-11-01

Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

7. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

PubMed

Thayakaran, R; Ramesh, N I

2013-01-01

Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
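
A two-state MMPP can be sketched as a Poisson process whose rate is switched by a hidden Markov chain; the rates and switching parameter below are illustrative, not those fitted to the Bracknell data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state MMPP sketch: a hidden state alternates at exponentially
# distributed dwell times and modulates the Poisson arrival rate.
lam = np.array([1.0, 10.0])   # arrival rate in each hidden state
q, T = 0.5, 20_000.0          # state-switching rate, total observation time

t, state, n_events = 0.0, 0, 0
while t < T:
    dwell = min(rng.exponential(1.0 / q), T - t)  # time spent in this state
    n_events += rng.poisson(lam[state] * dwell)   # arrivals during the dwell
    t += dwell
    state = 1 - state

mean_rate = n_events / T   # ≈ (lam[0] + lam[1]) / 2 for symmetric switching
```

Covariates such as temperature enter the full model by letting the arrival rates depend on them; the sketch above keeps the rates fixed.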

8. POISs3: A 3D poisson smoother of structured grids

Lehtimaeki, R.

Flow solvers based on solving Navier-Stokes or Euler equations generally need a computational grid to represent the domain of the flow. A structured computational grid can be efficiently produced by algebraic methods like transfinite interpolation. Unfortunately, algebraic methods propagate all kinds of unsmoothness of the boundary into the field. Unsmoothness of the grid, in turn, can result in inaccuracy in the flow solver. In the present work a 3D elliptic grid smoother was developed. The smoother is based on solving three Poisson equations, one for each curvilinear direction. The Poisson equations formed in the physical region are first transformed to the computational (rectilinear) region. The resulting equations form a system of three coupled elliptic quasi-linear partial differential equations in the computational domain. A short review of the Poisson method is presented. The regularity of a grid cell is studied and a skewness value is developed.

9. A spectral Poisson solver for kinetic plasma simulation

Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf

2011-10-01

Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.

10. Blocked Shape Memory Effect in Negative Poisson's Ratio Polymer Metamaterials.

PubMed

Boba, Katarzyna; Bianchi, Matteo; McCombe, Greg; Gatt, Ruben; Griffin, Anselm C; Richardson, Robert M; Scarpa, Fabrizio; Hamerton, Ian; Grima, Joseph N

2016-08-10

We describe a new class of negative Poisson's ratio (NPR) open cell PU-PE foams produced by blocking the shape memory effect in the polymer. Contrary to classical NPR open cell thermoset and thermoplastic foams that return to their auxetic phase after reheating (and therefore limit their use in technological applications), this new class of cellular solids has a permanent negative Poisson's ratio behavior, generated through multiple shape memory (mSM) treatments that fix the topology of the cell foam. The mSM-NPR foams have Poisson's ratio values similar to the auxetic foams prior to their return to the conventional phase, but compressive stress-strain curves similar to those of conventional foams. The results show that by manipulating the shape memory effect in polymer microstructures it is possible to obtain new classes of materials with unusual deformation mechanisms.

11. Effect of Poisson noise on adiabatic quantum control

Kiely, A.; Muga, J. G.; Ruschhaupt, A.

2017-01-01

We present a detailed derivation of the master equation describing a general time-dependent quantum system with classical Poisson white noise and outline its various properties. We discuss the limiting cases of Poisson white noise and provide approximations for the different noise strength regimes. We show that using the eigenstates of the noise superoperator as a basis can be a useful way of expressing the master equation. Using this, we simulate various settings to illustrate different effects of Poisson noise. In particular, we show a dip in the fidelity as a function of noise strength where high fidelity can occur in the strong-noise regime for some cases. We also investigate recent claims [J. Jing et al., Phys. Rev. A 89, 032110 (2014), 10.1103/PhysRevA.89.032110] that this type of noise may improve rather than destroy adiabaticity.

12. Poisson distribution to analyze near-threshold motor evoked potentials.

PubMed

Kaelin-Lang, Alain; Conforto, Adriana B; Z'Graggen, Werner; Hess, Christian W

2010-11-01

Motor unit action potentials (MUAPs) evoked by repetitive, low-intensity transcranial magnetic stimulation can be modeled as a Poisson process. A mathematical consequence of such a model is that the ratio of the variance to the mean of the amplitudes of motor evoked potentials (MEPs) should provide an estimate of the mean size of the individual MUAPs that summate to generate each MEP. We found that this is, in fact, the case. Our finding thus supports the use of the Poisson distribution to model MEP generation and indicates that this model enables characterization of the motor unit population that contributes to near-threshold MEPs.
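
The variance-to-mean argument can be checked with a toy compound-Poisson simulation (the fixed MUAP size and recruitment rate are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# If each MEP sums a Poisson number of MUAPs of (roughly) fixed size a,
# then var(MEP)/mean(MEP) estimates a, since var = a^2*rate and
# mean = a*rate for a Poisson sum.  Illustrative values only.
a, rate, n_trials = 0.1, 5.0, 200_000
n_units = rng.poisson(rate, n_trials)   # MUAPs recruited per stimulus
meps = n_units * a                      # summed response amplitude
ratio = meps.var() / meps.mean()        # ≈ a
```

With variable MUAP sizes the ratio estimates E[A²]/E[A] rather than a single fixed amplitude, which is still a measure of mean unit size.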

13. Composite laminates with negative through-the-thickness Poisson's ratios

NASA Technical Reports Server (NTRS)

Herakovich, C. T.

1984-01-01

A simple analysis using two dimensional lamination theory combined with the appropriate three dimensional anisotropic constitutive equation is presented to show some rather surprising results for the range of values of the through-the-thickness effective Poisson's ratio ν_xz for angle-ply laminates. Results for graphite-epoxy show that the through-the-thickness effective Poisson's ratio can range from a high of 0.49 for a 90° laminate to a low of -0.21 for a [±25]s laminate. It is shown that negative values of ν_xz are also possible for other laminates.

15. Validation of the Poisson Stochastic Radiative Transfer Model

NASA Technical Reports Server (NTRS)

Zhuravleva, Tatiana; Marshak, Alexander

2004-01-01

A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

16. A Study of Poisson's Ratio in the Yield Region

NASA Technical Reports Server (NTRS)

Gerard, George; Wildhorn, Sorrel

1952-01-01

In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.

17. [Understanding logistic regression].

PubMed

El Sanharawi, M; Naudet, F

2013-10-01

Logistic regression is one of the most common multivariate analysis models utilized in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors susceptible to influence it (explicative variables). The choice of explicative variables that should be included in the logistic regression model is based on prior knowledge of the disease physiopathology and the statistical association between the variable and the event, as measured by the odds ratio. The main steps for the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.

18. Practical Session: Logistic Regression

Clausel, M.; Grégoire, G.

2014-12-01

An exercise is proposed to illustrate the logistic regression. One investigates the different risk factors in the apparition of coronary heart disease. It has been proposed in Chapter 5 of the book of D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.

19. Ridge Regression: A Regression Procedure for Analyzing correlated Independent Variables

ERIC Educational Resources Information Center

Rakow, Ernest A.

1978-01-01

Ridge regression is a technique used to ameliorate the problem of highly correlated independent variables in multiple regression analysis. This paper explains the fundamentals of ridge regression and illustrates its use. (JKS)

20. Modern Regression Discontinuity Analysis

ERIC Educational Resources Information Center

Bloom, Howard S.

2012-01-01

This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…

1. Multiple linear regression analysis

NASA Technical Reports Server (NTRS)

Edwards, T. R.

1980-01-01

Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

2. Explorations in Statistics: Regression

ERIC Educational Resources Information Center

Curran-Everett, Douglas

2011-01-01

Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive…

3. Zero-Inflated Poisson Modeling of Fall Risk Factors in Community-Dwelling Older Adults.

PubMed

Jung, Dukyoo; Kang, Younhee; Kim, Mi Young; Ma, Rye-Won; Bhandari, Pratibha

2016-02-01

The aim of this study was to identify risk factors for falls among community-dwelling older adults. The study used a cross-sectional descriptive design. Self-report questionnaires were used to collect data from 658 community-dwelling older adults and were analyzed using logistic and zero-inflated Poisson (ZIP) regression. Perceived health status was a significant factor in the count model, and fall efficacy emerged as a significant predictor in the logistic models. The findings suggest that fall efficacy is important for predicting not only faller and nonfaller status but also fall counts in older adults who may or may not have experienced a previous fall. The fall predictors identified in this study--perceived health status and fall efficacy--indicate the need for fall-prevention programs tailored to address both the physical and psychological issues unique to older adults.

4. On covariant Poisson brackets in classical field theory

Forger, Michael; Salles, Mário O.

2015-10-01

How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on "multisymplectic Poisson brackets," together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls-De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic "multisymplectic Poisson bracket" already proposed in the 1970s can be derived from the Peierls-De Witt bracket, applied to a special class of functionals. This relation makes it possible to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.

5. The Poisson-Lognormal Model for Bibliometric/Scientometric Distributions.

ERIC Educational Resources Information Center

Stewart, John A.

1994-01-01

Illustrates that the Poisson-lognormal model provides good fits to a diverse set of distributions commonly studied in bibliometrics and scientometrics. Topics discussed include applications to the empirical data sets related to the laws of Lotka, Bradford, and Zipf; causal processes that could generate lognormal distributions; and implications for…

6. Wide-area traffic: The failure of Poisson modeling

SciTech Connect

Paxson, V.; Floyd, S.

1994-08-01

Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
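
The paper's central point can be illustrated with a short simulation (numpy assumed; the traces and rates here are synthetic, not from the study): a Poisson process has exponential interarrivals, so its per-interval counts have variance roughly equal to their mean, while heavy-tailed interarrivals produce far burstier counts.

```python
import numpy as np

rng = np.random.default_rng(0)

def dispersion_of_counts(interarrivals, bin_width=1.0):
    """Index of dispersion (variance/mean) of per-bin arrival counts."""
    times = np.cumsum(interarrivals)
    n_bins = int(times[-1] // bin_width)
    counts = np.histogram(times, bins=n_bins, range=(0, n_bins * bin_width))[0]
    return counts.var() / counts.mean()

# Poisson process: exponential interarrivals, rate 5 arrivals per unit time.
poisson_d = dispersion_of_counts(rng.exponential(1 / 5, 50_000))
# Bursty process: heavy-tailed (Pareto/Lomax) interarrivals, similar mean rate.
bursty_d = dispersion_of_counts(rng.pareto(1.5, 50_000) * 0.1)
```

The Poisson dispersion sits near 1; the heavy-tailed process is dramatically overdispersed, which is the "burstiness" that the exponential model grievously underestimates.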

7. Vectorized multigrid Poisson solver for the CDC CYBER 205

NASA Technical Reports Server (NTRS)

Barkai, D.; Brandt, M. A.

1984-01-01

The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
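
The FMG method itself cycles between coarse and fine grids; as a much simpler single-grid sketch of the same discretized problem (numpy assumed, not the report's code), plain Jacobi relaxation on the five-point stencil already vectorizes naturally as array slicing:

```python
import numpy as np

def jacobi_poisson(f, u0, iters=5000):
    """Jacobi relaxation for -laplace(u) = f on the unit square, with
    Dirichlet values held fixed on the boundary of u0."""
    u = u0.copy()
    n = u.shape[0]
    h2 = (1.0 / (n - 1)) ** 2
    for _ in range(iters):
        # Vectorized five-point update; the RHS uses only old values.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                h2 * f[1:-1, 1:-1])
    return u

# Manufactured problem with exact solution u = x(1-x)y(1-y).
n = 17
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2 * Y * (1 - Y) + 2 * X * (1 - X)   # -laplace of x(1-x)y(1-y)
u = jacobi_poisson(f, np.zeros((n, n)))
```

Multigrid's contribution is to accelerate exactly this kind of relaxation, whose convergence degrades rapidly as the grid is refined.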

8. Wavelet-based Poisson rate estimation using the Skellam distribution

Hirakawa, Keigo; Baqai, Farhan; Wolfe, Patrick J.

2009-02-01

Owing to the stochastic nature of discrete processes such as photon counts in imaging, real-world data measurements often exhibit heteroscedastic behavior. In particular, time series components and other measurements may frequently be assumed to be non-iid Poisson random variables whose rate parameter is proportional to the underlying signal of interest; witness the literature in digital communications, signal processing, astronomy, and magnetic resonance imaging applications. In this work, we show that certain wavelet and filterbank transform coefficients corresponding to vector-valued measurements of this type are distributed as sums and differences of independent Poisson counts, taking the so-called Skellam distribution. While exact estimates rarely admit analytical forms, we present Skellam mean estimators under both frequentist and Bayes models, as well as computationally efficient approximations and shrinkage rules, that may be interpreted as Poisson rate estimation performed in certain wavelet/filterbank transform domains. This indicates a promising potential approach for denoising of Poisson counts in the above-mentioned applications.
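
The paper's starting observation is easy to verify by simulation (numpy assumed; the rates are illustrative): the difference of two independent Poisson counts, e.g. a Haar-type detail coefficient, follows the Skellam distribution, whose mean is mu1 - mu2 and whose variance is mu1 + mu2.

```python
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2 = 7.0, 3.0
# Difference of independent Poisson counts ~ Skellam(mu1, mu2).
d = rng.poisson(mu1, 200_000) - rng.poisson(mu2, 200_000)
mean_d, var_d = d.mean(), d.var()   # theory: mu1 - mu2 and mu1 + mu2
```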

9. Indentability of conventional and negative Poisson's ratio foams

NASA Technical Reports Server (NTRS)

Lakes, R. S.; Elms, K.

1992-01-01

The indentation resistance of foams, both of conventional structure and of reentrant structure giving rise to negative Poisson's ratio, is studied using holographic interferometry. In holographic indentation tests, reentrant foams had higher yield strength and lower stiffness than conventional foams of the same original relative density. Calculated energy absorption for dynamic impact is considerably higher for reentrant foam than conventional foam.

10. A note on robust inference from a conditional Poisson model.

PubMed

Solís-Trápala, Ivonne L; Farewell, Vernon T

2006-02-01

A randomised controlled trial to evaluate a training programme for physician-patient communication required the analysis of paired count data. The impact of departures from the Poisson assumption when paired count data are analysed through use of a conditional likelihood is illustrated. A simple approach to providing robust inference is outlined and illustrated.
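The conditional-likelihood device the note builds on can be sketched in a few lines (numpy assumed; the data are synthetic, not the trial's): conditioning a Poisson pair on its total turns the model into a binomial, so the rate ratio can be estimated without the pair-level nuisance rates.

```python
import numpy as np

def conditional_rate_ratio(x, y):
    """Conditional MLE of lambda_x / lambda_y from paired Poisson counts:
    given the pair total n = x + y, x is Binomial(n, p) with
    p = lambda_x / (lambda_x + lambda_y), so p_hat = sum(x) / sum(x + y)."""
    p_hat = np.sum(x) / np.sum(x + y)
    return p_hat / (1 - p_hat)

rng = np.random.default_rng(2)
x = rng.poisson(4.0, 5000)
y = rng.poisson(2.0, 5000)
ratio = conditional_rate_ratio(x, y)   # true ratio is 4/2 = 2
```

The note's concern is precisely what happens to inferences of this kind when the Poisson assumption underlying the binomial reduction fails.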

11. Application of Poisson random effect models for highway network screening.

PubMed

Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

2014-02-01

In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification.

12. 3D soft metamaterials with negative Poisson's ratio.

PubMed

Babaee, Sahab; Shim, Jongmin; Weaver, James C; Chen, Elizabeth R; Patel, Nikita; Bertoldi, Katia

2013-09-25

Buckling is exploited to design a new class of three-dimensional metamaterials with negative Poisson's ratio. A library of auxetic building blocks is identified and procedures are defined to guide their selection and assembly. The auxetic properties of these materials are demonstrated through both experiments and finite element simulations, which show excellent qualitative and quantitative agreement.

13. Tailoring graphene to achieve negative Poisson's ratio properties.

PubMed

Grima, Joseph N; Winczewski, Szymon; Mizzi, Luke; Grech, Michael C; Cauchi, Reuben; Gatt, Ruben; Attard, Daphne; Wojciechowski, Krzysztof W; Rybicki, Jarosław

2015-02-25

Graphene can be made auxetic through the introduction of vacancy defects. This results in the thinnest negative Poisson's ratio material at ambient conditions known so far, an effect achieved via a nanoscale de-wrinkling mechanism that mimics the behavior at the macroscale exhibited by a crumpled sheet of paper when stretched.

14. Translated Poisson Mixture Model for Stratification Learning (PREPRINT)

DTIC Science & Technology

2007-09-01

Addresses stratification learning in high-dimensional data analysis in general, and in computer vision and image analysis in particular.

15. Subsonic Flow for the Multidimensional Euler-Poisson System

Bae, Myoungjean; Duan, Ben; Xie, Chunjing

2016-04-01

We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori C^{1,α} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain C^{1,α} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.

16. Negative Poisson's Ratio in Single-Layer Graphene Ribbons.

PubMed

Jiang, Jin-Wu; Park, Harold S

2016-04-13

The Poisson's ratio characterizes the resultant strain in the lateral direction for a material under longitudinal deformation. Though negative Poisson's ratios (NPR) are theoretically possible within continuum elasticity, they are most frequently observed in engineered materials and structures, as they are not intrinsic to many materials. In this work, we report NPR in single-layer graphene ribbons, which results from the compressive edge stress induced warping of the edges. The effect is robust, as the NPR is observed for graphene ribbons with widths smaller than about 10 nm, and for tensile strains smaller than about 0.5% with NPR values reaching as large as -1.51. The NPR is explained analytically using an inclined plate model, which is able to predict the Poisson's ratio for graphene sheets of arbitrary size. The inclined plate model demonstrates that the NPR is governed by the interplay between the width (a bulk property), and the warping amplitude of the edge (an edge property), which eventually yields a phase diagram determining the sign of the Poisson's ratio as a function of the graphene geometry.

17. On removal of charge singularity in Poisson-Boltzmann equation.

PubMed

Cai, Qin; Wang, Jun; Zhao, Hong-Kai; Luo, Ray

2009-04-14

The Poisson-Boltzmann theory has become widely accepted in modeling electrostatic solvation interactions in biomolecular calculations. However, the standard practice of using atomic point charges in molecular mechanics force fields introduces singularities into the Poisson-Boltzmann equation. The finite-difference/finite-volume discretization approach to the Poisson-Boltzmann equation alleviates the numerical difficulty associated with the charge singularity but introduces discretization error into the electrostatic potential. Decomposition of the electrostatic potential has been explored to remove the charge singularity explicitly and to achieve higher numerical accuracy in the solution of the electrostatic potential. In this study, we propose an efficient method to overcome the charge singularity problem. In our framework, two separate equations for two different potentials in two different regions are solved simultaneously, i.e., the reaction field potential in the solute region and the total potential in the solvent region. The proposed method can be readily implemented with typical finite-difference Poisson-Boltzmann solvers and returns the singularity-free reaction field potential with a single run. Test runs on 42 small molecules and 4 large proteins show very high agreement between the reaction field energies computed by the proposed method and those by the classical finite-difference Poisson-Boltzmann method. It is also interesting to note that the proposed method converges faster than the classical method, though additional time is needed to compute the Coulombic potential on the dielectric boundary. The higher precision, accuracy, and efficiency of the proposed method will allow for more robust electrostatic calculations in molecular mechanics simulations of complex biomolecular systems.

18. Poisson-type inequalities for growth properties of positive superharmonic functions.

PubMed

Luan, Kuan; Vieira, John

2017-01-01

In this paper, we present new Poisson-type inequalities for Poisson integrals with continuous data on the boundary. The obtained inequalities are used to obtain growth properties at infinity of positive superharmonic functions in a smooth cone.

19. Calculating a Stepwise Ridge Regression.

ERIC Educational Resources Information Center

Morris, John D.

1986-01-01

Although methods for using ordinary least squares regression computer programs to calculate a ridge regression are available, the calculation of a stepwise ridge regression requires a special purpose algorithm and computer program. The correct stepwise ridge regression procedure is given, and a parallel FORTRAN computer program is described.…
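The abstract's point about reusing OLS programs can be sketched for a single ridge constant (numpy assumed; this shows the one-k ridge step, not the stepwise algorithm itself): the closed form, and the equivalent trick of feeding an augmented data set to an ordinary least squares routine.

```python
import numpy as np

def ridge(X, y, k):
    """Closed-form ridge estimate (X'X + kI)^(-1) X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def ridge_via_ols(X, y, k):
    """Same estimate from an OLS program: append sqrt(k)*I pseudo-rows to X
    and matching zeros to y, then solve by ordinary least squares."""
    p = X.shape[1]
    X_aug = np.vstack([X, np.sqrt(k) * np.eye(p)])
    y_aug = np.concatenate([y, np.zeros(p)])
    return np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]
```

The augmented normal equations are (X'X + kI)b = X'y, so both routes give the same coefficients; the stepwise version the article describes additionally changes which predictors enter at each step.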

20. Orthogonal Regression: A Teaching Perspective

ERIC Educational Resources Information Center

Carr, James R.

2012-01-01

A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…

1. Steganalysis using logistic regression

Lubenko, Ivans; Ker, Andrew D.

2011-02-01

We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods, estimating class probabilities as well as providing a simple classification, and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing the accuracy and speed of SVM and LR classifiers in the detection of LSB Matching and other related spatial-domain image steganography, using the state-of-the-art 686-dimensional SPAM feature set, on three image sets.
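The probability-output property the authors advocate can be shown with a bare-bones logistic regression (numpy assumed; the two synthetic blobs below stand in for cover/stego feature vectors, and are not the SPAM features):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Logistic regression by batch gradient ascent on the log-likelihood.
    Returns a weight vector, so class-1 probabilities are sigmoid(X @ w)
    rather than just hard labels."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Toy stand-in for "cover" vs "stego" feature vectors.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1.5, 1.0, (200, 2)), rng.normal(1.5, 1.0, (200, 2))])
X = np.hstack([np.ones((400, 1)), X])          # intercept column
y = np.repeat([0.0, 1.0], 200)
w = fit_logistic(X, y)
probs = 1.0 / (1.0 + np.exp(-X @ w))           # calibrated class probabilities
```

Unlike a hard-margin SVM decision, `probs` grades each image by how likely it is to carry payload, which is the extra information the paper highlights.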

2. Correlation between supercooled liquid relaxation and glass Poisson's ratio

Sun, Qijing; Hu, Lina; Zhou, Chao; Zheng, Haijiao; Yue, Yuanzheng

2015-10-01

We report on a correlation between the supercooled liquid (SL) relaxation and glass Poisson's ratio (v) by comparing the activation energy ratio (r) of the α and the slow β relaxations and the v values for both metallic and nonmetallic glasses. Poisson's ratio v generally increases with an increase in the ratio r and this relation can be described by the empirical function v = 0.5 - A*exp(-B*r), where A and B are constants. This correlation might imply that glass plasticity is associated with the competition between the α and the slow β relaxations in SLs. The underlying physics of this correlation lies in the heredity of the structural heterogeneity from liquid to glass. This work gives insights into both the microscopic mechanism of glass deformation through the SL dynamics and the complex structural evolution during liquid-glass transition.

3. Mixed Poisson distributions in exact solutions of stochastic autoregulation models.

PubMed

Iyer-Biswas, Srividya; Jayaprakash, C

2014-11-01

In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.
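The mixed-Poisson behavior underlying such models is visible in a few lines of simulation (numpy assumed; parameters illustrative, not from the paper): letting the Poisson rate itself be random inflates the variance above the mean, unlike a pure Poisson.

```python
import numpy as np

rng = np.random.default_rng(4)
# Gamma-mixed Poisson (equivalently, negative binomial): the expression
# rate fluctuates, so the resulting counts are overdispersed.
rates = rng.gamma(shape=2.0, scale=1.5, size=100_000)
counts = rng.poisson(rates)
# Theory: mean = shape*scale = 3.0; variance = mean + shape*scale**2 = 7.5
```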

4. A dictionary learning approach for Poisson image deblurring.

PubMed

Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong

2013-07-01

The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, recently sparse representations of images have shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, the pixel-based total variation regularization term and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.

5. Reference manual for the POISSON/SUPERFISH Group of Codes

SciTech Connect

Not Available

1987-01-01

The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equation for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) with finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.

6. Poisson and symplectic structures on Lie algebras. I

Alekseevsky, D. V.; Perelomov, A. M.

1997-06-01

The purpose of this paper is to describe a new class of Poisson and symplectic structures on Lie algebras. This gives a new class of solutions of the classical Yang-Baxter equation. The class of elementary Lie algebras is defined, and the Poisson and symplectic structures for them are described. An algorithm is given for describing all closed 2-forms and all symplectic structures on any Lie algebra G that decomposes into a semidirect sum of elementary subalgebras. Using these results, we obtain a description of the closed 2-forms and symplectic forms (if they exist) on the Borel subalgebra B(G) of a semisimple Lie algebra G. As a byproduct, we get a description of the second cohomology group H²(B(G)).

7. New method for blowup of the Euler-Poisson system

Kwong, Man Kam; Yuen, Manwai

2016-08-01

In this paper, we provide a new method for establishing the blowup of C^2 solutions of the pressureless Euler-Poisson system with attractive forces in R^N (N ≥ 2) with ρ(0, x_0) > 0 and Ω_0^{ij}(x_0) = (1/2)[∂_i u_j(0, x_0) - ∂_j u_i(0, x_0)] = 0 at some point x_0 ∈ R^N. By applying the generalized Hubble transformation div u(t, x_0(t)) = N ȧ(t)/a(t) to a reduced Riccati differential inequality derived from the system, we simplify the inequality into the Emden equation ä(t) = -λ/a(t)^{N-1}, a(0) = 1, ȧ(0) = div u(0, x_0)/N. Known results on its blowup set allow us to easily obtain the blowup conditions of the Euler-Poisson system.
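
The Emden equation at the heart of the reduction is easy to probe numerically (a crude forward-Euler sketch using only the standard library; parameters are illustrative): with λ > 0 and an inward initial velocity, a(t) reaches zero in finite time, which is the blowup.

```python
def emden_blowup_time(lam=1.0, N=3, a0=1.0, adot0=-0.5, dt=1e-4, t_max=10.0):
    """Forward-Euler integration of the Emden equation
    a''(t) = -lam / a(t)**(N-1); returns the first time a(t) <= 0,
    or None if a stays positive up to t_max."""
    a, adot, t = a0, adot0, 0.0
    while t < t_max:
        adot += dt * (-lam / a ** (N - 1))   # attractive acceleration
        a += dt * adot
        t += dt
        if a <= 0:
            return t
    return None
```

With no force at all, a = 1 - 0.5t would vanish at t = 2; the attractive term only accelerates the collapse, so the numerical blowup time lands strictly before that.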

8. Finite-size effects and percolation properties of Poisson geometries

Larmier, C.; Dumonteil, E.; Malvagi, F.; Mazzolo, A.; Zoia, A.

2016-07-01

Random tessellations of space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and the life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d = 3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size.

9. Intrinsic Negative Poisson's Ratio for Single-Layer Graphene.

PubMed

Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming; Park, Harold S

2016-08-10

Negative Poisson's ratio (NPR) materials have drawn significant interest because the enhanced toughness, shear resistance, and vibration absorption typically seen in auxetic materials may enable a range of novel applications. In this work, we report that single-layer graphene exhibits an intrinsic NPR, which is robust and independent of its size and temperature. The NPR arises due to the interplay between two intrinsic deformation pathways (one with positive Poisson's ratio, the other with NPR), which correspond to the bond stretching and angle bending interactions in graphene. We propose an energy-based deformation pathway criterion, which predicts that the pathway with NPR has lower energy and thus becomes the dominant deformation mode when graphene is stretched by a strain above 6%, resulting in the NPR phenomenon.

10. Non-linear Poisson-Boltzmann theory for swollen clays

Leote de Carvalho, R. J. F.; Trizac, E.; Hansen, J.-P.

1998-08-01

The non-linear Poisson-Boltzmann (PB) equation for a circular, uniformly charged platelet, confined together with co- and counter-ions to a cylindrical cell, is solved semi-analytically by transforming it into an integral equation and solving the latter iteratively. This method proves efficient and robust, and can be readily generalized to other problems based on cell models, treated within non-linear Poisson-like theory. The solution to the PB equation is computed over a wide range of physical conditions, and the resulting osmotic equation of state is shown to be in semi-quantitative agreement with recent experimental data for Laponite clay suspensions, in the concentrated gel phase.

11. Invariants and labels for Lie-Poisson Systems

SciTech Connect

Thiffeault, J.L.; Morrison, P.J.

1998-04-01

Reduction is a process that uses symmetry to lower the order of a Hamiltonian system. The new variables in the reduced picture are often not canonical: there are no clear variables representing positions and momenta, and the Poisson bracket obtained is not of the canonical type. Specifically, we give two examples that give rise to brackets of the noncanonical Lie-Poisson form: the rigid body and the two-dimensional ideal fluid. From these simple cases, we then use the semidirect product extension of algebras to describe more complex physical systems. The Casimir invariants in these systems are examined, and some are shown to be linked to the recovery of information about the configuration of the system. We discuss a case in which the extension is not a semidirect product, namely compressible reduced MHD, and find for this case that the Casimir invariants lend partial information about the configuration of the system.

12. Structural regression trees

SciTech Connect

Kramer, S.

1996-12-31

In many real-world domains the task of machine learning algorithms is to learn a theory for predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with nondeterminate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems. SRT integrates the statistical method of regression trees into ILP. It constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.

13. Studying Resist Stochastics with the Multivariate Poisson Propagation Model

DOE PAGES

Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; ...

2014-01-01

Progress in the ultimate performance of extreme ultraviolet resist has arguably decelerated in recent years suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resist both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.

14. Soft elasticity of RNA gels and negative Poisson ratio.

PubMed

Ahsan, Amir; Rudnick, Joseph; Bruinsma, Robijn

2007-12-01

We propose a model for the elastic properties of RNA gels. The model predicts anomalous elastic properties in the form of a negative Poisson ratio and shape instabilities. The anomalous elasticity is generated by the non-Gaussian force-deformation relation of single-stranded RNA. The effect is greatly magnified by broken rotational symmetry produced by double-stranded sequences and the concomitant soft modes of uniaxial elastomers.

15. Effect of poisson ratio on cellular structure formation.

PubMed

Bischofs, I B; Schwarz, U S

2005-08-05

Mechanically active cells in soft media act as force dipoles. The resulting elastic interactions are long ranged and favor the formation of strings. We show analytically that due to screening, the effective interaction between strings decays exponentially, with a decay length determined only by geometry. Both for disordered and ordered arrangements of cells, we predict novel phase transitions from paraelastic to ferroelastic and antiferroelastic phases as a function of the Poisson ratio.

16. A more general system for Poisson series manipulation.

NASA Technical Reports Server (NTRS)

Cherniack, J. R.

1973-01-01

The design of a working Poisson series processor system is described that is more general than those currently in use. This system is the result of a series of compromises among efficiency, generality, ease of programming, and ease of use. The most general form of coefficients that can be multiplied efficiently is pointed out, and the place of general-purpose algebraic systems in celestial mechanics is discussed.

17. Relaxation in two dimensions and the 'sinh-Poisson' equation

NASA Technical Reports Server (NTRS)

Montgomery, D.; Matthaeus, W. H.; Stribling, W. T.; Martinez, D.; Oughton, S.

1992-01-01

Long-time states of a turbulent, decaying, two-dimensional, Navier-Stokes flow are shown numerically to relax toward maximum-entropy configurations, as defined by the "sinh-Poisson" equation. The large-scale Reynolds number is about 14,000, the spatial resolution is 512², the boundary conditions are spatially periodic, and the evolution takes place over nearly 400 large-scale eddy-turnover times.

18. Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code

SciTech Connect

Bowman, Kimiko o; Shenton, LR

2006-01-01

The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using Maple code. Questions of the existence of solutions and Karl Pearson's study are mentioned, along with the problem of a valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.

19. Events in time: Basic analysis of Poisson data

SciTech Connect

Engelhardt, M.E.

1994-09-01

The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model for when the rate of occurrence varies randomly. Examples and SAS programs are given.
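
The most basic of these analyses, a point estimate and interval for the occurrence rate, can be sketched as follows (a normal-approximation version; the report's exact intervals instead use chi-square quantiles, which are omitted here):

```python
import math

def poisson_rate_ci(events, exposure, z=1.96):
    """Point estimate and approximate 95% CI for a Poisson occurrence rate.
    events ~ Poisson(lambda * exposure), so var(events) = lambda * exposure
    and the rate estimate events/exposure has sd sqrt(events)/exposure."""
    rate = events / exposure
    half = z * math.sqrt(events) / exposure
    return rate, max(0.0, rate - half), rate + half

rate, lo, hi = poisson_rate_ci(50, 100.0)   # 50 events in 100 unit-times
```

The approximation is adequate when the event count is moderately large; for very small counts, exact (chi-square based) or Bayesian intervals of the kind the report describes are preferable.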

20. Comparing Poisson Sigma Model with A-model

Bonechi, F.; Cattaneo, A. S.; Iraso, R.

2016-10-01

We discuss the A-model as a gauge fixing of the Poisson Sigma Model with target a symplectic structure. We complete the discussion in [4], where a gauge fixing defined by a compatible complex structure was introduced, by showing how to recover the A-model hierarchy of observables in terms of the AKSZ observables. Moreover, we discuss the off-shell supersymmetry of the A-model as a residual BV symmetry of the gauge fixed PSM action.

1. Indentability of conventional and negative Poisson's ratio foams

NASA Technical Reports Server (NTRS)

Lakes, R. S.; Elms, K.

1992-01-01

The indentation resistance of foams, both of conventional structure and of re-entrant structure giving rise to negative Poisson's ratio, is studied using holographic interferometry. In holographic indentation tests, re-entrant foams had higher yield strengths sigma(sub y) and lower stiffness E than conventional foams of the same original relative density. Calculated energy absorption for dynamic impact is considerably higher for re-entrant foam than conventional foam.

2. Statistical Tests of the PTHA Poisson Assumption for Submarine Landslides

Geist, E. L.; Chaytor, J. D.; Parsons, T.; Ten Brink, U. S.

2012-12-01

We demonstrate that a sequence of dated mass transport deposits (MTDs) can provide information to statistically test whether or not submarine landslides associated with these deposits conform to a Poisson model of occurrence. Probabilistic tsunami hazard analysis (PTHA) most often assumes Poissonian occurrence for all sources, with an exponential distribution of return times. Using dates that define the bounds of individual MTDs, we first describe likelihood and Monte Carlo methods of parameter estimation for a suite of candidate occurrence models (Poisson, lognormal, gamma, Brownian Passage Time). In addition to age-dating uncertainty, both methods incorporate uncertainty caused by the open time intervals: before the first event and from the last event to the present. Accounting for these open intervals is critical when there are a small number of observed events. The optimal occurrence model is selected according to both the Akaike Information Criterion (AIC) and Akaike's Bayesian Information Criterion (ABIC). In addition, the likelihood ratio test can be performed on occurrence models from the same family: e.g., the gamma model relative to the exponential model of return time distribution. Parameter estimation, model selection, and hypothesis testing are performed on data from two IODP holes in the northern Gulf of Mexico that penetrated a total of 14 MTDs, some of which are correlated between the two holes. Each of these events has been assigned an age based on microfossil zonations and magnetostratigraphic datums. Results from these sites indicate that the Poisson assumption is likely valid. However, parameter estimation results using the likelihood method for one of the sites suggest that the events may have occurred quasi-periodically. Methods developed in this study provide tools with which one can determine both the rate of occurrence and the statistical validity of the Poisson assumption when submarine landslides are included in PTHA.
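
One simple diagnostic in the spirit of this analysis (numpy assumed; the event catalogs below are synthetic, not the Gulf of Mexico data): exponential return times, the Poisson signature, have a coefficient of variation of 1, while quasi-periodic occurrence drives it well below 1.

```python
import numpy as np

def cv_of_return_times(event_times):
    """Coefficient of variation of inter-event times. A Poisson process has
    exponential return times with CV = 1; CV < 1 suggests quasi-periodic
    occurrence, CV > 1 suggests clustering."""
    gaps = np.diff(np.sort(event_times))
    return gaps.std(ddof=1) / gaps.mean()

rng = np.random.default_rng(5)
poisson_times = np.cumsum(rng.exponential(1.0, 2000))     # Poissonian catalog
periodic_times = np.arange(2000) + rng.normal(0, 0.1, 2000)  # quasi-periodic
```

The full study goes much further (open intervals, dating uncertainty, AIC/ABIC model selection), but this single statistic already separates the two occurrence regimes.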

3. Poisson problems for semilinear Brinkman systems on Lipschitz domains in

Kohr, Mirela; Lanza de Cristoforis, Massimo; Wendland, Wolfgang L.

2015-06-01

The purpose of this paper is to combine a layer potential analysis with the Schauder fixed point theorem to show the existence of solutions of the Poisson problem for a semilinear Brinkman system on bounded Lipschitz domains in with Dirichlet or Robin boundary conditions and data in L 2-based Sobolev spaces. We also obtain an existence and uniqueness result for the Dirichlet problem for a special semilinear elliptic system, called the Darcy-Forchheimer-Brinkman system.

4. A linear regression solution to the spatial autocorrelation problem

Griffith, Daniel A.

The Moran Coefficient spatial autocorrelation index can be decomposed into orthogonal map pattern components. This decomposition relates it directly to standard linear regression, in which corresponding eigenvectors can be used as predictors. This paper reports comparative results between these linear regressions and their auto-Gaussian counterparts for the following georeferenced data sets: Columbus (Ohio) crime, Ottawa-Hull median family income, Toronto population density, southwest Ohio unemployment, Syracuse pediatric lead poisoning, Glasgow standard mortality rates, and a small remotely sensed image of the High Peak district. This methodology is extended to auto-logistic and auto-Poisson situations, with selected data analyses including the percentage of urban population across Puerto Rico and the frequency of SIDS cases across North Carolina. These data analytic results suggest that this approach to georeferenced data analysis offers considerable promise.

5. Magnetic axis alignment and the Poisson alignment reference system

Griffith, Lee V.; Schenz, Richard F.; Sommargren, Gary E.

1989-01-01

Three distinct metrological operations are necessary to align a free-electron laser (FEL): the magnetic axis must be located, a straight line reference (SLR) must be generated, and the magnetic axis must be related to the SLR. This paper begins with a review of the motivation for developing an alignment system that will assure better than 100 micrometer accuracy in the alignment of the magnetic axis throughout an FEL. The paper describes techniques for identifying the magnetic axis of solenoids, quadrupoles, and wiggler poles. Propagation of a laser beam is described to the extent of revealing sources of nonlinearity in the beam. Development and use of the Poisson line, a diffraction effect, is described in detail. Spheres in a large-diameter laser beam create Poisson lines and thus provide a necessary mechanism for gauging between the magnetic axis and the SLR. Procedures for installing FEL components and calibrating alignment fiducials to the magnetic axes of the components are also described. An error budget shows that the Poisson alignment reference system will make it possible to meet the alignment tolerances for an FEL.

6. Magnetic alignment and the Poisson alignment reference system

Griffith, L. V.; Schenz, R. F.; Sommargren, G. E.

1990-08-01

Three distinct metrological operations are necessary to align a free-electron laser (FEL): the magnetic axis must be located, a straight line reference (SLR) must be generated, and the magnetic axis must be related to the SLR. This article begins with a review of the motivation for developing an alignment system that will assure better than 100-μm accuracy in the alignment of the magnetic axis throughout an FEL. The 100-μm accuracy is an error circle about an ideal axis for 300 m or more. The article describes techniques for identifying the magnetic axes of solenoids, quadrupoles, and wiggler poles. Propagation of a laser beam is described to the extent of revealing sources of nonlinearity in the beam. Development of a straight-line reference based on the Poisson line, a diffraction effect, is described in detail. Spheres in a large-diameter laser beam create Poisson lines and thus provide a necessary mechanism for gauging between the magnetic axis and the SLR. Procedures for installing FEL components and calibrating alignment fiducials to the magnetic axes of the components are also described. The Poisson alignment reference system should be accurate to 25 μm over 300 m, which is believed to be a factor-of-4 improvement over earlier techniques. An error budget shows that only 25% of the total budgeted tolerance is used for the alignment reference system, so the remaining tolerances should fall within the allowable range for FEL alignment.

7. A generalized Poisson solver for first-principles device simulations

SciTech Connect

Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost; Brück, Sascha; Luisier, Mathieu

2016-01-28

Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.

8. Poisson Group Testing: A Probabilistic Model for Boolean Compressed Sensing

2015-08-01

We introduce a novel probabilistic group testing framework, termed Poisson group testing, in which the number of defectives follows a right-truncated Poisson distribution. The Poisson model has a number of new applications, including dynamic testing with diminishing relative rates of defectives. We consider both nonadaptive and semi-adaptive identification methods. For nonadaptive methods, we derive a lower bound on the number of tests required to identify the defectives with a probability of error that asymptotically converges to zero; in addition, we propose test matrix constructions for which the number of tests closely matches the lower bound. For semi-adaptive methods, we describe a lower bound on the expected number of tests required to identify the defectives with zero error probability. In addition, we propose a stage-wise reconstruction algorithm for which the expected number of tests is only a constant factor away from the lower bound. The methods rely only on an estimate of the average number of defectives, rather than on the individual probabilities of subjects being defective.

9. Lattice Metamaterials with Mechanically Tunable Poisson's Ratio for Vibration Control

Chen, Yanyu; Li, Tiantian; Scarpa, Fabrizio; Wang, Lifeng

2017-02-01

Metamaterials with artificially designed architectures are increasingly considered as new paradigmatic material systems with unusual physical properties. Here, we report a class of architected lattice metamaterials with mechanically tunable negative Poisson's ratios and vibration-mitigation capability. The proposed lattice metamaterials are built by replacing regular straight beams with sinusoidally shaped ones, which are highly stretchable under uniaxial tension. Our experimental and numerical results indicate that the proposed lattices exhibit extreme Poisson's-ratio variations between -0.7 and 0.5 over large tensile deformations up to 50%. This large variation of Poisson's-ratio values is attributed to the deformation pattern switching from bending to stretching within the sinusoidally shaped beams. The interplay between the multiscale (ligament and cell) architecture and wave propagation also enables remarkable broadband vibration-mitigation capability of the lattice metamaterials, which can be dynamically tuned by an external mechanical stimulus. The material design strategy provides insights into the development of classes of architected metamaterials with potential applications including energy absorption, tunable acoustics, vibration control, responsive devices, soft robotics, and stretchable electronics.
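As a reminder of the quantity being tuned, Poisson's ratio is the negative ratio of transverse to axial strain; a minimal sketch with illustrative strain values (not the measured lattice data) reproduces the reported endpoints of -0.7 and 0.5:

```python
def poissons_ratio(eps_axial, eps_transverse):
    """Poisson's ratio nu = -eps_transverse / eps_axial."""
    return -eps_transverse / eps_axial

# Lateral *expansion* under tension gives the auxetic (negative nu) case.
nu_auxetic = poissons_ratio(0.10, 0.07)    # nu = -0.7
nu_regular = poissons_ratio(0.10, -0.05)   # lateral contraction: nu = 0.5
```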

10. Poisson Downward Continuation Solution by the Jacobi Method

Kingdon, R.; Vaníček, P.

2011-03-01

Downward continuation is a continuing problem in geodesy and geophysics. Inversion of the discrete form of the Poisson integration process provides a numerical solution to the problem, but because the B matrix that defines the discrete Poisson integration is not always well conditioned, the solution may be noisy in situations where the discretization step is small and in areas containing large heights. We provide two remedies, both in the context of the Jacobi iterative solution to the Poisson downward continuation problem. First, we suggest testing according to the upward continued result from each solution, rather than testing between successive solutions on the geoid, so that the choice of a tolerance for the convergence of the iterative method is more meaningful and intuitive. Second, we show how a tolerance that reflects the conditioning of the B matrix can regularize the solution, and suggest an approximate way of choosing such a tolerance. Using these methods, we are able to calculate a solution that appears regular in an area of Papua New Guinea having heights over 3200 m, over a grid with 1 arc-minute spacing, based on a very poorly conditioned B matrix.
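A minimal sketch of the Jacobi iteration applied to a generic system B x = b; the 3x3 diagonally dominant matrix below is a toy stand-in, not an actual discrete Poisson-integration matrix:

```python
def jacobi(B, b, tol=1e-10, max_iter=10_000):
    """Stationary Jacobi iteration for B x = b (lists of lists / list)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        # Each component is updated from the previous iterate only.
        x_new = [
            (b[i] - sum(B[i][j] * x[j] for j in range(n) if j != i)) / B[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Toy diagonally dominant system with exact solution [1, 1, 1].
B = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = jacobi(B, b)
```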

11. Blind beam-hardening correction from Poisson measurements

Gu, Renliang; Dogandžić, Aleksandar

2016-02-01

We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.

12. A Poisson-Boltzmann dynamics method with nonperiodic boundary condition

Lu, Qiang; Luo, Ray

2003-12-01

We have developed a well-behaved and efficient finite difference Poisson-Boltzmann dynamics method with a nonperiodic boundary condition. This is made possible, in part, by a rather fine grid spacing used for the finite difference treatment of the reaction field interaction. The stability is also made possible by a new dielectric model that is smooth both over time and over space, an important issue in the application of implicit solvents. In addition, the electrostatic focusing technique facilitates the use of an accurate yet efficient nonperiodic boundary condition: boundary grid potentials computed by the sum of potentials from individual grid charges. Finally, the particle-particle particle-mesh technique is adopted in the computation of the Coulombic interaction to balance accuracy and efficiency in simulations of large biomolecules. Preliminary testing shows that the nonperiodic Poisson-Boltzmann dynamics method is numerically stable in trajectories at least 4 ns long. The new model is also fairly efficient: its cost is comparable to that of the pairwise generalized Born solvent model, making it a strong candidate for dynamics simulations of biomolecules in dilute aqueous solutions. Note that the current treatment of total electrostatic interactions is with no cutoff, which is important for simulations of biomolecules. Rigorous treatment of the Debye-Hückel screening is also possible within the Poisson-Boltzmann framework: its importance is demonstrated by a simulation of a highly charged protein.

13. Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers

PubMed Central

Wang, Jun; Luo, Ray

2009-01-01

CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the number of grid points. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at very similar rates. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271

14. On Poisson's ratio and composition of the Earth's lower mantle

Poirier, J. P.

1987-07-01

Poisson's ratio of the lower mantle, calculated from recently published values of seismic wave velocities and extrapolated to atmospheric pressure and room temperature, is found to be in the range 0.23 ⩽ ν ⩽ 0.25. These values are compared with the values of Poisson's ratio calculated for binary mixtures of MgSiO3 perovskite and magnesiowüstite with various iron contents. Current values of the experimental error on measured elastic moduli give little hope of discriminating between pyrolite and chondritic lower mantles: both are acceptable if the shear modulus of perovskite is in the upper range of Liebermann et al.'s estimates. A similar calculation using the seismic parameter φ confirms the results obtained by considering Poisson's ratio and further constrains the value of the shear modulus of perovskite to lie between 1600 and 1700 kilobars for current mantle models to remain plausible. Chemical stratification of the mantle is, therefore, possible but not required by seismological data.
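The standard elastic relation behind such estimates is ν = (Vp² - 2Vs²) / (2(Vp² - Vs²)); a sketch with round illustrative wave speeds, not the actual lower-mantle values:

```python
def poisson_ratio_from_velocities(vp, vs):
    """Poisson's ratio from P- and S-wave speeds (same units for both)."""
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

# Illustrative round numbers, in km/s.
nu = poisson_ratio_from_velocities(vp=13.0, vs=7.0)  # ~0.296
```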

15. Poisson-like spiking in circuits with probabilistic synapses.

PubMed

Moreno-Bote, Rubén

2014-07-01

Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over a broad range of firing rates, from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex.

16. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

PubMed

Wang, Yiyi; Kockelman, Kara M

2013-11-01

This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates.

17. Ridge regression processing

NASA Technical Reports Server (NTRS)

Kuhl, Mark R.

1990-01-01

Current navigation requirements depend on a geometric dilution of precision (GDOP) criterion. As long as the GDOP stays below a specific value, navigation requirements are met. The GDOP will exceed the specified value when the measurement geometry becomes too collinear. A new signal processing technique, called Ridge Regression Processing, can reduce the effects of nearly collinear measurement geometry; thereby reducing the inflation of the measurement errors. It is shown that the Ridge signal processor gives a consistently better mean squared error (MSE) in position than the Ordinary Least Mean Squares (OLS) estimator. The applicability of this technique is currently being investigated to improve the following areas: receiver autonomous integrity monitoring (RAIM), coverage requirements, availability requirements, and precision approaches.

18. Classification and regression trees for epidemiologic research: an air pollution example

PubMed Central

2014-01-01

Background Identifying and characterizing how mixtures of exposures are associated with health endpoints is challenging. We demonstrate how classification and regression trees can be used to generate hypotheses regarding joint effects from exposure mixtures. Methods We illustrate the approach by investigating the joint effects of CO, NO2, O3, and PM2.5 on emergency department visits for pediatric asthma in Atlanta, Georgia. Pollutant concentrations were categorized as quartiles. Days when all pollutants were in the lowest quartile were held out as the referent group (n = 131) and the remaining 3,879 days were used to estimate the regression tree. Pollutants were parameterized as dichotomous variables representing each ordinal split of the quartiles (e.g. comparing CO quartile 1 vs. CO quartiles 2–4) and considered one at a time in a Poisson case-crossover model with control for confounding. The pollutant-split resulting in the smallest P-value was selected as the first split and the dataset was partitioned accordingly. This process was repeated for each subset of the data until the P-values for the remaining splits were not below a given alpha, resulting in the formation of a “terminal node”. We used the case-crossover model to estimate the adjusted risk ratio for each terminal node compared to the referent group, as well as the likelihood ratio test for the inclusion of the terminal nodes in the final model. Results The largest risk ratio corresponded to days when PM2.5 was in the highest quartile and NO2 was in the lowest two quartiles (RR: 1.10, 95% CI: 1.05, 1.16). A simultaneous Wald test for the inclusion of all terminal nodes in the model was significant, with a chi-square statistic of 34.3 (p = 0.001, with 13 degrees of freedom). Conclusions Regression trees can be used to hypothesize about joint effects of exposure mixtures and may be particularly useful in the field of air pollution epidemiology for gaining a better understanding of complex

19. Polarizable atomic multipole solutes in a Poisson-Boltzmann continuum

Schnieders, Michael J.; Baker, Nathan A.; Ren, Pengyu; Ponder, Jay W.

2007-03-01

Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar molecule in water. A theoretical approach introduced by Onsager [J. Am. Chem. Soc. 58, 1486 (1936)] used vacuum properties of small molecules, including polarizability, dipole moment, and size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation. Only recently have the classical force fields used for studying biomolecules begun to include explicit polarization in their functional forms. Here the authors describe the theory underlying a newly developed polarizable multipole Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the atomic multipole optimized energetics for biomolecular applications (AMOEBA) force field. As an application of the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment of each protein increased on average by a factor of 1.27 in explicit AMOEBA water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational analysis, and pKa prediction. Introduction of 150 mM salt lowered the electrostatic solvation energy between 2 and 13 kcal/mol, depending on

20. Brain, music, and non-Poisson renewal processes

Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo

2007-06-01

In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(-(γt)^α), with 0.5 < α < 1]. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse power law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ⩽ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.

1. Poisson-event-based analysis of cell proliferation.

PubMed

Summers, Huw D; Wills, John W; Brown, M Rowan; Rees, Paul

2015-05-01

A protocol for the assessment of cell proliferation dynamics is presented. This is based on the measurement of cell division events and their subsequent analysis using Poisson probability statistics. Detailed analysis of proliferation dynamics in heterogeneous populations requires single cell resolution within a time series analysis and so is technically demanding to implement. Here, we show that by focusing on the events during which cells undergo division rather than directly on the cells themselves, a simplified image acquisition and analysis protocol can be followed, which maintains single cell resolution and reports on the key metrics of cell proliferation. The technique is demonstrated using a microscope with 1.3 μm spatial resolution to track mitotic events within A549 and BEAS-2B cell lines, over a period of up to 48 h. Automated image processing of the bright field images using standard algorithms within the ImageJ software toolkit yielded 87% accurate recording of the manually identified temporal and spatial positions of the mitotic event series. Analysis of the statistics of the interevent times (i.e., times between observed mitoses in a field of view) showed that cell division conformed to a nonhomogeneous Poisson process in which the rate of occurrence of mitotic events, λ, increased exponentially over time, and provided values of the mean intermitotic time of 21.1 ± 1.2 h for the A549 cells and 25.0 ± 1.1 h for the BEAS-2B cells. Comparison of the mitotic event series for the BEAS-2B cell line to that predicted by random Poisson statistics indicated that temporal synchronisation of the cell division process was occurring within 70% of the population and that this could be increased to 85% through serum starvation of the cell culture.
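A quick way to screen event times for Poisson-like behavior, sketched on synthetic data (not the A549/BEAS-2B measurements): for a homogeneous Poisson process the interevent times are exponential, so their coefficient of variation should be near 1.

```python
import math
import random

def interevent_cv(event_times):
    """Coefficient of variation of the gaps between sorted event times."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return math.sqrt(var) / mean

# Simulate a unit-rate homogeneous Poisson process.
random.seed(1)
t, times = 0.0, []
for _ in range(5000):
    t += random.expovariate(1.0)
    times.append(t)

cv = interevent_cv(times)  # close to 1 for a Poisson process
```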

2. A Generalized QMRA Beta-Poisson Dose-Response Model.

PubMed

Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

2016-10-01

Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, Kmin, is not fixed, but a random variable following a geometric distribution with parameter 0 < r* ≤ 1. The standard beta-Poisson model, PI(d|α,β), is a special case of the generalized model with Kmin = 1 (which implies r* = 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates produced fall short of meeting the required condition of r* = 1 for the single-hit assumption. However, for three of the four data sets the generalized model did not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model.
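For orientation, the widely used two-parameter approximate beta-Poisson curve is PI(d) = 1 - (1 + d/β)^(-α); the three-parameter model in this abstract generalizes that form. A sketch with illustrative parameter values:

```python
def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson dose-response: P(infection) at mean dose."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameters only, not fitted values from the study.
p = beta_poisson(dose=100.0, alpha=0.2, beta=50.0)
```

The curve is monotone in dose and bounded in (0, 1), which is what makes it convenient as a single-hit risk model.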

3. Brain, music, and non-Poisson renewal processes.

PubMed

Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S; Ross, Mary J; Winsor, Phil; Grigolini, Paolo

2007-06-01

In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(-(γt)^α), with 0.5 < α < 1]. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse power law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ⩽ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.

4. Maslov indices, Poisson brackets, and singular differential forms

Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.

2014-06-01

Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.

5. Poisson-Boltzmann theory for two parallel uniformly charged plates

SciTech Connect

Xing Xiangjun

2011-04-15

We solve the nonlinear Poisson-Boltzmann equation for two parallel and like-charged plates both inside a symmetric electrolyte, and inside a 2:1 asymmetric electrolyte, in terms of Weierstrass elliptic functions. From these solutions we derive the functional relation between the surface charge density, the plate separation, and the pressure between plates. For the one plate problem, we obtain exact expressions for the electrostatic potential and for the renormalized surface charge density, both in symmetric and in asymmetric electrolytes. For the two plate problems, we obtain new exact asymptotic results in various regimes.

6. A Poisson process approximation for generalized K-S confidence regions

NASA Technical Reports Server (NTRS)

Arsham, H.; Miller, D. R.

1982-01-01

One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
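The generalized Kolmogorov-Smirnov distance above builds on the classical one-sided statistic D+ = sup_x (F_n(x) - F(x)). A minimal sketch of computing D+ from an empirical CDF (the tail-weighted generalization and the Poisson-process approximation themselves are not implemented here):

```python
import random

def dplus_statistic(sample, cdf):
    """One-sided Kolmogorov-Smirnov statistic D+ = sup_x (F_n(x) - F(x)).

    For a sorted sample of size n the supremum is attained just after an
    observation, so D+ = max_i (i/n - F(x_(i))).
    """
    xs = sorted(sample)
    n = len(xs)
    return max((i + 1) / n - cdf(x) for i, x in enumerate(xs))

random.seed(0)
# Uniform(0,1) data tested against its own CDF: D+ shrinks as n grows.
sample = [random.random() for _ in range(1000)]
d = dplus_statistic(sample, lambda x: x)
print(round(d, 3))
```

A one-sided confidence band follows by inverting the distribution of D+: the region {F : F(x) >= F_n(x) - d_alpha} covers the true CDF with the chosen confidence level.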

7. An empirical Bayes approach for the Poisson life distribution.

NASA Technical Reports Server (NTRS)

Canavos, G. C.

1973-01-01

A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
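The flavor of empirical Bayes estimation for Poisson data can be seen in Robbins' classical nonparametric estimator, a simpler relative of the smooth estimator derived in the record above (this sketch is illustrative and is not the paper's estimator):

```python
import math
import random
from collections import Counter

def poisson_draw(rng, lam):
    # Knuth's product-of-uniforms method for a Poisson(lam) variate.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def robbins_estimates(counts):
    """Robbins' nonparametric empirical Bayes estimate of the Poisson mean
    given an observed count x:  E[lambda | x] ~= (x + 1) * N(x + 1) / N(x),
    where N(k) is the number of observations equal to k."""
    freq = Counter(counts)
    return {x: (x + 1) * freq.get(x + 1, 0) / freq[x] for x in sorted(freq)}

rng = random.Random(1)
counts = [poisson_draw(rng, 2.0) for _ in range(20000)]
est = robbins_estimates(counts)
# For a pure Poisson(2) sample every estimate should hover near 2.
print({x: round(v, 2) for x, v in est.items() if x <= 3})
```

The identity behind the formula is (x + 1) P(x + 1) / P(x) = lambda for the Poisson pmf; replacing probabilities with observed frequencies gives the estimator without ever specifying the prior.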

8. Poisson approach to clustering analysis of regulatory sequences.

PubMed

Wang, Haiying; Zheng, Huiru; Hu, Jinglu

2008-01-01

The presence of similar patterns in regulatory sequences may aid users in identifying co-regulated genes or inferring regulatory modules. By modelling pattern occurrences in regulatory regions with Poisson statistics, this paper presents a distance measure based on a log-likelihood ratio statistic to calculate pair-wise similarities between regulatory sequences. We employed it within three clustering algorithms: hierarchical clustering, the Self-Organising Map, and a self-adaptive neural network. The results indicate that, in comparison to traditional clustering algorithms, incorporating the log-likelihood ratio statistic-based distance into the learning process may offer considerable improvements in regulatory sequence-based classification of genes.

9. Fission meter and neutron detection using poisson distribution comparison

DOEpatents

Rowland, Mark S; Snyderman, Neal J

2014-11-18

A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
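The Poisson-comparison idea can be sketched with a variance-to-mean (Feynman-style) test: for a Poisson source the ratio is near 1, while grouped neutrons from fission-like bursts push it above 1. A toy simulation, in which the burst multiplicity (uniform on 1-4) is an illustrative assumption rather than physics from the patent:

```python
import math
import random
import statistics

def poisson_draw(rng, lam):
    # Knuth's product-of-uniforms method for a Poisson(lam) variate.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def variance_to_mean(counts):
    """Gate counts from a Poisson source have variance equal to the mean
    (ratio ~ 1); correlated neutrons from bursts push the ratio above 1."""
    return statistics.pvariance(counts) / statistics.mean(counts)

rng = random.Random(2)
poisson_counts = [poisson_draw(rng, 4.0) for _ in range(5000)]
# Crude stand-in for fission multiplicity: each event emits a burst of 1-4 neutrons.
burst_counts = [sum(rng.randint(1, 4) for _ in range(poisson_draw(rng, 2.0)))
                for _ in range(5000)]
r_poisson = variance_to_mean(poisson_counts)
r_burst = variance_to_mean(burst_counts)
print(round(r_poisson, 2), round(r_burst, 2))
```

For the burst model the ratio is analytically E[X^2]/E[X] = 7.5/2.5 = 3, so the excess over 1 directly measures the grouping that the detector system looks for.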

10. Theory of multicolor lattice gas - A cellular automaton Poisson solver

NASA Technical Reports Server (NTRS)

Chen, H.; Matthaeus, W. H.; Klein, L. W.

1990-01-01

The present class of cellular automaton models involves a quiescent hydrodynamic lattice gas carrying multiple-valued passive labels termed 'colors'; lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is equivalent to a Poisson equation solution.

11. Lie-Poisson bifurcations for the Maxwell-Bloch equations

SciTech Connect

David, D.

1990-01-01

We present a study of the set of Maxwell-Bloch equations on R^3 from the point of view of Hamiltonian dynamics. These equations are shown to be bi-Hamiltonian on the one hand, and to possess several inequivalent Lie-Poisson structures on the other, parametrized by the group SL(2,R). Each structure is characterized by a particular distinguished function. The level sets of this function provide two-dimensional surfaces onto which the motion takes various symplectic forms.

12. Poisson's Ratio and the Densification of Glass under High Pressure

SciTech Connect

Rouxel, T.; Ji, H.; Hammouda, T.; Moreac, A.

2008-06-06

Because of a relatively low atomic packing density (C_g), glasses experience significant densification under high hydrostatic pressure. Poisson's ratio (nu) is correlated to C_g and typically varies from 0.15 for glasses with low C_g, such as amorphous silica, to 0.38 for close-packed atomic networks such as bulk metallic glasses. Pressure experiments were conducted up to 25 GPa at 293 K on silica, soda-lime-silica, chalcogenide, and bulk metallic glasses. We show from these high-pressure data that there is a direct correlation between nu and the maximum post-decompression density change.

13. Some Poisson structures and Lax equations associated with the Toeplitz lattice and the Schur lattice

Lemarie, Caroline

2016-01-01

The Toeplitz lattice is a Hamiltonian system whose Poisson structure is known. In this paper, we unveil the origins of this Poisson structure and derive from it the associated Lax equations for this lattice. We first construct a Poisson subvariety H_n of GL_n(C), which we view as a real or complex Poisson-Lie group whose Poisson structure comes from a quadratic R-bracket on gl_n(C) for a fixed R-matrix. The existence of Hamiltonians associated to the Toeplitz lattice for the Poisson structure on H_n, combined with the properties of the quadratic R-bracket, allows us to give explicit formulas for the Lax equation. We then derive from it the integrability in the sense of Liouville of the Toeplitz lattice. When we view the lattice as being defined over R, we can construct a Poisson subvariety H_n^tau of U_n which is itself a Poisson-Dirac subvariety of GL_n^R(C). We then construct a Hamiltonian for the Poisson structure induced on H_n^tau, corresponding to another system which derives from the Toeplitz lattice: the modified Schur lattice. Thanks to the properties of Poisson-Dirac subvarieties, we give an explicit Lax equation for the new system and derive from it a Lax equation for the Schur lattice. We also deduce the integrability in the sense of Liouville of the modified Schur lattice.

14. Evaluating differential effects using regression interactions and regression mixture models

PubMed Central

Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

2015-01-01

Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to using an interaction term in linear regression. The research questions each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and that regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
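The contrast between observed-group interactions and latent-class mixtures can be illustrated with a toy example: when class membership is observed, fitting a slope per class (equivalently, an interaction term) recovers the differential effect directly. A sketch with made-up parameters; a regression mixture would be needed if the grouping variable g were unobserved:

```python
import random

def ols_slope_intercept(xs, ys):
    """Ordinary least squares fit of y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

rng = random.Random(5)
# Two classes whose x -> y slopes differ: a differential effect.
data = []
for _ in range(4000):
    g = rng.random() < 0.5
    x = rng.gauss(0.0, 1.0)
    slope = 2.0 if g else 0.5
    data.append((g, x, slope * x + rng.gauss(0.0, 0.3)))

# With class membership observed, a slope per class (an interaction) suffices.
slopes = {}
for g in (True, False):
    xs = [x for gg, x, _ in data if gg == g]
    ys = [y for gg, _, y in data if gg == g]
    _, slopes[g] = ols_slope_intercept(xs, ys)
print({k: round(v, 2) for k, v in slopes.items()})
```

The recovered slopes approximate the generating values 2.0 and 0.5; the interaction effect is their difference.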

15. Identification d’une Classe de Processus de Poisson Filtres (Identification of a Class of Filtered Poisson Processes).

DTIC Science & Technology

1983-05-20

... is the parameter of the Poisson process. An event at time T produces, on a sensor, at instant t, an effect proportional to ... of finite energy ... normalized G ... The captured energy is random, and the model considered ... invariant; thus the effect poses no theoretical problem ... the observation on the sensor is C G(t - T), with G a vector-valued function ...

16. Measurement of Young's Modulus and Poisson's Ratio of Tuna Fish

Ogawa, Yutaka; Hagura, Yoshio

Considering that gape and heave produced during the freezing of tuna derive from changes in the mechanical properties of the fish itself during freezing, the Poisson's ratio and Young's modulus of tuna meat were measured under three temperature conditions: (i) no freezing, (ii) partial freezing, and (iii) freezing. The measurements showed that the mechanical properties of tuna meat displayed temperature dependence, changing suddenly at the boundary temperature where freezing begins, as summarized below: 1) the mechanical properties of tuna meat were anisotropic according to the tissue and structure of the fish body, but varied greatly with test temperature; 2) the Young's modulus of non-frozen tuna meat was approximately 50 kPa but became extremely large (approximately 4 GPa) after freezing; and 3) the Poisson's ratio decreased as the frozen water percentage increased, but displayed an approximate value of one or more.

17. PSH3D fast Poisson solver for petascale DNS

Adams, Darren; Dodd, Michael; Ferrante, Antonino

2016-11-01

Direct numerical simulation (DNS) of high Reynolds number, Re >= O(10^5), turbulent flows requires computational meshes of >= O(10^12) grid points and, thus, the use of petascale supercomputers. DNS often requires the solution of a Helmholtz (or Poisson) equation for pressure, which constitutes the bottleneck of the solver. We have developed a parallel solver of the Helmholtz equation in 3D, PSH3D. The numerical method underlying PSH3D combines a parallel 2D fast Fourier transform in two spatial directions and a parallel linear solver in the third direction. For computational meshes up to 8192^3 grid points, our numerical results show that PSH3D scales up to at least 262k cores of Cray XT5 (Blue Waters). PSH3D has a peak performance 6x faster than 3D FFT-based methods when used with the 'partial-global' optimization, and for an 8192^3 mesh it solves the Poisson equation in 1 s using 128k cores. We have also verified that the use of PSH3D with the 'partial-global' optimization in our DNS solver does not reduce the accuracy of the numerical solution of the incompressible Navier-Stokes equations.
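The pressure equation solved by PSH3D can be sketched in its simplest serial form: a 1D Poisson problem discretized with second-order central differences and solved by the Thomas tridiagonal algorithm. PSH3D's parallel FFT decomposition is not reproduced here; this only shows the underlying equation and the kind of line solve used in the third direction:

```python
import math

def solve_poisson_1d(f, n):
    """Solve u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 using second-order
    central differences and the Thomas (tridiagonal) algorithm."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    # Discretization: -u[i-1] + 2 u[i] - u[i+1] = -h^2 f(x_i)
    d = [-h * h * f(xi) for xi in x]
    a, b, c = -1.0, 2.0, -1.0  # sub-, main, and super-diagonal entries
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (d[i] - a * dp[i - 1]) / denom
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return x, u

# Manufactured solution u = sin(pi x), so f = u'' = -pi^2 sin(pi x).
x, u = solve_poisson_1d(lambda t: -math.pi ** 2 * math.sin(math.pi * t), 127)
err = max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))
print(f"max error: {err:.2e}")
```

The error shrinks as O(h^2) with grid refinement, which is the accuracy order the finite-difference stencil promises.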

18. Saint-Venant end effects for materials with negative Poisson's ratios

NASA Technical Reports Server (NTRS)

Lakes, R. S.

1992-01-01

Results are presented from an analysis of Saint-Venant end effects for materials with negative Poisson's ratio. Examples are presented showing that slow decay of end stress occurs in circular cylinders of negative Poisson's ratio, whereas a sandwich panel containing rigid face sheets and a compliant core exhibits no anomalous effects for negative Poisson's ratio (but exhibits slow stress decay for core Poisson's ratios approaching 0.5). In sandwich panels with stiff but not perfectly rigid face sheets, a negative Poisson's ratio results in end stress decay which is faster than it would be otherwise. It is suggested that the slow decay previously predicted for sandwich strips in plane deformation as a result of the geometry can be mitigated by the use of a negative Poisson's ratio material for the core.

19. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

ERIC Educational Resources Information Center

Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

2015-01-01

Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interaction term in linear regression. The research questions which each model answers, their…

20. Error bounds in cascading regressions

USGS Publications Warehouse

Karlinger, M.R.; Troutman, B.M.

1985-01-01

Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in coefficients of a cascaded-regression line as well as error variance of points about the line are functions of the correlation coefficient between dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.

1. Transfer Learning Based on Logistic Regression

Paul, A.; Rottensteiner, F.; Heipke, C.

2015-08-01

In this paper we address the problem of classification of remote sensing images in the framework of transfer learning, with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in a target domain with different, yet related, characteristics. Classification takes place with a model of weight coefficients for hyperplanes which separate features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set, which at the beginning contains only source features, while a number of source features are simultaneously deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof of concept of the proposed method.

2. Non-linear properties of metallic cellular materials with a negative Poisson's ratio

NASA Technical Reports Server (NTRS)

Choi, J. B.; Lakes, R. S.

1992-01-01

Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.

3. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

PubMed

Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

2015-05-01

The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.

4. Logistic Regression: Concept and Application

ERIC Educational Resources Information Center

Cokluk, Omay

2010-01-01

The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…

5. Precision Efficacy Analysis for Regression.

ERIC Educational Resources Information Center

Brooks, Gordon P.

When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…

6. Numerical calibration of the stable poisson loaded specimen

NASA Technical Reports Server (NTRS)

Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.

1992-01-01

An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

7. Testing homogeneity of two zero-inflated Poisson populations.

PubMed

Tse, Siu Keung; Chow, Shein Chung; Lu, Qingshu; Cosmatos, Dennis

2009-02-01

The problem of testing for a treatment difference in the occurrence of a safety parameter in a randomized parallel-group comparative clinical trial, under the assumption that the number of occurrences follows a zero-inflated Poisson (ZIP) distribution, is considered. Likelihood ratio tests (LRT) for homogeneity of two ZIP populations are derived under the hypotheses that (i) there is no difference in inflation parameters, (ii) there is no difference in non-zero means, and (iii) there is no difference in both inflation parameters and non-zero means. Approximate formulas for sample size calculation are also obtained for achieving a desired power for detecting a clinically meaningful difference under the corresponding alternative hypotheses. An example concerning the assessment of gastrointestinal (GI) safety, in terms of the number of erosion counts of a newly developed compound for the treatment of osteoarthritis and rheumatoid arthritis, is given for illustration purposes.
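The building block of such tests is the ZIP likelihood itself. A minimal sketch of the ZIP log-likelihood with a crude grid-search MLE; the record's LRTs would compare a pooled fit against separate fits for the two populations, and the grid bounds and sample parameters here are illustrative assumptions:

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's product-of-uniforms method for a Poisson(lam) variate.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def zip_loglik(counts, pi, lam):
    """Log-likelihood of a zero-inflated Poisson: with probability pi the
    count is a structural zero, otherwise it is Poisson(lam)."""
    ll = 0.0
    for x in counts:
        if x == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) - lam + x * math.log(lam) - math.lgamma(x + 1)
    return ll

def zip_mle(counts, grid=40):
    """Crude grid-search MLE over (pi, lam); adequate for a sketch."""
    best = None
    for i in range(1, grid):
        pi = 0.9 * i / grid
        for j in range(1, grid):
            lam = 8.0 * j / grid
            ll = zip_loglik(counts, pi, lam)
            if best is None or ll > best[0]:
                best = (ll, pi, lam)
    return best

rng = random.Random(3)
# Simulated data with true inflation 0.3 and Poisson mean 2.
sample = [0 if rng.random() < 0.3 else poisson_draw(rng, 2.0) for _ in range(500)]
ll_hat, pi_hat, lam_hat = zip_mle(sample)
print(round(pi_hat, 2), round(lam_hat, 2))
```

An LRT statistic would then be 2 * (sum of the two separate maximized log-likelihoods minus the pooled one), referred to a chi-squared distribution.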

8. Shape representation and classification using the poisson equation.

PubMed

Gorelick, Lena; Galun, Meirav; Sharon, Eitan; Basri, Ronen; Brandt, Achi

2006-12-01

We present a novel approach that allows us to reliably compute many useful properties of a silhouette. Our approach assigns, for every internal point of the silhouette, a value reflecting the mean time required for a random walk beginning at the point to hit the boundaries. This function can be computed by solving Poisson's equation, with the silhouette contours providing boundary conditions. We show how this function can be used to reliably extract various shape properties including part structure and rough skeleton, local orientation and aspect ratio of different parts, and convex and concave sections of the boundaries. In addition to this, we discuss properties of the solution and show how to efficiently compute this solution using multigrid algorithms. We demonstrate the utility of the extracted properties by using them for shape classification and retrieval.

9. Flux theory for Poisson distributed pores with Gaussian permeability.

PubMed

Salinas, Dino G

2016-01-01

The mean of the solute flux through membrane pores depends on the random distribution and permeability of the pores. Mathematical models including such randomness factors make it possible to obtain statistical parameters for pore characterization. Here, assuming that pores follow a Poisson distribution in the lipid phase and that their permeabilities follow a Gaussian distribution, a mathematical model for solute dynamics is obtained by applying a general result from a previous work regarding any number of different kinds of randomly distributed pores. The proposed theory is studied using experimental parameters obtained elsewhere, and a method for finding the mean single-pore flux rate from liposome flux assays is suggested. This method characterizes pores without requiring patch-clamp studies in single cells or single-channel recordings. However, it does not apply to ion-selective channels, for which a more complex flux law combining the concentration and electrical gradients is required.
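The compound structure of the model, a Poisson number of pores each with a Gaussian permeability, implies a mean flux equal to the pore rate times the mean permeability. A toy simulation under assumed parameter values (not those of the cited experiments):

```python
import math
import random
import statistics

def poisson_draw(rng, lam):
    # Knuth's product-of-uniforms method for a Poisson(lam) variate.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def liposome_flux(rng, pore_rate, perm_mean, perm_sd):
    """Flux through one liposome: a Poisson number of pores, each with an
    independent Gaussian permeability (clipped at zero)."""
    n_pores = poisson_draw(rng, pore_rate)
    return sum(max(0.0, rng.gauss(perm_mean, perm_sd)) for _ in range(n_pores))

rng = random.Random(7)
# Hypothetical parameters: 2 pores per liposome on average, permeability ~ N(3, 0.5).
fluxes = [liposome_flux(rng, 2.0, 3.0, 0.5) for _ in range(20000)]
mean_flux = statistics.mean(fluxes)
# With negligible clipping, E[flux] = pore_rate * perm_mean = 6.
print(round(mean_flux, 2))
```

Inverting this relation is the essence of the suggested method: the mean single-pore flux rate follows from the assay-level mean once the pore rate is known.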

10. Application of the sine-Poisson equation in solar magnetostatics

NASA Technical Reports Server (NTRS)

Webb, G. M.; Zank, G. P.

1990-01-01

Solutions of the sine-Poisson equation are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with distance vertical to the base, with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or no X-points.

11. Lindley frailty model for a class of compound Poisson processes

2013-10-01

The Lindley distribution gains importance in survival analysis due to its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, in some circumstances it is appropriate to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. The fit of the models to an earthquake data set from Turkey is then examined.

12. Analytical stress intensity solution for the stable Poisson loaded specimen

Ghosn, Louis J.; Calomino, Anthony M.; Brewer, David N.

1993-04-01

An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles, are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

13. Numerical calibration of the stable poisson loaded specimen

Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.

1992-10-01

An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

14. Analytical stress intensity solution for the Stable Poisson Loaded specimen

Ghosn, Louis J.; Calomino, Anthony M.; Brewer, David N.

1993-04-01

An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles, are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

15. Note on the Poisson structure of the damped oscillator

SciTech Connect

Hone, A. N. W.; Senthilvelan, M.

2009-10-15

The damped harmonic oscillator is one of the most studied systems with respect to the problem of quantizing dissipative systems. Recently Chandrasekar et al. [J. Math. Phys. 48, 032701 (2007)] applied the Prelle-Singer method to construct conserved quantities and an explicit time-independent Lagrangian and Hamiltonian structure for the damped oscillator. Here we describe the associated Poisson bracket which generates the continuous flow, pointing out that there is a subtle problem of definition on the whole phase space. The action-angle variables for the system are also presented, and we further explain how to extend these considerations to the discrete setting. Some implications for the quantum case are briefly mentioned.

16. Poisson's ratios of auxetic and other technological materials.

PubMed

Ballato, Arthur

2010-01-01

Poisson's ratio, the relation between lateral contraction of a thin, linearly elastic rod when subjected to a longitudinal extension, has a long and interesting history. For isotropic bodies, it can theoretically range from +1/2 to -1; the experimental gamut for anisotropics is even larger. The ratio is positive for all combinations of directions in most crystals. But as far back as the 1800s, Voigt and others found that negative values were encountered for some materials, a property now called auxeticity. Here we examine this property from the point of view of crystal stability and compute extrema of the ratio for various interesting and technologically important materials. Potential applications of the auxetic property are mentioned.

17. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

NASA Technical Reports Server (NTRS)

Hong, Yie-Ming

1973-01-01

Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

18. Nonstationary elementary-field light randomly triggered by Poisson impulses.

PubMed

Fernández-Pousa, Carlos R

2013-05-01

A stochastic theory of nonstationary light describing the random emission of elementary pulses is presented. The emission is governed by a nonhomogeneous Poisson point process determined by a time-varying emission rate. The model describes, in the appropriate limits, stationary, cyclostationary, locally stationary, and pulsed radiation, and reduces to a Gaussian theory in the limit of dense emission rate. The first- and second-order coherence theories are solved after the computation of second- and fourth-order correlation functions by use of the characteristic function. The ergodicity of second-order correlations under various types of detectors is explored and a number of observables, including optical spectrum, amplitude, and intensity correlations, are analyzed.
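The emission model underlying this theory, event times drawn from a nonhomogeneous Poisson point process with a time-varying rate, can be sampled by Lewis-Shedler thinning. A sketch with an assumed raised-cosine emission rate (illustrative, not from the paper):

```python
import math
import random

def nhpp_times(rate, rate_max, t_end, rng):
    """Arrival times of a nonhomogeneous Poisson process on [0, t_end] via
    Lewis-Shedler thinning: draw candidates from a homogeneous process with
    intensity rate_max and keep each with probability rate(t) / rate_max."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > t_end:
            return times
        if rng.random() < rate(t) / rate_max:
            times.append(t)

rng = random.Random(4)
# Example time-varying emission rate: a raised cosine, mean 5 events per unit time.
rate = lambda t: 5.0 * (1.0 + math.cos(2.0 * math.pi * t))
times = nhpp_times(rate, 10.0, 100.0, rng)
print(len(times))
```

The expected count is the integral of the rate (here 500 over [0, 100]); stationary, cyclostationary, and pulsed radiation correspond to different choices of the rate function.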

19. Analytical stress intensity solution for the Stable Poisson Loaded specimen

NASA Technical Reports Server (NTRS)

Ghosn, Louis J.; Calomino, Anthony M.; Brewer, David N.

1993-01-01

An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement-controlled wedge loading used for R-curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high-modulus materials.

20. Discrete Integrable Systems and Poisson Algebras From Cluster Maps

Fordy, Allan P.; Hone, Andrew

2014-01-01

We consider nonlinear recurrences generated from cluster mutations applied to quivers that have the property of being cluster mutation-periodic with period 1. Such quivers were completely classified by Fordy and Marsh, who characterised them in terms of the skew-symmetric matrix that defines the quiver. The associated nonlinear recurrences are equivalent to birational maps, and we explain how these maps can be endowed with an invariant Poisson bracket and/or presymplectic structure. Upon applying the algebraic entropy test, we are led to a series of conjectures which imply that the entropy of the cluster maps can be determined from their tropical analogues, which leads to a sharp classification result. Only four special families of these maps should have zero entropy. These families are examined in detail, with many explicit examples given, and we show how they lead to discrete dynamics that is integrable in the Liouville-Arnold sense.

1. Anisotropic norm-oriented mesh adaptation for a Poisson problem

Brèthes, Gautier; Dervieux, Alain

2016-10-01

We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user and the method allows addressing the case of multi-objective adaptation, for example in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

2. A geometric multigrid Poisson solver for domains containing solid inclusions

Botto, Lorenzo

2013-03-01

A Cartesian grid method for the fast solution of the Poisson equation in three-dimensional domains with embedded solid inclusions is presented and its performance analyzed. The efficiency of the method, which assumes Neumann conditions at the immersed boundaries, is comparable to that of a multigrid method for regular domains. The method is light in terms of memory usage, and easily adaptable to parallel architectures. Tests with random and ordered arrays of solid inclusions, including spheres and ellipsoids, demonstrate smooth convergence of the residual for small separation between the inclusion surfaces. This feature is important, for instance, in simulations of nearly-touching finite-size particles. The implementation of the method, “MG-Inc”, is available online. Catalogue identifier: AEOE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 19068 No. of bytes in distributed program, including test data, etc.: 215118 Distribution format: tar.gz Programming language: C++ (fully tested with GNU GCC compiler). Computer: Any machine supporting standard C++ compiler. Operating system: Any OS supporting standard C++ compiler. RAM: About 150 MB for 128³ resolution Classification: 4.3. Nature of problem: Poisson equation in domains containing inclusions; Neumann boundary conditions at immersed boundaries. Solution method: Geometric multigrid with finite-volume discretization. Restrictions: Stair-case representation of the immersed boundaries. Running time: Typically a fraction of a minute for 128³ resolution.
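Geometric multigrid accelerates simple relaxation by recursively solving the error equation on coarser grids. A minimal 1D sketch for -u'' = f with homogeneous Dirichlet boundaries (the MG-Inc solver above is 3D, finite-volume, with Neumann immersed boundaries; this only illustrates the V-cycle idea):

```python
import math

def vcycle(u, f, h, nu=3):
    """One multigrid V-cycle for -u'' = f on a uniform 1D grid with zero
    Dirichlet boundaries; len(u) - 1 should be a power of two."""
    n = len(u) - 1
    for _ in range(nu):                              # Gauss-Seidel pre-smoothing
        for i in range(1, n):
            u[i] = 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
    if n <= 2:
        return
    r = [0.0] * (n + 1)                              # residual r = f - A u
    for i in range(1, n):
        r[i] = f[i] + (u[i-1] - 2.0*u[i] + u[i+1]) / (h*h)
    m = n // 2
    rc = [0.0] * (m + 1)                             # full-weighting restriction
    for i in range(1, m):
        rc[i] = 0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
    ec = [0.0] * (m + 1)
    vcycle(ec, rc, 2.0*h, nu)                        # coarse-grid error equation
    for i in range(1, n):                            # linear prolongation + correction
        u[i] += ec[i//2] if i % 2 == 0 else 0.5*(ec[i//2] + ec[i//2 + 1])
    for _ in range(nu):                              # post-smoothing
        for i in range(1, n):
            u[i] = 0.5 * (u[i-1] + u[i+1] + h*h*f[i])

n, h = 64, 1.0/64
f = [math.pi**2 * math.sin(math.pi * i * h) for i in range(n + 1)]
u = [0.0] * (n + 1)                                  # exact solution: sin(pi x)
for _ in range(8):
    vcycle(u, f, h)
err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(n + 1))
```

A few V-cycles drive the algebraic error below the discretization error, which is the "smooth convergence of the residual" behavior the abstract reports for the 3D solver.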

3. A Three-dimensional Polymer Scaffolding Material Exhibiting a Zero Poisson's Ratio.

PubMed

Soman, Pranav; Fozdar, David Y; Lee, Jin Woo; Phadke, Ameya; Varghese, Shyni; Chen, Shaochen

2012-05-14

Poisson's ratio describes the degree to which a material contracts (expands) transversally when axially strained. A material with a zero Poisson's ratio does not transversally deform in response to an axial strain (stretching). In tissue engineering applications, scaffolding having a zero Poisson's ratio (ZPR) may be more suitable for emulating the behavior of native tissues and accommodating and transmitting forces to the host tissue site during wound healing (or tissue regrowth). For example, scaffolding with a zero Poisson's ratio may be beneficial in the engineering of cartilage, ligament, corneal, and brain tissues, which are known to possess Poisson's ratios of nearly zero. Here, we report a 3D biomaterial constructed from polyethylene glycol (PEG) exhibiting in-plane Poisson's ratios of zero for large values of axial strain. We use digital micro-mirror device projection printing (DMD-PP) to create single- and double-layer scaffolds composed of semi re-entrant pores whose arrangement and deformation mechanisms contribute to the zero Poisson's ratio. Strain experiments prove the zero Poisson's behavior of the scaffolds and show that the addition of layers does not change the Poisson's ratio. Human mesenchymal stem cells (hMSCs) cultured on biomaterials with zero Poisson's ratio demonstrate the feasibility of utilizing these novel materials for biological applications which require little to no transverse deformations resulting from axial strains. Techniques used in this work allow Poisson's ratio to be both scale-independent and independent of the choice of strut material for strains in the elastic regime, and therefore ZPR behavior can be imparted to a variety of photocurable biomaterials.
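The defining relation used in such strain experiments is simply the negative ratio of transverse to axial strain, so a ZPR scaffold returns zero and an auxetic one a negative value. A trivial Python sketch (the strain values are toy numbers, not measured data):

```python
def poisson_ratio(axial_strain, transverse_strain):
    """Poisson's ratio nu = -eps_transverse / eps_axial."""
    return -transverse_strain / axial_strain

zpr = poisson_ratio(0.05, 0.0)       # no transverse deformation -> nu = 0
auxetic = poisson_ratio(0.02, 0.01)  # widens while stretched -> nu = -0.5
```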

NASA Technical Reports Server (NTRS)

Kouns, H. H.; Gardner, L. D.

1987-01-01

Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.

NASA Technical Reports Server (NTRS)

Ashby, George C., Jr.; Robbins, W. Eugene; Horsley, Lewis A.

1991-01-01

Probe readily positionable in core of uniform flow in hypersonic wind tunnel. Formed of pair of mating cylindrical housings: transducer housing and pitot-tube housing. Pitot tube supported by adjustable wedge fairing attached to top of pitot-tube housing with semicircular foot. Probe adjusted both radially and circumferentially. In addition, pressure-sensing transducer cooled internally by water or other cooling fluid passing through annulus of cooling system.

6. Permeation through an open channel: Poisson-Nernst-Planck theory of a synthetic ionic channel.

PubMed Central

Chen, D; Lear, J; Eisenberg, B

1997-01-01

The synthetic channel [acetyl-(LeuSerSerLeuLeuSerLeu)3-CONH2]6 (pore diameter approximately 8 Å, length approximately 30 Å) is a bundle of six alpha-helices with blocked termini. This simple channel has complex properties, which are difficult to explain, even qualitatively, by traditional theories: its single-channel currents rectify in symmetrical solutions and its selectivity (defined by reversal potential) is a sensitive function of bathing solution. These complex properties can be fit quantitatively if the channel has fixed charge at its ends, forming a kind of macrodipole, bracketing a central charged region, and the shielding of the fixed charges is described by the Poisson-Nernst-Planck (PNP) equations. PNP fits current-voltage relations measured in 15 solutions with an r.m.s. error of 3.6% using four adjustable parameters: the diffusion coefficients in the channel's pore D_K = 2.1 × 10⁻⁶ and D_Cl = 2.6 × 10⁻⁷ cm²/s; and the fixed charge at the ends of the channel of ±0.12e (with unequal densities 0.71 M = 0.021e/Å on the N-side and -1.9 M = -0.058e/Å on the C-side). The fixed charge in the central region is 0.31e (with density P2 = 0.47 M = 0.014e/Å). In contrast to traditional theories, PNP computes the electric field in the open channel from all of the charges in the system, by a rapid and accurate numerical procedure. In essence, PNP is a theory of the shielding of fixed (i.e., permanent) charge of the channel by mobile charge and by the ionic atmosphere in and near the channel's pore. The theory fits a wide range of data because the ionic contents and potential profile in the channel change significantly with experimental conditions, as they must, if the channel simultaneously satisfies the Poisson and Nernst-Planck equations and boundary conditions. Qualitatively speaking, the theory shows that small changes in the ionic atmosphere of the channel (i.e., shielding) make big changes in the potential profile and even bigger changes in flux, because

7. Geostatistical analysis of disease data: estimation of cancer mortality risk from empirical frequencies using Poisson kriging

PubMed Central

Goovaerts, Pierre

2005-01-01

Background Cancer mortality maps are used by public health officials to identify areas of excess and to guide surveillance and control activities. Quality of decision-making thus relies on an accurate quantification of risks from observed rates which can be very unreliable when computed from sparsely populated geographical units or recorded for minority populations. This paper presents a geostatistical methodology that accounts for spatially varying population sizes and spatial patterns in the processing of cancer mortality data. Simulation studies are conducted to compare the performances of Poisson kriging to a few simple smoothers (i.e. population-weighted estimators and empirical Bayes smoothers) under different scenarios for the disease frequency, the population size, and the spatial pattern of risk. A public-domain executable with example datasets is provided. Results The analysis of age-adjusted mortality rates for breast and cervix cancers illustrated some key features of commonly used smoothing techniques. Because of the small weight assigned to the rate observed over the entity being smoothed (kernel weight), the population-weighted average leads to risk maps that show little variability. Other techniques assign larger and similar kernel weights but they use a different piece of auxiliary information in the prediction: global or local means for global or local empirical Bayes smoothers, and spatial combination of surrounding rates for the geostatistical estimator. Simulation studies indicated that Poisson kriging outperforms other approaches for most scenarios, with a clear benefit when the risk values are spatially correlated. Global empirical Bayes smoothers provide more accurate predictions under the least frequent scenario of spatially random risk. Conclusion The approach presented in this paper enables researchers to incorporate the pattern of spatial dependence of mortality rates into the mapping of risk values and the quantification of the
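The simplest smoother in the comparison, the population-weighted estimator, replaces each area's rate by the pooled rate of the area and its neighbors. A sketch with made-up counts (the Poisson kriging machinery itself does not fit in a few lines):

```python
def pop_weighted_rate(deaths, pops, neighborhood):
    """Population-weighted mortality rate over a set of area indices:
    pooled deaths over pooled population, which equals the
    population-weighted average of the individual area rates."""
    return sum(deaths[j] for j in neighborhood) / sum(pops[j] for j in neighborhood)

# toy data: area 0 smoothed together with its two neighbors (hypothetical counts)
deaths = [12, 30, 8]
pops = [10000, 25000, 5000]
smoothed = pop_weighted_rate(deaths, pops, [0, 1, 2])
```

Because each area's own rate gets only its population share of the weight, maps built this way show little variability, which is the behavior the abstract notes.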

USGS Publications Warehouse

Anderson, Walter L.

1969-01-01

The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observed equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
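The core numerical step, forming and solving the weighted normal equations, can be sketched for two unknowns (say, the coordinate shifts dx, dy of one adjustable station); the observation rows and weights below are invented for illustration:

```python
def wls2(rows, w, b):
    """Solve the weighted least squares normal equations (A^T W A) x = A^T W b
    for two unknowns, with W a diagonal weight matrix, via Cramer's rule."""
    n11 = sum(wi * a[0] * a[0] for a, wi in zip(rows, w))
    n12 = sum(wi * a[0] * a[1] for a, wi in zip(rows, w))
    n22 = sum(wi * a[1] * a[1] for a, wi in zip(rows, w))
    t1 = sum(wi * a[0] * bi for a, wi, bi in zip(rows, w, b))
    t2 = sum(wi * a[1] * bi for a, wi, bi in zip(rows, w, b))
    det = n11 * n22 - n12 * n12
    return (t1 * n22 - t2 * n12) / det, (t2 * n11 - t1 * n12) / det

# three linearized observation equations in the shifts (dx, dy), equal weights
dx, dy = wls2([(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)], [1.0, 1.0, 1.0], [1.0, 2.0, 3.0])
```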

9. Extension of the application of Conway-Maxwell-Poisson models: analyzing traffic crash data exhibiting underdispersion.

PubMed

Lord, Dominique; Geedipally, Srinivas Reddy; Guikema, Seth D

2010-08-01

The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subjected to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previous published study. The results of this research show that the COM-Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.
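The COM-Poisson pmf is P(Y = y) ∝ λ^y / (y!)^ν, where ν > 1 yields underdispersion and ν = 1 recovers the Poisson. A log-space sketch of the truncated normalization, with arbitrary parameter values chosen only to show the effect:

```python
import math

def com_poisson_pmf(lam, nu, ymax=100):
    """COM-Poisson probabilities proportional to lam**y / (y!)**nu,
    normalized by truncating the infinite series at ymax, computed
    in log space for numerical stability."""
    logw = [y * math.log(lam) - nu * math.lgamma(y + 1) for y in range(ymax + 1)]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    z = sum(w)
    return [wy / z for wy in w]

p = com_poisson_pmf(lam=4.0, nu=2.0)   # nu > 1: underdispersed regime
mean = sum(y * py for y, py in enumerate(p))
var = sum((y - mean)**2 * py for y, py in enumerate(p))
```

With ν > 1 the variance falls below the mean, the regime the crossing data exhibit; ν < 1 gives overdispersion instead.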

10. Three-Dimensional Polymer Constructs Exhibiting a Tunable Negative Poisson's Ratio.

PubMed

Fozdar, David Y; Soman, Pranav; Lee, Jin Woo; Han, Li-Hsin; Chen, Shaochen

2011-07-22

Young's modulus and Poisson's ratio of a porous polymeric construct (scaffold) quantitatively describe how it supports and transmits external stresses to its surroundings. While Young's modulus is always non-negative and highly tunable in magnitude, Poisson's ratio can, indeed, take on negative values despite the fact that it is non-negative for virtually every naturally occurring and artificial material. In some applications, a construct having a tunable negative Poisson's ratio (an auxetic construct) may be more suitable for supporting the external forces imposed upon it by its environment. Here, three-dimensional polyethylene glycol scaffolds with tunable negative Poisson's ratios are fabricated. Digital micromirror device projection printing (DMD-PP) is used to print single-layer constructs composed of cellular structures (pores) with special geometries, arrangements, and deformation mechanisms. The presence of the unit-cellular structures tunes the magnitude and polarity (positive or negative) of Poisson's ratio. Multilayer constructs are fabricated with DMD-PP by stacking the single-layer constructs with alternating layers of vertical connecting posts. The Poisson's ratios of the single- and multilayer constructs are determined from strain experiments, which show (1) that the Poisson's ratios of the constructs are accurately predicted by analytical deformation models and (2) that no slipping occurs between layers in the multilayer constructs and the addition of new layers does not affect Poisson's ratio.

11. Double-Negative Mechanical Metamaterials Displaying Simultaneous Negative Stiffness and Negative Poisson's Ratio Properties.

PubMed

Hewage, Trishan A M; Alderson, Kim L; Alderson, Andrew; Scarpa, Fabrizio

2016-12-01

A scalable mechanical metamaterial simultaneously displaying negative stiffness and negative Poisson's ratio responses is presented. Interlocking hexagonal subunit assemblies containing 3 alternative embedded negative stiffness (NS) element types display Poisson's ratio values of -1 and NS values over two orders of magnitude (-1.4 N mm⁻¹ to -160 N mm⁻¹), in good agreement with model predictions.

12. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

PubMed

de Nijs, Robin

2015-07-21

In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
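The key statistical fact can be checked directly: keeping each recorded event with probability 1/2 (binomial thinning) turns a Poisson(λ) count into an exact Poisson(λ/2) count, so mean and variance both halve. A self-contained stdlib sketch along these lines (my reading of the resampling step, not the authors' Matlab code):

```python
import math
import random

rng = random.Random(42)

def poisson(lam):
    """Knuth's multiplicative method for Poisson sampling (fine for modest lam)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def half_count(image):
    """Binomial thinning: keep each of the n counts in a pixel independently
    with probability 1/2; Poisson(lam) counts become Poisson(lam/2)."""
    return [sum(1 for _ in range(n) if rng.random() < 0.5) for n in image]

full = [poisson(100.0) for _ in range(10000)]   # "full-count" pixels, lam = 100
half = half_count(full)
mean = sum(half) / len(half)
var = sum((x - mean)**2 for x in half) / len(half)
```

For the thinned counts both the mean and the variance should be close to 50, the Poisson signature that redrawing methods can fail to preserve after rounding.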

13. Practical Session: Simple Linear Regression

Clausel, M.; Grégoire, G.

2014-12-01

Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and get coefficient estimates, the residual standard error, R2, residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and look ahead to multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
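The quantities lm reports can be reproduced by hand from the closed-form least-squares estimates. A small Python sketch with invented data points (the exercise itself uses Galton's data in R):

```python
def fit_line(x, y):
    """Simple linear regression y = a + b*x by ordinary least squares:
    b = Sxy / Sxx, a = ybar - b * xbar."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar)**2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b

x, y = [1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8]
a, b = fit_line(x, y)                                  # a = 0.15, b = 1.94
sse = sum((yi - (a + b * xi))**2 for xi, yi in zip(x, y))
sst = sum((yi - sum(y) / len(y))**2 for yi in y)
r2 = 1.0 - sse / sst          # coefficient of determination reported by lm
```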

14. Auxetic Black Phosphorus: A 2D Material with Negative Poisson's Ratio

Du, Yuchen; Maassen, Jesse; Wu, Wangran; Luo, Zhe; Xu, Xianfan; Ye, Peide D.

2016-10-01

The Poisson's ratio of a material characterizes its response to uniaxial strain. Materials normally possess a positive Poisson's ratio: they contract laterally when stretched and expand laterally when compressed. A negative Poisson's ratio is theoretically permissible but has not, with a few exceptions among man-made bulk structures, been experimentally observed in any natural material. Here, we show that a negative Poisson's ratio exists in the low-dimensional natural material black phosphorus, and that our experimental observations are consistent with first-principles simulations. Through application of uniaxial strain along the zigzag and armchair directions, we find that both interlayer and intralayer negative Poisson's ratios can be obtained in black phosphorus. The phenomenon originates from the puckered structure of its in-plane lattice, together with coupled hinge-like bonding configurations.

15. Regional regression of flood characteristics employing historical information

USGS Publications Warehouse

1987-01-01

Streamflow gauging networks provide hydrologic information for use in estimating the parameters of regional regression models. The regional regression models can be used to estimate flood statistics, such as the 100-year peak, at ungauged sites as functions of drainage basin characteristics. A recent innovation in regional regression is the use of a generalized least squares (GLS) estimator that accounts for unequal station record lengths and sample cross-correlation among the flows. However, this technique does not account for historical flood information. A method is proposed here to adjust this generalized least squares estimator to account for possible information about historical floods available at some stations in a region. The historical information is assumed to be in the form of observations of all peaks above a threshold during a long period outside the systematic record period. A Monte Carlo simulation experiment was performed to compare the GLS estimator adjusted for historical floods with the unadjusted GLS estimator and the ordinary least squares estimator. Results indicate that using the GLS estimator adjusted for historical information significantly improves the regression model. © 1987.

16. Marginalized zero-inflated negative binomial regression with application to dental caries.

PubMed

Preisser, John S; Das, Kalyan; Long, D Leann; Divaris, Kimon

2016-05-10

The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared with marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children.
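The marginalization issue is easiest to see in the simpler zero-inflated Poisson: the latent-class ("susceptible") mean is λ, but the population marginal mean is (1 − π)λ, the quantity marginalized models parameterize directly. A quick numerical check with arbitrary π and λ (the paper's model is the negative binomial analogue):

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: a structural zero with probability pi,
    otherwise Poisson(lam); Poisson term computed in log space."""
    pois = math.exp(y * math.log(lam) - lam - math.lgamma(y + 1))
    return pi * (1 if y == 0 else 0) + (1 - pi) * pois

lam, pi = 3.0, 0.25
marginal_mean = sum(y * zip_pmf(y, lam, pi) for y in range(200))
# the marginal mean is (1 - pi) * lam = 2.25, not the latent-class mean 3.0
```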

17. Multiple Regression and Its Discontents

ERIC Educational Resources Information Center

Snell, Joel C.; Marsh, Mitchell

2012-01-01

Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.

18. Wrong Signs in Regression Coefficients

NASA Technical Reports Server (NTRS)

McGee, Holly

1999-01-01

When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.

SciTech Connect

Stoody, R.R.

1987-02-24

20. Incremental learning for ν-Support Vector Regression.

PubMed

Gu, Bin; Sheng, Victor S; Wang, Zhijie; Ho, Derek; Osman, Said; Li, Shuo

2015-07-01

The ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter ν to control the number of support vectors and adjust the width of the tube automatically. However, compared to ν-Support Vector Classification (ν-SVC) (Schölkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function. Thus, directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR will not generate an effective initial solution. The main challenge is to design an incremental ν-SVR learning algorithm. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of ν-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments). The experiments on benchmark datasets demonstrate that INSVR can avoid infeasible updating paths as far as possible and successfully converges to the optimal solution. The results also show that INSVR is faster than batch ν-SVR algorithms with both cold and warm starts.

PubMed

Eugster, Patrick; Sennhauser, Michèle; Zweifel, Peter

2010-07-01

When premiums are community-rated, risk adjustment (RA) serves to mitigate competitive insurers' incentive to select favorable risks. However, unless fully prospective, it also undermines their incentives for efficiency. By capping its volume, one may try to counteract this tendency, exposing insurers to some financial risk. This in turn runs counter to the quest to refine the RA formula, which would increase RA volume. Specifically, the adjuster "Hospitalization or living in a nursing home during the previous year" will be added in Switzerland starting in 2012. This paper investigates how to minimize the opportunity cost of capping RA in terms of increased incentives for risk selection.

2. Analytical solutions of the Poisson-Boltzmann equation: biological applications

Fenley, Andrew; Gordon, John; Onufriev, Alexey

2006-03-01

Electrostatic interactions are a key factor for determining many properties of bio-molecules. The ability to compute the electrostatic potential generated by a molecule is often essential in understanding the mechanism behind its biological function such as catalytic activity, ligand binding, and macromolecular association. We propose an approximate analytical solution to the (linearized) Poisson-Boltzmann (PB) equation that is suitable for computing electrostatic potential around realistic biomolecules. The approximation is tested against the numerical solutions of the PB equation on a test set of 600 representative structures including proteins, DNA, and macromolecular complexes. The approach allows one to generate, with the power of a desktop PC, electrostatic potential maps of virtually any molecule of interest, from single proteins to large protein complexes such as viral capsids. The new approach is orders of magnitude less computationally intense than its numerical counterpart, yet is almost equal in accuracy. When studying very large molecular systems, our method is a practical and inexpensive way of computing biomolecular potential at atomic resolution. We demonstrate the usefulness of the new approach by exploring the details of electrostatic potentials generated by two such systems: the nucleosome core particle (25,000 atoms) and tobacco ringspot virus (500,000 atoms). Biologically relevant insights are generated.
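The building block such analytical approximations generalize is the linearized PB (Debye-Hückel) potential of a charge q centered in a sphere of radius a immersed in salt water. A sketch in Gaussian-style units (a textbook special case, not the paper's method, which handles arbitrary molecular shapes):

```python
import math

def debye_huckel(q, r, a, kappa, eps):
    """Linearized Poisson-Boltzmann potential outside a charged sphere:
    phi(r) = q * exp(-kappa * (r - a)) / (eps * r * (1 + kappa * a)),
    where kappa is the inverse Debye screening length and eps the
    solvent dielectric constant."""
    return q * math.exp(-kappa * (r - a)) / (eps * r * (1.0 + kappa * a))

# kappa -> 0 (no salt) recovers the bare Coulomb potential q / (eps * r)
coulomb = debye_huckel(1.0, 2.0, 1.0, 0.0, 80.0)
screened = debye_huckel(1.0, 2.0, 1.0, 1.0, 80.0)
```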

3. Continental crust composition constrained by measurements of crustal Poisson's ratio

Zandt, George; Ammon, Charles J.

1995-03-01

Deciphering the geological evolution of the Earth's continental crust requires knowledge of its bulk composition and global variability. The main uncertainties are associated with the composition of the lower crust. Seismic measurements probe the elastic properties of the crust at depth, from which composition can be inferred. Of particular note is Poisson's ratio, σ; this elastic parameter can be determined uniquely from the ratio of P- to S-wave seismic velocity, and provides a better diagnostic of crustal composition than either P- or S-wave velocity alone [1]. Previous attempts to measure σ have been limited by difficulties in obtaining coincident P- and S-wave data sampling the entire crust [2]. Here we report 76 new estimates of crustal σ spanning all of the continents except Antarctica. We find that, on average, σ increases with the age of the crust. Our results strongly support the presence of a mafic lower crust beneath cratons, and suggest either a uniformitarian craton formation process involving delamination of the lower crust during continental collisions, followed by magmatic underplating, or a model in which crust formation processes have changed since the Precambrian era.
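The conversion the study relies on follows from linear elasticity: σ is a monotone function of the Vp/Vs ratio alone. A sketch (the 1.71 and 1.87 values below are merely illustrative Vp/Vs ratios, not the paper's data):

```python
def sigma_from_vp_vs(vp_vs):
    """Poisson's ratio from the P-to-S velocity ratio:
    sigma = (r^2 - 2) / (2 * (r^2 - 1)) with r = vp/vs."""
    r2 = vp_vs**2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

felsic = sigma_from_vp_vs(1.71)   # lower vp/vs -> lower sigma
mafic = sigma_from_vp_vs(1.87)    # higher vp/vs -> higher sigma
```

The reference value Vp/Vs = √3 gives σ = 0.25, and higher ratios map to higher σ, which is why elevated σ points toward mafic compositions.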

4. Linear-Nonlinear-Poisson Models of Primate Choice Dynamics

PubMed Central

Corrado, Greg S; Sugrue, Leo P; Sebastian Seung, H; Newsome, William T

2005-01-01

The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys (Macaca mulatta) foraged for juice rewards by making eye movements to one of two colored icons presented on a computer monitor, each rewarded on dynamic variable-interval schedules. Using a generalization of Wiener kernel analysis, we recover a compact mechanistic description of the impact of past reward on future choice in the form of a Linear-Nonlinear-Poisson model. We validate this model through rigorous predictive and generative testing. Compared to our earlier work with this same data set, this model proves to be a better description of choice behavior and is more tightly correlated with putative neural value signals. Refinements over previous models include hyperbolic (as opposed to exponential) temporal discounting of past rewards, and differential (as opposed to fractional) comparisons of option value. Through numerical simulation we find that within this class of strategies, the model parameters employed by animals are very close to those that maximize reward harvesting efficiency. PMID:16596981
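The model class has a compact computational form: a linear filter over past rewards, a static nonlinearity, then Poisson output. A schematic Python version with a hyperbolic discounting kernel (kernel shape, gain, and reward sequences are invented; the paper's kernels come from Wiener-kernel fits):

```python
import math

def lnp_rate(history, kernel, gain=0.5):
    """Linear stage: weighted sum of recent inputs (most recent first).
    Nonlinear stage: exponential squashing to a nonnegative rate.
    A Poisson stage would then draw counts/choices with this rate."""
    recent = list(reversed(history))[:len(kernel)]
    drive = sum(k * s for k, s in zip(kernel, recent))
    return gain * math.exp(drive)

kernel = [1.0 / (1 + i) for i in range(8)]  # hyperbolic discounting of past rewards
rich = lnp_rate([1, 1, 1, 1, 1, 1, 1, 1], kernel)   # consistently rewarded option
lean = lnp_rate([0, 0, 0, 0, 0, 0, 0, 0], kernel)   # never-rewarded option
```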

5. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

2008-07-01

Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p_BH) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p_BH are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate, while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.

6. The Euler-Poisson-Darboux equation for relativists

Stewart, John M.

2009-09-01

7. Analysis of Poisson frequency data under a simple crossover trial.

PubMed

Lui, Kung-Jong; Chang, Kuang-Chao

2016-02-01

When the frequency of occurrence for an event of interest follows a Poisson distribution, we develop asymptotic and exact procedures for testing non-equality, non-inferiority and equivalence, as well as asymptotic and exact interval estimators for the ratio of mean frequencies between two treatments under a simple crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in a variety of situations. We note that all asymptotic test procedures developed here can generally perform well with respect to Type I error and can be preferable to the exact test procedure with respect to power if the number of patients per group is moderate or large. We further find that in these cases the asymptotic interval estimator with the logarithmic transformation can be more precise than the exact interval estimator without sacrificing the accuracy with respect to the coverage probability. However, the exact test procedure and exact interval estimator can be of use when the number of patients per group is small. We use a double-blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these estimators.
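As a minimal sketch of the kind of asymptotic interval estimator discussed, the snippet below computes a Wald interval with the logarithmic transformation for the ratio of two Poisson means in a plain two-sample setting, not the paper's crossover design; the counts and the equal-exposure assumption are illustrative.

```python
import math

def poisson_ratio_ci(x1, x2, z=1.96):
    """Wald 95% CI on the log scale for the ratio of two Poisson means,
    assuming equal exposure; x1, x2 are observed event counts."""
    log_ratio = math.log(x1 / x2)
    se = math.sqrt(1.0 / x1 + 1.0 / x2)
    return math.exp(log_ratio - z * se), math.exp(log_ratio + z * se)

# Made-up counts: 30 exacerbations under placebo vs 20 under treatment.
lo, hi = poisson_ratio_ci(30, 20)
```

The interval is built on the log scale because log(x1/x2) is approximately normal with variance 1/x1 + 1/x2, which is why the transformation improves precision for moderate counts.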

8. The Poisson Gamma distribution for wind speed data

Ćakmakyapan, Selen; Özel, Gamze

2016-04-01

Wind energy is one of the most significant alternative clean energy sources and among the most rapidly developing renewable energy sources in the world. For the evaluation of wind energy potential, probability density functions (pdfs) are usually used to model wind speed distributions. Selecting an appropriate pdf reduces the wind power estimation error and also allows the wind characteristics to be captured. In the literature, different pdfs have been used to model wind speed data for wind energy applications. In this study, we propose a new probability distribution to model wind speed data. First, we define the new probability distribution, named the Poisson-Gamma (PG) distribution, and analyze wind speed data sets recorded at five pressure levels for the station, obtained from the Turkish State Meteorological Service. We then model the data sets with the Exponential, Weibull, Lomax, three-parameter Burr, Gumbel, Gamma, and Rayleigh distributions, which are commonly used to model wind speed data, as well as the PG distribution. Finally, we compare the fitted distributions to select the best model and demonstrate that the PG distribution models the data sets best.
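The model-comparison step can be sketched with standard tools. The snippet below uses synthetic Weibull draws as a stand-in for the station data (the PG distribution itself is not implemented here) and compares two of the listed candidate pdfs by AIC.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-in for the station wind-speed sample (m/s); the real
# data from the Turkish State Meteorological Service are not reproduced.
speeds = stats.weibull_min.rvs(2.0, scale=6.0, size=2000, random_state=rng)

def aic(dist, data, params):
    """AIC = 2k - 2 log L for an already-fitted scipy.stats distribution."""
    return 2 * len(params) - 2 * np.sum(dist.logpdf(data, *params))

# Fit two of the candidate pdfs by maximum likelihood, location fixed at
# zero as is usual for wind speeds, and compare by AIC (lower is better).
wb_params = stats.weibull_min.fit(speeds, floc=0)
ex_params = stats.expon.fit(speeds, floc=0)
aic_weibull = aic(stats.weibull_min, speeds, wb_params)
aic_expon = aic(stats.expon, speeds, ex_params)
```

Since the synthetic data are Weibull, the Weibull fit should win on AIC; on real data the same loop would be run over all candidate distributions, including PG.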

9. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

PubMed

Huang, Yanping; Rao, Rajesh P N

2016-08-01

Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.

10. Reducing bias in parameter estimates from stepwise regression in proportional hazards regression with right-censored data.

PubMed

Soh, Chang-Heok; Harrington, David P; Zaslavsky, Alan M

2008-03-01

When variable selection with stepwise regression and model fitting are conducted on the same data set, competition for inclusion in the model induces a selection bias in coefficient estimators away from zero. In proportional hazards regression with right-censored data, selection bias inflates the absolute value of parameter estimate of selected parameters, while the omission of other variables may shrink coefficients toward zero. This paper explores the extent of the bias in parameter estimates from stepwise proportional hazards regression and proposes a bootstrap method, similar to those proposed by Miller (Subset Selection in Regression, 2nd edn. Chapman & Hall/CRC, 2002) for linear regression, to correct for selection bias. We also use bootstrap methods to estimate the standard error of the adjusted estimators. Simulation results show that substantial biases could be present in uncorrected stepwise estimators and, for binary covariates, could exceed 250% of the true parameter value. The simulations also show that the conditional mean of the proposed bootstrap bias-corrected parameter estimator, given that a variable is selected, is moved closer to the unconditional mean of the standard partial likelihood estimator in the chosen model, and to the population value of the parameter. We also explore the effect of the adjustment on estimates of log relative risk, given the values of the covariates in a selected model. The proposed method is illustrated with data sets in primary biliary cirrhosis and in multiple myeloma from the Eastern Cooperative Oncology Group.
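The bootstrap idea generalizes beyond proportional hazards. Below is a toy OLS version, with a crude marginal-correlation screen standing in for stepwise selection; all names, thresholds, and data are illustrative, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y depends on x1 only; x2 is pure noise that a selection
# procedure might still pick up.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.5 * x1 + rng.normal(size=n)

def select_and_fit(x1, x2, y):
    """Crude stand-in for stepwise selection: keep a covariate only if
    its marginal correlation with y looks significant, then refit OLS."""
    thresh = 2.0 / np.sqrt(len(y))
    cols = [x for x in (x1, x2) if abs(np.corrcoef(x, y)[0, 1]) > thresh]
    X = np.column_stack([np.ones(len(y))] + cols)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1] if len(beta) > 1 else 0.0  # coeff of first kept term

theta_hat = select_and_fit(x1, x2, y)

# Bootstrap the *whole* procedure (reselect + refit) to estimate the
# selection bias, then subtract it.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(select_and_fit(x1[idx], x2[idx], y[idx]))
theta_corrected = theta_hat - (np.mean(boot) - theta_hat)
```

The key design point, as in the paper, is that selection is redone inside every bootstrap replicate; bootstrapping only the refit step would miss the selection bias entirely.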

11. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

PubMed

Thattai, Mukund

2016-01-19

Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements.
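The universality claim is easy to probe numerically. In this sketch, with arbitrary illustrative rates, molecules arrive in a Poisson stream and each lives a fixed (hence maximally non-exponential) lifetime; the steady-state copy number should still be Poisson, i.e. variance approximately equal to the mean.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative values, not from the paper: Poisson birth stream plus a
# deterministic, i.e. maximally non-exponential, lifetime (M/G/infinity).
rate, lifetime = 10.0, 2.0

births = np.cumsum(rng.exponential(1.0 / rate, size=20000))
times = np.linspace(10.0, 400.0, 800)          # past the initial transient
counts = np.array([np.sum((births <= t) & (births + lifetime > t))
                   for t in times])

# Steady state should be Poisson with mean = rate * lifetime = 20,
# so the Fano factor (variance / mean) should be ~1.
mean = counts.mean()
fano = counts.var() / mean
```

Replacing the fixed lifetime with any other lifetime distribution of the same mean leaves the steady-state law unchanged, which is exactly the identifiability problem the abstract points out.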

12. Pluripotent stem cell expansion and neural differentiation in 3-D scaffolds of tunable Poisson's ratio.

PubMed

Yan, Yuanwei; Li, Yan; Song, Liqing; Zeng, Changchun; Li, Yan

2017-02-01

Biophysical properties of scaffolds, such as the elastic modulus, have recently been shown to impact stem cell lineage commitment. The contribution of the Poisson's ratio, another important biophysical property, to the stem cell fate decision, however, has not been studied. Scaffolds with tunable Poisson's ratio (ν) (termed auxetic scaffolds when the Poisson's ratio is zero or negative) are anticipated to provide a spectrum of unique biophysical 3-D microenvironments to influence stem cell fate. To test this hypothesis, in the present work we fabricated auxetic polyurethane scaffolds (ν=0 to -0.45) and evaluated their effects on neural differentiation of mouse embryonic stem cells (ESCs) and human induced pluripotent stem cells (hiPSCs). Compared to the regular scaffolds (ν=+0.30) before auxetic conversion, the auxetic scaffolds supported smaller aggregate formation and higher expression of β-tubulin III upon neural differentiation. The influences of pore structure, Poisson's ratio, and elastic modulus on neural lineage commitment were further evaluated using a series of auxetic scaffolds. The results indicate that Poisson's ratio may confound the effects of elastic modulus, and auxetic scaffolds with proper pore structure and Poisson's ratio enhance neural differentiation. This study demonstrates that tuning the Poisson's ratio of the scaffolds together with elastic modulus and microstructure would enhance the capability to generate broader, more diversified ranges of biophysical 3-D microenvironments for the modulation of cellular differentiation.

13. XRA image segmentation using regression

Jin, Jesse S.

1996-04-01

Segmentation is an important step in image analysis, and thresholding is one of the most important approaches. There are several difficulties in segmentation, such as automatic threshold selection, dealing with intensity distortion, and noise removal. We have developed an adaptive segmentation scheme by applying the Central Limit Theorem in regression. A Gaussian regression is used to separate the distribution of background from foreground in a single-peak histogram. The separation helps to determine the threshold automatically. A small 3 by 3 window is applied and the mode of the local histogram is used to overcome noise. Thresholding is based on local weighting, where regression is used again for parameter estimation. A connectivity test is applied to the final results to remove impulse noise. We have applied the algorithm to x-ray angiogram images to extract brain arteries. The algorithm works well for single-peak distributions where there is no valley in the histogram. The regression provides a method to apply knowledge in clustering. Extending regression to multiple-level segmentation needs further investigation.

ERIC Educational Resources Information Center

Gonsiorek, John C.

In this paper, the diverse literature bearing on the topic of homosexuality and psychological adjustment is critically reviewed and synthesized. The first chapter discusses the most crucial methodological issue in this area, the problem of sampling. The kinds of samples used to date are critically examined, and some suggestions for improved…

NASA Technical Reports Server (NTRS)

1986-01-01

Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.

DOEpatents

Hunter, Steven L.

2002-01-01

An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.

17. Survival Data and Regression Models

Grégoire, G.

2014-12-01

We start this chapter by introducing some basic elements for the analysis of censored survival data. We then focus on right-censored data and develop two types of regression models. The first concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariables. The second is a semiparametric model, where the covariables enter in multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is maximum likelihood and, although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.

18. Regressive evolution in Astyanax cavefish.

PubMed

Jeffery, William R

2009-01-01

A diverse group of animals, including members of most major phyla, have adapted to life in the perpetual darkness of caves. These animals are united by the convergence of two regressive phenotypes, loss of eyes and pigmentation. The mechanisms of regressive evolution are poorly understood. The teleost Astyanax mexicanus is of special significance in studies of regressive evolution in cave animals. This species includes an ancestral surface-dwelling form and many conspecific cave-dwelling forms, some of which have evolved their regressive phenotypes independently. Recent advances in Astyanax development and genetics have provided new information about how eyes and pigment are lost during cavefish evolution; namely, they have revealed some of the molecular and cellular mechanisms involved in trait modification, the number and identity of the underlying genes and mutations, the molecular basis of parallel evolution, and the evolutionary forces driving adaptation to the cave environment.

19. Holographic study of conventional and negative Poisson's ratio metallic foams - Elasticity, yield and micro-deformation

NASA Technical Reports Server (NTRS)

Chen, C. P.; Lakes, R. S.

1991-01-01

An experimental study by holographic interferometry is reported of the following material properties of conventional and negative Poisson's ratio copper foams: Young's moduli, Poisson's ratios, yield strengths and characteristic lengths associated with inhomogeneous deformation. The Young's modulus and yield strength of the conventional copper foam were comparable to those predicted by microstructural modeling on the basis of cellular rib bending. The reentrant copper foam exhibited a negative Poisson's ratio, as indicated by the elliptical contour fringes on the specimen surface in the bending tests. Inhomogeneous, non-affine deformation was observed holographically in both foam materials.

20. Universal Negative Poisson Ratio of Self-Avoiding Fixed-Connectivity Membranes

SciTech Connect

Bowick, M.; Cacciuto, A.; Thorleifsson, G.; Travesset, A.

2001-10-01

We determine the Poisson ratio of self-avoiding fixed-connectivity membranes, modeled as impenetrable plaquettes, to be σ = -0.37(6), in statistical agreement with the Poisson ratio of phantom fixed-connectivity membranes σ = -0.32(4). Together with the equality of critical exponents, this result implies a unique universality class for fixed-connectivity membranes. Our findings thus establish that physical fixed-connectivity membranes provide a wide class of auxetic (negative Poisson ratio) materials with significant potential applications in materials science.

1. Hamiltonian field description of the one-dimensional Poisson-Vlasov equations

SciTech Connect

Morrison, P.J.

1981-07-01

The one-dimensional Poisson-Vlasov equations are cast into Hamiltonian form. A Poisson Bracket in terms of the phase space density, as sole dynamical variable, is presented. This Poisson bracket is not of the usual form, but possesses the commutator properties of antisymmetry, bilinearity, and nonassociativity by virtue of the Jacobi requirement. Clebsch potentials are seen to yield a conventional (canonical) formulation. This formulation is discretized by expansion in terms of an arbitrary complete set of basis functions. In particular, a wave field representation is obtained.
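In the notation common to later presentations of this bracket (a sketch reconstructed from the abstract's description, not quoted from the paper), the noncanonical bracket on functionals of the phase space density f(x, v) takes the Lie-Poisson form:

```latex
\{F,G\} = \int f\left[\frac{\delta F}{\delta f},\frac{\delta G}{\delta f}\right]\,dx\,dv,
\qquad
[a,b] \equiv \frac{\partial a}{\partial x}\frac{\partial b}{\partial v}
            - \frac{\partial a}{\partial v}\frac{\partial b}{\partial x},
```

so that the evolution equation is $\partial f/\partial t = \{f, H\}$ for a suitable Hamiltonian functional $H[f]$ comprising kinetic and electrostatic field energy.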

2. Poisson-weighted Lindley distribution and its application on insurance claim data

Manesh, Somayeh Nik; Hamzah, Nor Aishah; Zamani, Hossein

2014-07-01

This paper introduces a new two-parameter mixed Poisson distribution, namely the Poisson-weighted Lindley (P-WL), which is obtained by mixing the Poisson with a new class of weighted Lindley distributions. The closed form, the moment generating function and the probability generating function are derived. Parameter estimation by the method of moments and by the maximum likelihood procedure is provided. Some simulation studies are conducted to investigate the performance of the P-WL distribution. In addition, the compound P-WL distribution is derived, and some applications to the insurance area, based on observations of the number of claims and of the total amount of claims incurred, are illustrated.

3. Blow-up conditions for two dimensional modified Euler-Poisson equations

Lee, Yongki

2016-09-01

The multi-dimensional Euler-Poisson system describes the dynamic behavior of many important physical flows, yet as a hyperbolic system its solution can blow-up for some initial configurations. This article strives to advance our understanding on the critical threshold phenomena through the study of a two-dimensional modified Euler-Poisson system with a modified Riesz transform where the singularity at the origin is removed. We identify upper-thresholds for finite time blow-up of solutions for the modified Euler-Poisson equations with attractive/repulsive forcing.

4. Universal negative poisson ratio of self-avoiding fixed-connectivity membranes.

PubMed

Bowick, M; Cacciuto, A; Thorleifsson, G; Travesset, A

2001-10-01

We determine the Poisson ratio of self-avoiding fixed-connectivity membranes, modeled as impenetrable plaquettes, to be sigma = -0.37(6), in statistical agreement with the Poisson ratio of phantom fixed-connectivity membranes sigma = -0.32(4). Together with the equality of critical exponents, this result implies a unique universality class for fixed-connectivity membranes. Our findings thus establish that physical fixed-connectivity membranes provide a wide class of auxetic (negative Poisson ratio) materials with significant potential applications in materials science.

5. 3DGRAPE - THREE DIMENSIONAL GRIDS ABOUT ANYTHING BY POISSON'S EQUATION

NASA Technical Reports Server (NTRS)

Sorenson, R. L.

1994-01-01

The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids. 3DGRAPE is designed to make computational grids in or about almost any shape. These grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. 3DGRAPE uses zones to solve the problem of warping one cube into the physical domain in real-world computational fluid dynamics problems. In a zonal approach, a physical domain is divided into regions, each of which maps into its own computational cube. It is believed that even the most complicated physical region can be divided into zones, and since it is possible to warp a cube into each zone, a grid generator which is oriented to zones and allows communication across zonal boundaries (where appropriate) solves the problem of topological complexity. 3DGRAPE expects to read in already-distributed x,y,z coordinates on the bodies of interest, coordinates which will remain fixed during the entire grid-generation process. The 3DGRAPE code makes no attempt to fit given body shapes and redistribute points thereon. Body-fitting is a formidable problem in itself. The user must either be working with some simple analytical body shape, upon which a simple analytical distribution can be easily effected, or must have available some sophisticated stand-alone body-fitting software. 3DGRAPE does not require the user to supply the block-to-block boundaries nor the shapes of the distribution of points. 3DGRAPE will typically supply those block-to-block boundaries simply as surfaces in the elliptic grid. Thus at block-to-block boundaries the following conditions are obtained: (1) grids lines will

6. Cactus: An Introduction to Regression

ERIC Educational Resources Information Center

Hyde, Hartley

2008-01-01

When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

7. Multiple Regression: A Leisurely Primer.

ERIC Educational Resources Information Center

Daniel, Larry G.; Onwuegbuzie, Anthony J.

Multiple regression is a useful statistical technique when the researcher is considering situations in which variables of interest are theorized to be multiply caused. It may also be useful in those situations in which the researcher is interested in studies of predictability of phenomena of interest. This paper provides an introduction to…

8. Weighting Regressions by Propensity Scores

ERIC Educational Resources Information Center

Freedman, David A.; Berk, Richard A.

2008-01-01

Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…

9. Quantile Regression with Censored Data

ERIC Educational Resources Information Center

Lin, Guixian

2009-01-01

The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitations due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…

10. A Poisson-based adaptive affinity propagation clustering for SAGE data.

PubMed

Tang, DongMing; Zhu, QingXin; Yang, Fan

2010-02-01

Serial analysis of gene expression (SAGE) is a powerful tool to obtain gene expression profiles. Clustering analysis is a valuable technique for analyzing SAGE data. In this paper, we propose an adaptive clustering method for SAGE data analysis, namely, PoissonAPS. The method incorporates a novel clustering algorithm, Affinity Propagation (AP). While the AP algorithm has demonstrated good performance on many different data sets, it also faces several limitations. PoissonAPS overcomes the limitations of AP by using the clustering validation measure as a cost function of merging and splitting, and, as a result, it can automatically cluster SAGE data without user-specified parameters. We evaluated PoissonAPS and compared its performance with other methods on several real-life SAGE datasets. The experimental results show that PoissonAPS can produce meaningful and interpretable clusters for SAGE data.

11. Particle trapping: A key requisite of structure formation and stability of Vlasov–Poisson plasmas

SciTech Connect

Schamel, Hans

2015-04-15

Particle trapping is shown to control the existence of undamped coherent structures in Vlasov–Poisson plasmas and thereby affects the onset of plasma instability beyond the realm of linear Landau theory.

12. A Hands-on Activity for Teaching the Poisson Distribution Using the Stock Market

ERIC Educational Resources Information Center

Dunlap, Mickey; Studstill, Sharyn

2014-01-01

The number of increases a particular stock makes over a fixed period follows a Poisson distribution. This article discusses using this easily-found data as an opportunity to let students become involved in the data collection and analysis process.
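The classroom activity can be mirrored in a few lines. With hypothetical counts (30 up-days observed over 60 trading days; real collected data would be substituted), the Poisson rate is estimated by the sample mean and the pmf gives event probabilities:

```python
import math

# Hypothetical counts: 30 up-days over 60 trading days; the MLE of the
# Poisson rate is the sample mean.
lam = 30 / 60    # 0.5 increases per day

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

p_none = poisson_pmf(0, lam)                            # no increase in a day
p_two_or_more = 1.0 - poisson_pmf(0, lam) - poisson_pmf(1, lam)
```

Students can then compare these model probabilities with the empirical frequencies from their own collected stock data.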

13. Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors

PubMed Central

Woodard, Dawn B.; Crainiceanu, Ciprian; Ruppert, David

2013-01-01

We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape varies stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials. PMID:24293988

NASA Technical Reports Server (NTRS)

Malin, Jane T.; Schreckenghost, Debra K.

2001-01-01

The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

DOEpatents

Cutburth, Ronald W.; Silva, Leonard L.

1988-01-01

An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.

16. SU-E-T-144: Bayesian Inference of Local Relapse Data Using a Poisson-Based Tumour Control Probability Model

SciTech Connect

La Russa, D

2015-06-15

Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk), were generated assuming α/β = 10 Gy, and a fixed clonogen density of 10⁷ cm⁻³. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
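The underlying Poisson TCP form can be illustrated compactly. The parameter values below are placeholders, not the posterior estimates from this work, and the LQ-plus-repopulation bookkeeping is a simplified stand-in for the model actually fitted:

```python
import math

def tcp(dose_gy, n0=1e7, alpha=0.3, alpha_beta=10.0, d_fx=2.0,
        t_treat=40.0, t_k=21.0, t_d=3.0):
    """Poisson TCP = exp(-expected surviving clonogens): LQ cell kill
    plus exponential repopulation after a kick-off delay t_k (days).
    All parameter defaults are illustrative placeholders."""
    surv = math.exp(-alpha * dose_gy * (1.0 + d_fx / alpha_beta))
    repop = 2.0 ** (max(t_treat - t_k, 0.0) / t_d)
    return math.exp(-n0 * surv * repop)

curve = [tcp(d) for d in (40.0, 60.0, 80.0)]   # sigmoid in dose
```

The characteristic sigmoid arises because the expected number of surviving clonogens falls exponentially with dose inside the outer Poisson exponential.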

17. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

PubMed

Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

2014-03-01

Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-the-art methods.
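One classical variance-stabilization result behind such methods is the Anscombe transform: applied to Poisson data it makes the variance approximately one regardless of the mean. A quick seeded check, with arbitrary intensity values (this demonstrates the general principle, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(3)

def anscombe(x):
    """Anscombe transform: for Poisson data the output variance is
    approximately 1 regardless of the underlying intensity."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# Poisson "pixels" at two very different intensities (arbitrary values).
low = rng.poisson(5.0, 100_000)
high = rng.poisson(50.0, 100_000)
v_low = anscombe(low).var()
v_high = anscombe(high).var()
```

The raw variances differ by a factor of ten (they equal the means), yet both transformed variances land near one, which is what makes Gaussian-noise machinery applicable after stabilization.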

18. Massively Parallel Solution of Poisson Equation on Coarse Grain MIMD Architectures

NASA Technical Reports Server (NTRS)

Fijany, A.; Weinberger, D.; Roosta, R.; Gulati, S.

1998-01-01

In this paper a new algorithm, designated as Fast Invariant Imbedding algorithm, for solution of Poisson equation on vector and massively parallel MIMD architectures is presented. This algorithm achieves the same optimal computational efficiency as other Fast Poisson solvers while offering a much better structure for vector and parallel implementation. Our implementation on the Intel Delta and Paragon shows that a speedup of over two orders of magnitude can be achieved even for moderate size problems.
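For context on what such solvers compute, the model problem itself is simple to state and solve directly. A minimal 1-D Dirichlet sketch with second-order central differences and a dense direct solve (not the Fast Invariant Imbedding algorithm, which is not reproduced here):

```python
import numpy as np

# Model problem: -u'' = f on (0, 1), u(0) = u(1) = 0, discretized with
# second-order central differences on an interior grid of n points.
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)   # chosen so the exact solution is sin(pi x)

A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h**2
u = np.linalg.solve(A, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))   # O(h^2) discretization error
```

Fast Poisson solvers reach the same solution in O(N log N) operations instead of the O(N³) dense solve used here, which is the efficiency gap the paper's algorithm addresses at scale.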

19. Fractional Poisson--a simple dose-response model for human norovirus.

PubMed

Messner, Michael J; Berger, Philip; Nappier, Sharon P

2014-10-01

This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures.
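The fractional Poisson model described above reduces to a one-line formula: infection probability is the fraction of perfectly susceptible hosts times the probability of ingesting at least one virus. A minimal sketch, ignoring the aggregation parameter that the full model includes:

```python
import math

def fractional_poisson(mean_dose, p_susceptible):
    """Fractional Poisson dose-response (sketch, no aggregation term):
    ingested count ~ Poisson(mean_dose), so P(at least one virus) is
    1 - exp(-mean_dose); only the susceptible fraction can be infected."""
    return p_susceptible * (1.0 - math.exp(-mean_dose))
```

At high doses the infection probability saturates at `p_susceptible` rather than at 1, which is the feature that lets a single parameter replace the beta-Poisson model's two-parameter susceptibility distribution.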

20. Finite element solution of torsion and other 2-D Poisson equations

NASA Technical Reports Server (NTRS)

Everstine, G. C.

1982-01-01

The NASTRAN structural analysis computer program may be used, without modification, to solve two-dimensional Poisson equations such as arise in the classical Saint Venant torsion problem. The nonhomogeneous term (the right-hand side) in the Poisson equation can be handled conveniently by specifying a gravitational load in a "structural" analysis. The use of an analogy between the equations of elasticity and those of classical mathematical physics is described in detail.

1. Regression Verification Using Impact Summaries

NASA Technical Reports Server (NTRS)

Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

2013-01-01

Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program

2. Psychosocial adjustment to ALS: a longitudinal study.

PubMed

Matuz, Tamara; Birbaumer, Niels; Hautzinger, Martin; Kübler, Andrea

2015-01-01

For the current study the Lazarian stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) have shaped the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping model related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies during a period of 2 years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 was accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting a successful adjustment to the disease across four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS.

3. Psychosocial adjustment to ALS: a longitudinal study

PubMed Central

Matuz, Tamara; Birbaumer, Niels; Hautzinger, Martin; Kübler, Andrea

2015-01-01

For the current study the Lazarian stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) have shaped the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping model related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies during a period of 2 years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 was accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting a successful adjustment to the disease across four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS. PMID:26441696

4. Bayesian inference on multiscale models for Poisson intensity estimation: applications to photon-limited image denoising.

PubMed

Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

2009-08-01

We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

5. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

PubMed

Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

2013-09-01

Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
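For contrast with the paper's parallel, surface-intrinsic algorithm, the Poisson disk property itself (every pair of accepted samples at least a minimum distance r apart) is easy to state with classical dart throwing. This naive sketch works only in the unit square and is not the authors' method:

```python
import math
import random

def dart_throw_poisson_disk(r, n_tries=2000, seed=1):
    """Naive dart-throwing Poisson disk sampling in the unit square:
    propose uniform random points and accept each one only if it keeps
    distance >= r to every previously accepted sample. O(n^2), serial;
    the paper's contribution is doing this in parallel on surfaces."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_tries):
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
    return pts
```

The accepted set is exactly what the priority-based parallel scheme must also produce: a maximal-looking, unbiased set of points with a guaranteed minimum separation.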

6. On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris

NASA Technical Reports Server (NTRS)

Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt

2007-01-01

A convenient and powerful method is used to determine if radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations observed from that expected with Poisson statistics, either independent or dependent of altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
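The diagnostic described above (exponential inter-event times imply Poisson statistics) can be checked in a few lines: for an exponential distribution the standard deviation equals the mean, so the coefficient of variation of the gaps should be close to 1. A sketch with simulated data, not the Haystack observations:

```python
import random
import statistics

# Simulate a homogeneous Poisson process on [0, T] by placing events
# uniformly at random, then examine the waiting times between events:
# under Poisson statistics the gaps are exponential, so their standard
# deviation should equal their mean (coefficient of variation = 1).
rng = random.Random(7)
T, n_events = 1000.0, 5000
times = sorted(rng.uniform(0.0, T) for _ in range(n_events))
gaps = [b - a for a, b in zip(times, times[1:])]
cv = statistics.stdev(gaps) / statistics.mean(gaps)
print(round(cv, 2))
```

Clustered detections (as after a breakup) would inflate the gap variance and push the coefficient of variation above 1, which is how a deviation from Poisson statistics shows up in this test.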

7. Auxetic Black Phosphorus: A 2D Material with Negative Poisson's Ratio.

PubMed

Du, Yuchen; Maassen, Jesse; Wu, Wangran; Luo, Zhe; Xu, Xianfan; Ye, Peide D

2016-10-12

The Poisson's ratio of a material characterizes its response to uniaxial strain. Materials normally possess a positive Poisson's ratio: they contract laterally when stretched, and expand laterally when compressed. A negative Poisson's ratio is theoretically permissible but has not, with a few exceptions among man-made bulk structures, been experimentally observed in any natural materials. Here, we show that a negative Poisson's ratio exists in the low-dimensional natural material black phosphorus and that our experimental observations are consistent with first-principles simulations. By applying uniaxial strain along the armchair direction, we have succeeded in demonstrating a cross-plane interlayer negative Poisson's ratio in black phosphorus for the first time. Meanwhile, our results support the existence of a cross-plane intralayer negative Poisson's ratio in the constituent phosphorene layers under uniaxial deformation along the zigzag axis, which is in line with a previous theoretical prediction. The phenomenon originates from the puckered structure of its in-plane lattice, together with coupled hinge-like bonding configurations.

8. High order solution of Poisson problems with piecewise constant coefficients and interface jumps

Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben

2017-04-01

We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
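The building block of "a series of Poisson problems in rectangular domains" can be illustrated in one dimension with a standard finite-difference solve. This is a minimal sketch using the Thomas tridiagonal algorithm; the paper's solver is FFT-based, two-dimensional, and augmented with correction functions at interfaces.

```python
def solve_poisson_1d(f, h):
    """Solve -u'' = f on a uniform grid with u = 0 at both ends, using
    the second-order stencil (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f[i]
    and the Thomas algorithm for the resulting tridiagonal system."""
    n = len(f)
    rhs = [h * h * fi for fi in f]
    # forward elimination for tridiag(-1, 2, -1)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        denom = 2.0 + c[i - 1]
        c[i] = -1.0 / denom
        d[i] = (rhs[i] + d[i - 1]) / denom
    # back substitution
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u
```

For f = 1 on (0, 1) the exact solution is u(x) = x(1 - x)/2, which this second-order stencil reproduces exactly since u is quadratic; in the paper's setting the same kind of regular-domain solve is applied after the correction functions absorb the interface jumps.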

9. Combining biomarkers for classification with covariate adjustment.

PubMed

Kim, Soyoung; Huang, Ying

2017-03-09

Combining multiple markers can improve classification accuracy compared with using a single marker. In practice, covariates associated with markers or disease outcome can affect the performance of a biomarker or biomarker combination in the population. The covariate-adjusted receiver operating characteristic (ROC) curve has been proposed as a tool to tease out the covariate effect in the evaluation of a single marker; this curve characterizes the classification accuracy solely because of the marker of interest. However, research on the effect of covariates on the performance of marker combinations and on how to adjust for the covariate effect when combining markers is still lacking. In this article, we examine the effect of covariates on classification performance of linear marker combinations and propose to adjust for covariates in combining markers by maximizing the nonparametric estimate of the area under the covariate-adjusted ROC curve. The proposed method provides a way to estimate the best linear biomarker combination that is robust to risk model assumptions underlying alternative regression-model-based methods. The proposed estimator is shown to be consistent and asymptotically normally distributed. We conduct simulations to evaluate the performance of our estimator in cohort and case/control designs and compare several different weighting strategies during estimation with respect to efficiency. Our estimator is also compared with alternative regression-model-based estimators or estimators that maximize the empirical area under the ROC curve, with respect to bias and efficiency. We apply the proposed method to a biomarker study from a human immunodeficiency virus vaccine trial. Copyright © 2017 John Wiley & Sons, Ltd.

10. 3D Regression Heat Map Analysis of Population Study Data.

PubMed

Klemm, Paul; Lawonn, Kai; Glaßer, Sylvia; Niemann, Uli; Hegenscheid, Katrin; Völzke, Henry; Preim, Bernhard

2016-01-01

Epidemiological studies comprise heterogeneous data about a subject group to define disease-specific risk factors. These data contain information (features) about a subject's lifestyle, medical status as well as medical image data. Statistical regression analysis is used to evaluate these features and to identify feature combinations indicating a disease (the target feature). We propose an analysis approach of epidemiological data sets by incorporating all features in an exhaustive regression-based analysis. This approach combines all independent features w.r.t. a target feature. It provides a visualization that reveals insights into the data by highlighting relationships. The 3D Regression Heat Map, a novel 3D visual encoding, acts as an overview of the whole data set. It shows all combinations of two to three independent features with a specific target disease. Slicing through the 3D Regression Heat Map allows for the detailed analysis of the underlying relationships. Expert knowledge about disease-specific hypotheses can be included into the analysis by adjusting the regression model formulas. Furthermore, the influences of features can be assessed using a difference view comparing different calculation results. We applied our 3D Regression Heat Map method to a hepatic steatosis data set to reproduce results from a data mining-driven analysis. A qualitative analysis was conducted on a breast density data set. We were able to derive new hypotheses about relations between breast density and breast lesions with breast cancer. With the 3D Regression Heat Map, we present a visual overview of epidemiological data that allows for the first time an interactive regression-based analysis of large feature sets with respect to a disease.

11. Hydrodynamic limit of Wigner-Poisson kinetic theory: Revisited

Akbari-Moghanjoughi, M.

2015-02-01

In this paper, we revisit the hydrodynamic limit of the Langmuir wave dispersion relation based on the Wigner-Poisson model in connection with that obtained directly from the original Lindhard dielectric function based on the random-phase approximation. It is observed that the (fourth-order) expansion of the exact Lindhard dielectric constant correctly reduces to the hydrodynamic dispersion relation with an additional term of fourth order, beside that caused by the quantum diffraction effect. It is also revealed that the generalized Lindhard dielectric theory accounts for the recently discovered Shukla-Eliasson attractive potential (SEAP). However, the expansion of the exact Lindhard static dielectric function leads to a k^4 term of different magnitude than that obtained from the linearized quantum hydrodynamics model. It is shown that a correction factor of 1/9 should be included in the term arising from the quantum Bohm potential of the momentum balance equation in the fluid model in order for a correct plasma dielectric response treatment. Finally, it is observed that the long-range oscillatory screening potential (Friedel oscillations) of type cos(2k_F r)/r^3, which is a consequence of the divergence of the dielectric function at k = 2k_F in a quantum plasma, arises due to the finiteness of the Fermi wavenumber and is smeared out in the limit of very high electron number-densities, typical of white dwarfs and neutron stars. In the very low electron number-density regime, typical of semiconductors and metals, where the Friedel oscillation wavelength becomes much larger compared to the interparticle distances, the SEAP appears with a much deeper potential valley. It is remarked that the fourth-order approximate Lindhard dielectric constant approaches that of the linearized quantum hydrodynamics model in the limit of very high electron number-density. By evaluation of the imaginary part of the Lindhard dielectric function, it is shown that the Landau

12. Interaction Models for Functional Regression

PubMed Central

USSET, JOSEPH; STAICU, ANA-MARIA; MAITY, ARNAB

2015-01-01

A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data. PMID:26744549

13. Astronomical Methods for Nonparametric Regression

2017-01-01

I will discuss commonly used techniques for nonparametric regression in astronomy. We find that several of them, particularly running averages and running medians, are generically biased, asymmetric between dependent and independent variables, and perform poorly in recovering the underlying function, even when errors are present only in one variable. We then examine less-commonly used techniques such as Multivariate Adaptive Regressive Splines and Boosted Trees and find them superior in bias, asymmetry, and variance both theoretically and in practice under a wide range of numerical benchmarks. In this context the chief advantage of the common techniques is runtime, which even for large datasets is now measured in microseconds compared with milliseconds for the more statistically robust techniques. This points to a tradeoff between bias, variance, and computational resources which in recent years has shifted heavily in favor of the more advanced methods, primarily driven by Moore's Law. Along these lines, we also propose a new algorithm which has better overall statistical properties than all techniques examined thus far, at the cost of significantly worse runtime, in addition to providing guidance on choosing the nonparametric regression technique most suitable to any specific problem. We then examine the more general problem of errors in both variables and provide a new algorithm which performs well in most cases and lacks the clear asymmetry of existing non-parametric methods, which fail to account for errors in both variables.
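A running median, one of the common smoothers this record finds to be generically biased, takes only a few lines; the usage below shows the upward bias at a minimum of the underlying function (an illustrative sketch, not the study's benchmark code):

```python
import statistics

def running_median(y, window=5):
    """Running median smoother over a centered window (truncated at the
    ends of the series), as commonly used for nonparametric regression."""
    half = window // 2
    return [statistics.median(y[max(0, i - half): i + half + 1])
            for i in range(len(y))]

# bias demonstration: sample y = x^2 at integer x in [-5, 5]
y = [x * x for x in range((-5), 6)]
print(running_median(y)[5])  # smoothed value at the minimum x = 0
```

At the minimum the true value is 0 but the window {4, 1, 0, 1, 4} has median 1, so the smoother is biased upward wherever the underlying function curves, which is the generic bias the record describes.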

NASA Technical Reports Server (NTRS)

Farley, Gary L.

1994-01-01

Local characteristics of fabrics varied to suit special applications. Adjustable reed machinery proposed for use in weaving fabrics in various net shapes, widths, yarn spacings, and yarn angles. Locations of edges of fabric and configuration of warp and filling yarns varied along fabric to obtain specified properties. In machinery, reed wires mounted in groups on sliders, mounted on lengthwise rails in reed frame. Mechanisms incorporated to move sliders lengthwise, parallel to warp yarns, by sliding them along rails; move sliders crosswise by translating reed frame rails perpendicular to warp yarns; and crosswise by spreading reed rails within group. Profile of reed wires in group on each slider changed.

15. The Impact of Financial Sophistication on Adjustable Rate Mortgage Ownership

ERIC Educational Resources Information Center

Smith, Hyrum; Finke, Michael S.; Huston, Sandra J.

2011-01-01

The influence of a financial sophistication scale on adjustable-rate mortgage (ARM) borrowing is explored. Descriptive statistics and regression analysis using recent data from the Survey of Consumer Finances reveal that ARM borrowing is driven by both the least and most financially sophisticated households but for different reasons. Less…

16. Effects of Relational Authenticity on Adjustment to College

ERIC Educational Resources Information Center

Lenz, A. Stephen; Holman, Rachel L.; Lancaster, Chloe; Gotay, Stephanie G.

2016-01-01

The authors examined the association between relational health and student adjustment to college. Data were collected from 138 undergraduate students completing their 1st semester at a large university in the mid-southern United States. Regression analysis indicated that higher levels of relational authenticity were a predictor of success during…

17. Adjustment in mothers of children with Asperger syndrome: an application of the double ABCX model of family adjustment.

PubMed

Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate

2005-05-01

The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.

Jacobs, Ken; Karpf, Ron

2011-03-01

A number of Pulfrich 3-D movies and TV shows have been produced, but the standard implementation has inherent drawbacks. The movie and TV industries have correctly concluded that the standard Pulfrich 3-D implementation is not a useful 3-D technique. Continuously Adjustable Pulfrich Spectacles (CAPS) is a new implementation of the Pulfrich effect that allows any scene containing movement in a standard 2-D movie (which is most scenes) to be optionally viewed in 3-D using inexpensive viewing specs. Recent scientific results in the fields of human perception, optoelectronics, video compression and video format conversion are translated into a new implementation of Pulfrich 3-D. CAPS uses these results to continuously adjust to the movie so that the viewing spectacles always conform to the optical density that optimizes the Pulfrich stereoscopic illusion. CAPS instantly provides 3-D immersion to any moving scene in any 2-D movie. Without the glasses, the movie will appear as a normal 2-D image. CAPS works on any viewing device, and with any distribution medium. CAPS is appropriate for viewing Internet-streamed movies in 3-D.

19. An assessment of precipitation adjustment and feedback computation methods

Richardson, T. B.; Samset, B. H.; Andrews, T.; Myhre, G.; Forster, P. M.

2016-10-01

The precipitation adjustment and feedback framework is a useful tool for understanding global and regional precipitation changes. However, there is no definitive method for making the decomposition. In this study we highlight important differences which arise in results due to methodological choices. The responses to five different forcing agents (CO2, CH4, SO4, black carbon, and solar insolation) are analyzed using global climate model simulations. Three decomposition methods are compared: using fixed sea surface temperature experiments (fSST), regressing transient climate change after an abrupt forcing (regression), and separating based on timescale using the first year of coupled simulations (YR1). The YR1 method is found to incorporate significant SST-driven feedbacks into the adjustment and is therefore not suitable for making the decomposition. Globally, the regression and fSST methods produce generally consistent results; however, the regression values are dependent on the number of years analyzed and have considerably larger uncertainties. Regionally, there are substantial differences between methods. The pattern of change calculated using regression reverses sign in many regions as the number of years analyzed increases. This makes it difficult to establish what effects are included in the decomposition. The fSST method provides a more clear-cut separation in terms of what physical drivers are included in each component. The fSST results are less affected by methodological choices and exhibit much less variability. We find that the precipitation adjustment is weakly affected by the choice of SST climatology.

20. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

PubMed

Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

2017-02-01

In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating that the drug-adverse event pair cannot occur; these are distinguished from the other zero counts, which simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test (LRT) method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the ZIP model parameters are obtained using the expectation-maximization algorithm. The ZIP model based LRT is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP model based LRT performs similarly to the Poisson model based LRT when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
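The zero-inflated Poisson distribution at the heart of this method mixes a point mass at zero (the structural "true zeros") with an ordinary Poisson count. A sketch of its pmf only; the likelihood ratio test and EM fitting are not shown:

```python
import math

def zip_pmf(k, pi_zero, lam):
    """Zero-inflated Poisson pmf: with probability pi_zero the count is a
    structural ("true") zero; otherwise it is an ordinary Poisson(lam)
    count, so zeros can arise from either component."""
    poisson_k = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi_zero + (1.0 - pi_zero) * poisson_k
    return (1.0 - pi_zero) * poisson_k
```

Separating the two sources of zeros is exactly what the EM step estimates, and it is why the ZIP-based test diverges from the plain Poisson test when the fraction of true zeros is large.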

1. Bias associated with using the estimated propensity score as a regression covariate.

PubMed

2014-01-15

The use of propensity score methods to adjust for selection bias in observational studies has become increasingly popular in public health and medical research. A substantial portion of studies using propensity score adjustment treat the propensity score as a conventional regression predictor. Through a Monte Carlo simulation study, Austin and colleagues investigated the bias associated with treatment effect estimation when the propensity score is used as a covariate in nonlinear regression models, such as logistic regression and Cox proportional hazards models. We show that the bias exists even in a linear regression model when the estimated propensity score is used, and we derive the explicit form of the bias. We also conduct an extensive simulation study to compare the performance of such covariate adjustment with propensity score stratification, propensity score matching, inverse probability of treatment weighting, and nonparametric functional estimation using splines. The simulation scenarios are designed to reflect real data analysis practice. Instead of specifying a known parametric propensity score model, we generate the data by considering various degrees of overlap of the covariate distributions between the treated and control groups. Propensity score matching excels when the treated group is contained within a larger control pool, while model-based adjustment may have an edge when the treated and control groups do not overlap too much. Overall, adjusting for the propensity score through stratification or matching followed by regression, or using splines, appears to be a good practical strategy.
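
To illustrate the kind of comparison such simulation studies make, the sketch below simulates a single confounder, fits a logistic propensity score model by Newton-Raphson, and computes an inverse-probability-of-treatment-weighted effect estimate. All parameter values are invented for illustration; this is not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
# treatment assignment depends on x, so a naive comparison is confounded
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))
t = rng.random(n) < p_true
# outcome model: the true average treatment effect is 2.0
y = 2.0 * t + 1.5 * x + rng.normal(size=n)

# fit the propensity score by logistic regression (Newton-Raphson / IRLS)
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    H = X.T @ (X * (p * (1.0 - p))[:, None])   # negative Hessian of log-likelihood
    beta += np.linalg.solve(H, X.T @ (t - p))
ps = 1.0 / (1.0 + np.exp(-X @ beta))

# inverse-probability-of-treatment-weighted (IPTW) effect estimate
w = np.where(t, 1.0 / ps, 1.0 / (1.0 - ps))
ate = np.average(y[t], weights=w[t]) - np.average(y[~t], weights=w[~t])
```

Weighting by the inverse estimated propensity score balances the confounder between groups, so `ate` recovers the true effect despite the biased treatment assignment.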

2. Regression analysis of cytopathological data

SciTech Connect

Whittemore, A.S.; McLarty, J.W.; Fortson, N.; Anderson, K.

1982-12-01

Epithelial cells from the human body are frequently labelled according to one of several ordered levels of abnormality, ranging from normal to malignant. The label of the most abnormal cell in a specimen determines the score for the specimen. This paper presents a model for the regression of specimen scores against continuous and discrete variables, such as host exposure to carcinogens. Application to data and tests for adequacy of model fit are illustrated using sputum specimens obtained from a cohort of former asbestos workers.

3. Multiatlas segmentation as nonparametric regression.

PubMed

Awate, Suyash P; Whitaker, Ross T

2014-09-01

This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.

4. Multiatlas Segmentation as Nonparametric Regression

PubMed Central

Awate, Suyash P.; Whitaker, Ross T.

2015-01-01

This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator’s convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528

5. Species abundance in a forest community in South China: A case of poisson lognormal distribution

USGS Publications Warehouse

Yin, Z.-Y.; Ren, H.; Zhang, Q.-M.; Peng, S.-L.; Guo, Q.-F.; Zhou, G.-Y.

2005-01-01

Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to species abundance data from an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance, with a similarly small median and mode and a variance larger than the mean, was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and the two Poisson lognormal parameters (μ and σ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ could be an alternative measure of diversity.
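
The Poisson lognormal pmf has no closed form, but it can be evaluated numerically by Gauss-Hermite quadrature over the lognormal Poisson mean, which is one way to implement the kind of numerical fit the authors describe. This is a generic sketch, not their method, and the parameter values are arbitrary:

```python
import math
import numpy as np

def pln_pmf(k, mu, sigma, n_quad=40):
    """Poisson-lognormal pmf P(K=k): the Poisson mean is exp(mu + sigma*Z)
    with Z ~ N(0,1); the integral over Z uses Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    lam = np.exp(mu + sigma * math.sqrt(2.0) * nodes)
    log_pois = k * np.log(lam) - lam - math.lgamma(k + 1)
    return float(np.sum(weights * np.exp(log_pois)) / math.sqrt(math.pi))

def pln_pmf_zero_truncated(k, mu, sigma):
    # abundance surveys record only observed species, hence zero truncation
    return pln_pmf(k, mu, sigma) / (1.0 - pln_pmf(0, mu, sigma))

# with mu = 1, sigma = 1 the distribution is reverse-J-shaped with a heavy
# tail: a few abundant "species" and many rare ones
probs = [pln_pmf(k, mu=1.0, sigma=1.0) for k in range(200)]
```

A maximum-likelihood fit would then maximize the sum of `log(pln_pmf_zero_truncated(...))` over the observed abundances with respect to μ and σ.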

6. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

PubMed

Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

2013-03-08

Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported for the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can easily obtain adaptive sampling.
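
The priority idea can be illustrated in the plane. The paper's contribution is the intrinsic surface setting and true parallelism; this serialized Euclidean sketch only shows why position-independent random priorities give a well-defined conflict-resolution order:

```python
import random

def poisson_disk_priority(r, n_candidates=4000, seed=7):
    """Serialized sketch of priority-based dart throwing in the unit square:
    each candidate gets a random priority drawn independently of position and
    is accepted iff no higher-priority accepted sample lies within distance r.
    In the paper, this ordering lets many threads resolve conflicts in
    parallel without partitioning the domain."""
    rnd = random.Random(seed)
    cands = [(rnd.random(), rnd.random(), rnd.random())  # (priority, x, y)
             for _ in range(n_candidates)]
    cands.sort()                     # equivalent to processing by priority
    accepted = []
    for _, x, y in cands:
        if all((x - ax) ** 2 + (y - ay) ** 2 >= r * r for ax, ay in accepted):
            accepted.append((x, y))
    return accepted

samples = poisson_disk_priority(r=0.05)
```

Because the priorities are independent of position, the accepted set has the same distribution as sequential dart throwing over the same candidates, which is why the parallel version remains unbiased.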

7. Solving the Fluid Pressure Poisson Equation Using Multigrid - Evaluation and Improvements.

PubMed

Dick, Christian; Rogowsky, Marcus; Westermann, Ruediger

2015-12-23

In many numerical simulations of fluids governed by the incompressible Navier-Stokes equations, the pressure Poisson equation needs to be solved to enforce mass conservation. Multigrid solvers show excellent convergence in simple scenarios, yet they can converge slowly in domains where physically separated regions are combined at coarser scales. Moreover, existing multigrid solvers are tailored to specific discretizations of the pressure Poisson equation, and they cannot easily be adapted to other discretizations. In this paper we analyze the convergence properties of existing multigrid solvers for the pressure Poisson equation in different simulation domains, and we show how to further improve the multigrid convergence rate by using a graph-based extension to determine the coarse grid hierarchy. The proposed multigrid solver is generic in that it can be applied to different kinds of discretizations of the pressure Poisson equation, by using solely the specification of the simulation domain and pre-assembled computational stencils. We analyze the proposed solver in combination with finite difference and finite volume discretizations of the pressure Poisson equation. Our evaluations show that, despite the common assumption, multigrid schemes can exploit their potential even in the most complicated simulation scenarios, yet this behavior is obtained at the price of higher memory consumption.
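
The multigrid machinery the paper builds on can be sketched for the simplest case, a 1D Poisson problem with weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation. This is a generic textbook V-cycle, not the paper's graph-based coarsening:

```python
import numpy as np

def weighted_jacobi(u, f, h, n_sweeps=3, omega=2.0 / 3.0):
    # smoother for -u'' = f with homogeneous Dirichlet boundary conditions
    for _ in range(n_sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def restrict(r):
    # full-weighting restriction onto every other grid point
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * (r[1:-2:2] + r[3::2]) + 0.5 * r[2:-1:2]
    return rc

def v_cycle(u, f, h):
    n = len(u) - 1
    if n <= 2:
        u[1] = 0.5 * h * h * f[1]          # exact solve on the coarsest grid
        return u
    u = weighted_jacobi(u, f, h)           # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)  # residual
    ec = v_cycle(np.zeros_like(restrict(r)), restrict(r), 2.0 * h)
    u[::2] += ec                           # correction at coincident points
    u[1:-1:2] += 0.5 * (ec[:-1] + ec[1:])  # linear interpolation in between
    return weighted_jacobi(u, f, h)        # post-smoothing

n = 128
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = (np.pi ** 2) * np.sin(np.pi * x)       # exact solution: sin(pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
```

The smoother damps high-frequency error on each grid while the coarse-grid correction removes the smooth components, which is what gives multigrid its grid-size-independent convergence rate in simple scenarios.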

8. Solving the Fluid Pressure Poisson Equation Using Multigrid-Evaluation and Improvements.

PubMed

Dick, Christian; Rogowsky, Marcus; Westermann, Rudiger

2016-11-01

In many numerical simulations of fluids governed by the incompressible Navier-Stokes equations, the pressure Poisson equation needs to be solved to enforce mass conservation. Multigrid solvers show excellent convergence in simple scenarios, yet they can converge slowly in domains where physically separated regions are combined at coarser scales. Moreover, existing multigrid solvers are tailored to specific discretizations of the pressure Poisson equation, and they cannot easily be adapted to other discretizations. In this paper we analyze the convergence properties of existing multigrid solvers for the pressure Poisson equation in different simulation domains, and we show how to further improve the multigrid convergence rate by using a graph-based extension to determine the coarse grid hierarchy. The proposed multigrid solver is generic in that it can be applied to different kinds of discretizations of the pressure Poisson equation, by using solely the specification of the simulation domain and pre-assembled computational stencils. We analyze the proposed solver in combination with finite difference and finite volume discretizations of the pressure Poisson equation. Our evaluations show that, despite the common assumption, multigrid schemes can exploit their potential even in the most complicated simulation scenarios, yet this behavior is obtained at the price of higher memory consumption.

9. Moho Depth and Poisson's Ratio beneath Eastern-Central China and Its Tectonic Implications

Wei, Z.; Chen, L.; Li, Z.; Ling, Y.; Li, J.

2015-12-01

Eastern-central China comprises a complex amalgamation of geotectonic blocks of different ages and has undergone significant lithospheric modification during Meso-Cenozoic time. To better characterize its deep structure, we estimated the Moho depth and average Poisson's ratio of eastern-central China by H-κ stacking of receiver functions, using teleseismic data collected from 1196 broadband stations. A coexistence of modified and preserved crust was revealed in eastern-central China, which was generally in Airy-type isostatic equilibrium. The crust is markedly thicker to the west of the North-South Gravity Lineament but exhibits complex variations in Poisson's ratio, with an overall felsic to intermediate bulk crustal composition. Moho depth and Poisson's ratio show striking differences relative to the surrounding areas in the rifts and tectonic boundary zones, where earthquakes usually occur. Similarities and differences in Moho depth and average Poisson's ratio were observed among Northeast China, the North China Craton, South China, and the Qinling-Dabie Orogen, as well as among different areas within these blocks, which may result from their different evolutionary histories and strong tectonic-magmatic events since the Mesozoic. In addition, we observed a change in Moho depth of ~6 km and in Poisson's ratio of ~0.03, as well as a striking E-W difference, beneath and across the Xuefeng Mountains, suggesting that the Xuefeng Mountains may be a deep tectonic boundary between the eastern Yangtze Craton and the western Cathaysia Block.

10. Ultra-soft 100 nm thick zero Poisson's ratio film with 60% reversible compressibility

Nguyen, Chieu; Szalewski, Steve; Saraf, Ravi

2013-03-01

Squeezing films of most solids, liquids and granular materials causes dilation in the lateral dimension, which is characterized by a positive Poisson's ratio. Auxetic materials, such as special foams, crumpled graphite, zeolites, spectrin/actin membranes, and carbon nanotube laminates, shrink instead; their Poisson's ratio is negative. As a result of the Poisson effect, the force needed to squeeze an amorphous material, such as a viscous thin film coating adhered to a rigid surface, increases by over a millionfold as the thickness decreases from 10 μm to 100 nm, due to the constraint on lateral deformation and off-plane relaxation. We demonstrate ultra-soft, 100 nm films of a polymer/nanoparticle composite adhered to 1.25 cm diameter glass that can be reversibly squeezed to over 60% strain between rigid plates, requiring very low stresses below 100 kPa. Unlike materials with non-zero Poisson's ratio, the stiffness decreases with thickness, and the stress distribution is uniform over the film, as mapped electro-optically. The high deformability at very low stresses is explained by considering the reentrant cellular structure found in cork and in the wings of beetles, which have a Poisson's ratio near zero.

11. Classification of four-dimensional real Lie bialgebras of symplectic type and their Poisson-Lie groups

Abedi-Fardad, J.; Rezaei-Aghdam, A.; Haghighatdoost, Gh.

2017-01-01

We classify all four-dimensional real Lie bialgebras of symplectic type and obtain the classical r-matrices for these Lie bialgebras and Poisson structures on all the associated four-dimensional Poisson-Lie groups. We obtain some new integrable models where a Poisson-Lie group plays the role of the phase space and its dual Lie group plays the role of the symmetry group of the system.

12. A flexible count data regression model for risk analysis.

PubMed

Guikema, Seth D; Coffelt, Jeremy P

2008-02-01

In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLMs) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data, and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can provide fits as good as those of the commonly used existing models for overdispersed data sets while outperforming those models for underdispersed data sets.
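
The COM (Conway-Maxwell) Poisson distribution underlying the proposed GLM has pmf P(K=k) ∝ λ^k/(k!)^ν. A minimal numerical sketch (truncated-series normalization, arbitrary parameter values) shows how a single family covers both dispersion regimes:

```python
import math

def com_poisson_pmf(lam, nu, k_max=400):
    """Conway-Maxwell-Poisson pmf, P(K=k) proportional to lam^k / (k!)^nu,
    normalized by truncating the series at k_max (log-space for stability)."""
    logw = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(k_max)]
    m = max(logw)
    w = [math.exp(v - m) for v in logw]
    z = sum(w)
    return [wi / z for wi in w]

def mean_var(pmf):
    mean = sum(k * p for k, p in enumerate(pmf))
    var = sum((k - mean) ** 2 * p for k, p in enumerate(pmf))
    return mean, var

# nu = 1 recovers the Poisson (mean = variance); nu < 1 gives
# overdispersion and nu > 1 underdispersion, all in one family
m1, v1 = mean_var(com_poisson_pmf(4.0, 1.0))   # Poisson(4)
m2, v2 = mean_var(com_poisson_pmf(4.0, 0.7))   # overdispersed
m3, v3 = mean_var(com_poisson_pmf(4.0, 1.5))   # underdispersed
```

In a GLM, λ (or a reparameterized mean) would be linked to the explanatory variables while ν captures the dispersion; the regression structure itself is omitted from this sketch.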

13. Recognition of caudal regression syndrome.

PubMed

Boulas, Mari M

2009-04-01

Caudal regression syndrome, also referred to as caudal dysplasia and sacral agenesis syndrome, is a rare congenital malformation characterized by varying degrees of developmental failure early in gestation. It involves the lower extremities, the lumbar and coccygeal vertebrae, and corresponding segments of the spinal cord. This is a rare disorder, and true pathogenesis is unclear. The etiology is thought to be related to maternal diabetes, genetic predisposition, and vascular hypoperfusion, but no true causative factor has been determined. Fetal diagnostic tools allow for early recognition of the syndrome, and careful examination of the newborn is essential to determine the extent of the disorder. Associated organ system dysfunction depends on the severity of the disease. Related defects are structural, and systematic problems including respiratory, cardiac, gastrointestinal, urinary, orthopedic, and neurologic can be present in varying degrees of severity and in different combinations. A multidisciplinary approach to management is crucial. Because the primary pathology is irreversible, treatment is only supportive.

14. Practical Session: Multiple Linear Regression

Clausel, M.; Grégoire, G.

2014-12-01

Three exercises are proposed to illustrate multiple linear regression. The first investigates the influence of several factors on atmospheric pollution; it was proposed by D. Chessel and A.B. Dufour at Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
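
A minimal multiple-regression computation of the kind these exercises practice, here on simulated data rather than the exercises' own datasets:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# true model: y = 3 + 2*x1 - 1.5*x2 + noise
y = 3.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1, x2])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
resid = y - X @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

`beta` recovers the intercept and the two slopes, and `r2` is the usual coefficient of determination.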

15. Novel Two-Dimensional Silicon Dioxide with in-Plane Negative Poisson's Ratio.

PubMed

Gao, Zhibin; Dong, Xiao; Li, Nianbei; Ren, Jie

2017-02-08

Silicon dioxide, or silica, normally existing in various bulk crystalline and amorphous forms, was recently found to also possess a two-dimensional structure. In this work, we use ab initio calculations and an evolutionary algorithm to unveil three new two-dimensional (2D) silica structures, whose thermal, dynamical, and mechanical stabilities are compared with those of many typical bulk silicas. In particular, we find that all three of these 2D silica structures have large in-plane negative Poisson's ratios, the largest being double that of penta-graphene and triple that of borophene. The negative Poisson's ratio originates from the interplay of lattice symmetry and Si-O tetrahedron symmetry. Slab silica is also an insulating 2D material with the highest electronic band gap (>7 eV) among reported 2D structures. These exotic 2D silicas, with their in-plane negative Poisson's ratios and wide band gaps, are expected to have great potential applications in nanomechanics and nanoelectronics.

16. Energy conserving discontinuous Galerkin spectral element method for the Vlasov-Poisson system

Madaule, Éric; Restelli, Marco; Sonnendrücker, Eric

2014-12-01

We propose a new, energy conserving, spectral element, discontinuous Galerkin method for the approximation of the Vlasov-Poisson system in arbitrary dimension, using Cartesian grids. The method is derived from the one proposed in [4], with two modifications: energy conservation is obtained by a suitable projection operator acting on the solution of the Poisson problem, rather than by solving multiple Poisson problems, and all the integrals appearing in the finite element formulation are approximated with Gauss-Lobatto quadrature, thereby yielding a spectral element formulation. The resulting method has the following properties: exact energy conservation (up to errors introduced by the time discretization), stability (thanks to the use of upwind numerical fluxes), high order accuracy and high locality. For the time discretization, we consider both Runge-Kutta methods and exponential integrators, and show results for 1D and 2D cases (2D and 4D in phase space, respectively).

17. A special relation between Young's modulus, Rayleigh-wave velocity, and Poisson's ratio.

PubMed

Malischewsky, Peter G; Tuan, Tran Thanh

2009-12-01

Bayon et al. [(2005). J. Acoust. Soc. Am. 117, 3469-3477] described a method for the determination of Young's modulus by measuring the Rayleigh-wave velocity and the ellipticity of Rayleigh waves, and found a peculiar almost linear relation between a non-dimensional quantity connecting Young's modulus, Rayleigh-wave velocity and density, and Poisson's ratio. The analytical reason for this special behavior remained unclear. It is demonstrated here that this behavior is a simple consequence of the mathematical form of the Rayleigh-wave velocity as a function of Poisson's ratio. The consequences for auxetic materials (those materials for which Poisson's ratio is negative) are discussed, as well as the determination of the shear and bulk moduli.
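
The mathematical origin of the near-linear behavior can be reproduced by solving the Rayleigh secular equation numerically for the velocity ratio as a function of Poisson's ratio. The sketch below uses a standard form of the cubic in (v_R/v_S)², not the authors' derivation:

```python
import math

def rayleigh_speed_ratio(nu):
    """Solve the Rayleigh secular equation for x = (v_R/v_S)^2 at a given
    Poisson's ratio nu, and return v_R/v_S."""
    g = (1.0 - 2.0 * nu) / (2.0 - 2.0 * nu)          # g = (v_S/v_P)^2
    f = lambda x: x**3 - 8.0 * x**2 + 8.0 * (3.0 - 2.0 * g) * x - 16.0 * (1.0 - g)
    lo, hi = 0.0, 1.0                                # f(0) < 0 < f(1)
    for _ in range(80):                              # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(0.5 * (lo + hi))

# the exact ratio is close to linear in nu, cf. the Viktorov-style
# approximation v_R/v_S ~ (0.87 + 1.12*nu)/(1 + nu)
ratios = {nu: rayleigh_speed_ratio(nu) for nu in (0.0, 0.25, 0.4999)}
```

Evaluating the ratio over a grid of ν values (including negative, auxetic ones) makes the nearly linear dependence, and hence the relation discussed in the paper, directly visible.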

18. Heterogeneous PVA hydrogels with micro-cells of both positive and negative Poisson's ratios.

PubMed

Ma, Yanxuan; Zheng, Yudong; Meng, Haoye; Song, Wenhui; Yao, Xuefeng; Lv, Hexiang

2013-07-01

Many models describing the deformation of general foam or auxetic materials are based on the assumption of homogeneity and order within the materials. However, non-uniform heterogeneity is often inherent in many porous materials and composites, yet difficult to measure. In this work, inspired by the structures of auxetic materials, porous PVA hydrogels with internal inby-concave pores (IICP) or interconnected pores (ICP) were designed and processed. The deformation of the PVA hydrogels under compression was tested and their Poisson's ratio was characterized. The results indicated that the size, shape and distribution of the pores in the hydrogel matrix had a strong influence on the local Poisson's ratio, which varied from positive to negative at the micro-scale. The size dependency of the local Poisson's ratio reflected and quantified the uniformity and heterogeneity of the micro-porous structures in the PVA hydrogels.

19. Lumbar herniated disc: spontaneous regression

PubMed Central

Yüksel, Kasım Zafer

2017-01-01

Background Low back pain is a frequent condition that results in substantial disability and causes admission of patients to neurosurgery clinics. The aim was to evaluate and present the therapeutic outcomes in lumbar disc hernia (LDH) patients treated by means of a conservative approach consisting of bed rest and medical therapy. Methods This retrospective cohort study was carried out in the neurosurgery departments of hospitals in Kahramanmaraş city; 23 patients diagnosed with LDH at the levels of L3−L4, L4−L5 or L5−S1 were enrolled. Results The average age was 38.4 ± 8.0 years and the chief complaint was low back pain and sciatica radiating to one or both lower extremities. Conservative treatment was administered. Neurological examination findings, durations of treatment and intervals until symptomatic recovery were recorded. Lasègue tests and neurosensory examination revealed mild neurological deficits in 16 of our patients. Previously, 5 patients had received physiotherapy and 7 patients had been on medical treatment. The numbers of patients with LDH at the levels of L3−L4, L4−L5, and L5−S1 were 1, 13, and 9, respectively. All patients reported that they benefited from medical treatment and bed rest, and radiologic improvement was observed simultaneously on MRI scans. The average duration until symptomatic recovery and/or regression of LDH symptoms was 13.6 ± 5.4 months (range: 5−22). Conclusions It should be kept in mind that lumbar disc hernias can regress with medical treatment and rest, without surgery, and there should be an awareness that these patients can recover radiologically. This must be taken into account during decision making for surgical intervention in LDH patients devoid of indications for emergent surgery. PMID:28119770

20. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

PubMed

Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

2016-10-01

Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs in PET projection data prior to reconstruction due to physical effects, measurement errors, and corrections for dead time, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite for developing efficient reconstruction and processing methods and for reducing noise. The deviation from Poisson statistics in PET data can be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of the data. A simple and efficient method for estimating the λ and ν parameters is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters can detect deviation from the Poisson distribution in both raw and corrected PET data. The method may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially for low-count emission data, as in dynamic PET, where it demonstrated the best accuracy.
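
A crude way to recover λ and ν from count data is direct maximum likelihood over a parameter grid. This is only a sketch of the estimation problem, not the paper's simple-and-efficient estimator, and the parameter values are invented:

```python
import math, random

def cmp_pmf(lam, nu, k_max=800):
    """Conway-Maxwell-Poisson pmf, P(K=k) proportional to lam^k / (k!)^nu,
    normalized by truncating the series at k_max (log-space for stability)."""
    logw = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(k_max)]
    m = max(logw)
    w = [math.exp(v - m) for v in logw]
    z = sum(w)
    return [wi / z for wi in w]

def fit_cmp(counts, lam_grid, nu_grid):
    # crude maximum likelihood over a coarse parameter grid
    best = None
    for lam in lam_grid:
        for nu in nu_grid:
            logp = [math.log(p) if p > 0.0 else -1e300 for p in cmp_pmf(lam, nu)]
            ll = sum(logp[y] for y in counts)
            if best is None or ll > best[0]:
                best = (ll, lam, nu)
    return best[1], best[2]

# simulate underdispersed counts (nu = 2), as corrected PET data can be
random.seed(3)
p = cmp_pmf(16.0, 2.0)
cum = [sum(p[:k + 1]) for k in range(len(p))]
def draw():
    u = random.random() * cum[-1]
    return next(k for k, c in enumerate(cum) if c >= u)
counts = [draw() for _ in range(3000)]
lam_hat, nu_hat = fit_cmp(counts, lam_grid=[8, 12, 16, 20, 24],
                          nu_grid=[0.5, 1.0, 1.5, 2.0, 2.5])
```

With enough counts the grid MLE identifies the generating parameters; a practical estimator would of course optimize continuously rather than over a grid.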

1. Understanding the changes in ductility and Poisson's ratio of metallic glasses during annealing from microscopic dynamics

Wang, Z.; Ngai, K. L.; Wang, W. H.

2015-07-01

In the paper by K. L. Ngai et al. [J. Chem. Phys. 140, 044511 (2014)], the empirical correlation of ductility with Poisson's ratio, νPoisson, found in metallic glasses was theoretically explained by microscopic dynamic processes which link, on the one hand, ductility, and on the other hand, the Poisson's ratio. Specifically, the dynamic processes are the primitive relaxation in the Coupling Model, which is the precursor of the Johari-Goldstein β-relaxation, and the caged-atom dynamics characterized by the effective Debye-Waller factor f0 or, equivalently, the nearly constant loss (NCL) in susceptibility. All these processes and the parameters characterizing them are accessible experimentally except f0 or the NCL of caged atoms; thus, so far, the experimental verification of the explanation of the correlation between ductility and Poisson's ratio has been incomplete. In the experimental part of this paper, we report dynamic mechanical measurements of the NCL of the metallic glass La60Ni15Al25 as-cast, and the changes induced by annealing at temperatures below Tg. The observed monotonic decrease of the NCL with aging time, reflecting the corresponding increase of f0, correlates with the decrease of νPoisson. This is an important observation because such measurements, not made before, provide the missing link in confirming by experiment the explanation of the correlation of ductility with νPoisson. On aging the metallic glass, also observed in the isochronal loss spectra are the shift of the β-relaxation to higher temperatures and the reduction of its relaxation strength. These concomitant changes of the β-relaxation and the NCL are the root cause of embrittlement on aging the metallic glass. The NCL of caged atoms is terminated by the onset of the primitive relaxation in the Coupling Model, which is generally supported by experiments. From this relation, the monotonic decrease of the NCL with aging time is caused by the slowing down of the primitive relaxation and β-relaxation on annealing.

2. Simultaneous estimation of Poisson's ratio and Young's modulus using a single indentation: a finite element study

Zheng, Y. P.; Choi, A. P. C.; Ling, H. Y.; Huang, Y. P.

2009-04-01

Indentation is commonly used to determine the mechanical properties of different kinds of biological tissues and engineering materials. With the force-deformation data obtained from an indentation test, the Young's modulus of the tissue can be calculated using a linear elastic indentation model with a known Poisson's ratio. A novel method for simultaneous estimation of the Young's modulus and Poisson's ratio of a tissue using a single indentation is proposed in this study. Finite element (FE) analysis using 3D models was first used to establish the relationship between Poisson's ratio and the deformation-dependent indentation stiffness for different aspect ratios (indentor radius/tissue original thickness) in the indentation test. From the FE results, it was found that the deformation-dependent indentation stiffness increased linearly with the deformation. Poisson's ratio could be extracted from the deformation-dependent indentation stiffness obtained from the force-deformation data, and the Young's modulus was then calculated with the estimated Poisson's ratio. The feasibility of this method was demonstrated by applying it to indentation models with different material properties in the FE analysis. The numerical results showed that the percentage errors of the estimated Poisson's ratios and the corresponding Young's moduli ranged from -1.7% to -3.2% and from 3.0% to 7.2%, respectively, for aspect ratios (indentor radius/tissue thickness) larger than 1. It is expected that this novel method can potentially be used for quantitative assessment of various kinds of engineering materials and biological tissues, such as articular cartilage.

Cancer.gov

This Infographic shows the National Cancer Institute SEER Incidence Trends. The graphs show the Average Annual Percent Change (AAPC) 2002-2011. For Men, Thyroid: 5.3*, Liver & IBD: 3.6*, Melanoma: 2.3*, Kidney: 2.0*, Myeloma: 1.9*, Pancreas: 1.2*, Leukemia: 0.9*, Oral Cavity: 0.5, Non-Hodgkin Lymphoma: 0.3*, Esophagus: -0.1, Brain & ONS: -0.2*, Bladder: -0.6*, All Sites: -1.1*, Stomach: -1.7*, Larynx: -1.9*, Prostate: -2.1*, Lung & Bronchus: -2.4*, and Colon & Rectum: -3.0*. For Women, Thyroid: 5.8*, Liver & IBD: 2.9*, Myeloma: 1.8*, Kidney: 1.6*, Melanoma: 1.5, Corpus & Uterus: 1.3*, Pancreas: 1.1*, Leukemia: 0.6*, Brain & ONS: 0, Non-Hodgkin Lymphoma: -0.1, All Sites: -0.1, Breast: -0.3, Stomach: -0.7*, Oral Cavity: -0.7*, Bladder: -0.9*, Ovary: -0.9*, Lung & Bronchus: -1.0*, Cervix: -2.4*, and Colon & Rectum: -2.7*. * AAPC is significantly different from zero (p<.05). Rates were adjusted for reporting delay in the registry. www.cancer.gov Source: Special section of the Annual Report to the Nation on the Status of Cancer, 1975-2011.

Bannon, Peter R.

1996-12-01

The final equilibrium state of Lamb's hydrostatic adjustment problem is found for finite amplitude heating. Lamb's problem consists of the response of a compressible atmosphere to an instantaneous, horizontally homogeneous heating. Results are presented for both isothermal and nonisothermal atmospheres. As in the linear problem, the fluid displacements are confined to the heated layer and to the region aloft, with no displacement of the fluid below the heating. The region above the heating is displaced uniformly upward for heating and downward for cooling. The amplitudes of the displacements are larger for cooling than for warming. Examination of the energetics reveals that the fraction of the heat deposited into the acoustic modes increases linearly with the amplitude of the heating. This fraction is typically small (e.g., 0.06% for a uniform warming of 1 K) and is essentially independent of the lapse rate of the base-state atmosphere. In contrast, a fixed fraction of the available energy generated by the heating goes into the acoustic modes. This fraction (e.g., 12% for a standard tropospheric lapse rate) agrees with the linear result and increases with increasing stability of the base-state atmosphere. The compressible results are compared to solutions using various forms of the soundproof equations. None of the soundproof equations predict the finite amplitude solutions accurately. However, in the small amplitude limit, only the equations for deep convection advanced by Dutton and Fichtl predict the thermodynamic state variables accurately for a nonisothermal base-state atmosphere.

5. Anisotropic elasticity and abnormal Poisson's ratios in super-hard materials

Huang, Chuanwei; Li, Rongpeng; Chen, Lang

2014-12-01

We theoretically investigated variable mechanical properties such as Young's modulus, Poisson's ratio, and compressibility in super-hard materials. Our tensorial analysis reveals that the mechanical properties of super-hard materials are strongly sensitive to the anisotropy index of the material. In sharp contrast to the traditionally assumed positive constant, the Poisson's ratio of super-hard materials can be negative, zero, or even positive with a value much larger than the isotropic upper limit of 0.5 along certain directions. Our results uncover a correlation between compressibility and hardness, which offers insights for the prediction of new super-hard materials.

6. Solution of the nonlinear Poisson-Boltzmann equation: Application to ionic diffusion in cementitious materials

SciTech Connect

Arnold, J.; Kosson, D.S.; Garrabrants, A.; Meeussen, J.C.L.; Sloot, H.A. van der

2013-02-15

A robust numerical solution of the nonlinear Poisson-Boltzmann equation for asymmetric polyelectrolyte solutions in discrete pore geometries is presented. Comparisons to the linearized approximation of the Poisson-Boltzmann equation reveal that the assumptions leading to linearization may not be appropriate for the electrochemical regime in many cementitious materials. Implications of the electric double layer on both partitioning of species and on diffusive release are discussed. The influence of the electric double layer on anion diffusion relative to cation diffusion is examined.
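The gap between the full and linearized Poisson-Boltzmann equations that motivates this comparison can be illustrated in the classical one-plate (Gouy-Chapman) geometry, where both solutions are known in closed form. The sketch below is a generic illustration in dimensionless units (potential in kT/e, distance in Debye lengths), not the paper's cementitious-material model:

```python
import numpy as np

def psi_nonlinear(x, psi0):
    """Exact Gouy-Chapman solution of d2psi/dx2 = sinh(psi) for a
    single charged plate in a symmetric electrolyte (dimensionless
    units: potential in kT/e, distance in Debye lengths)."""
    g = np.tanh(psi0 / 4.0)
    return 2.0 * np.log((1.0 + g * np.exp(-x)) / (1.0 - g * np.exp(-x)))

def psi_linear(x, psi0):
    """Debye-Hueckel (linearized) solution of the same problem."""
    return psi0 * np.exp(-x)

x = np.linspace(0.0, 5.0, 200)
# Maximum discrepancy between the full and linearized profiles
small = np.max(np.abs(psi_nonlinear(x, 0.1) - psi_linear(x, 0.1)))
large = np.max(np.abs(psi_nonlinear(x, 4.0) - psi_linear(x, 4.0)))
```

For a surface potential of 0.1 the two curves are nearly indistinguishable, while at 4 (roughly 100 mV) the linearized profile overestimates the potential substantially, mirroring the point that linearization can fail in high-potential electrochemical regimes.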

7. Incorporation of solvation effects into the fragment molecular orbital calculations with the Poisson-Boltzmann equation

Watanabe, Hirofumi; Okiyama, Yoshio; Nakano, Tatsuya; Tanaka, Shigenori

2010-11-01

We developed the FMO-PB method, which incorporates solvation effects into the fragment molecular orbital calculation with the Poisson-Boltzmann equation. This method retains good accuracy in energy calculations with reduced computational time. We calculated the solvation free energies for polyalanines, the Alpha-1 peptide, tryptophan cage, and the complex of the estrogen receptor with 17β-estradiol to show the applicability of this method to practical systems. From the calculated results, it has been confirmed that the FMO-PB method is useful for large biomolecules in solution. We also discussed the electric charges used in solving the Poisson-Boltzmann equation.

8. On Poisson's ratio for metal matrix composite laminates. [aluminum boron composites

NASA Technical Reports Server (NTRS)

Herakovich, C. T.; Shuart, M. J.

1978-01-01

The definition of Poisson's ratio for nonlinear behavior of metal matrix composite laminates is discussed and experimental results for tensile and compressive loading of five different boron-aluminum laminates are presented. It is shown that there may be considerable difference in the value of Poisson's ratio as defined by a total strain or an incremental strain definition. It is argued that the incremental definition is more appropriate for nonlinear material behavior. Results from a (0) laminate indicate that the incremental definition provides a precursor to failure which is not evident if the total strain definition is used.

9. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

NASA Technical Reports Server (NTRS)

Canavos, G. C.

1972-01-01

For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
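The gamma-prior case described above can be sketched quickly: with a Gamma(a, b) prior (shape a, rate b), the posterior for the Poisson mean after observing counts x_1..x_n is Gamma(a + Σx_i, b + n), and the Bayes estimator under squared error loss is the posterior mean. A small Monte Carlo comparison against the maximum likelihood (and minimum variance unbiased) estimator, the sample mean; the prior parameters and simulation settings are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def bayes_estimate(x, a=2.0, b=1.0):
    # Posterior under a Gamma(a, b) prior is Gamma(a + sum(x), b + n);
    # the Bayes estimator under squared error loss is its mean.
    return (a + x.sum()) / (b + len(x))

def mse(estimator, lam=3.0, n=10, reps=20000):
    # Empirical mean-squared error of an estimator of lam.
    draws = rng.poisson(lam, size=(reps, n))
    est = np.array([estimator(row) for row in draws])
    return np.mean((est - lam) ** 2)

mse_mle = mse(lambda x: x.mean())   # MLE / MVUE: the sample mean
mse_bayes = mse(bayes_estimate)     # shrinks toward the prior mean
```

When the prior mean is not too far from the true rate, the shrinkage buys a visibly smaller mean-squared error, consistent with the comparison the abstract describes.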

10. A Poisson equation formulation for pressure calculations in penalty finite element models for viscous incompressible flows

NASA Technical Reports Server (NTRS)

Sohn, J. L.; Heinrich, J. C.

1990-01-01

The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.

11. The Z2-graded Schouten-Nijenhuis bracket and generalized super-Poisson structures

de Azcárraga, J. A.; Izquierdo, J. M.; Perelomov, A. M.; Pérez-Bueno, J. C.

1997-07-01

The super or Z2-graded Schouten-Nijenhuis bracket is introduced. Using it, new generalized super-Poisson structures are found which are given in terms of certain graded-skew-symmetric contravariant tensors Λ of even order. The corresponding super "Jacobi identities" are expressed by stating that these tensors have a zero super Schouten-Nijenhuis bracket with themselves [Λ,Λ]=0. As a particular case, we provide the linear generalized super-Poisson structures which can be constructed on the dual spaces of simple superalgebras with a non-degenerate Killing metric. The su(3,1) superalgebra is given as a representative example.

12. The noncommutative Poisson bracket and the deformation of the family algebras

SciTech Connect

Wei, Zhaoting

2015-07-15

The family algebras were introduced by Kirillov in 2000. In this paper, we study the noncommutative Poisson bracket P on the classical family algebra C{sub τ}(g). We show that P controls the first-order 1-parameter formal deformation from C{sub τ}(g) to Q{sub τ}(g), where the latter is the quantum family algebra. Moreover, we prove that the noncommutative Poisson bracket is in fact a Hochschild 2-coboundary, and therefore the deformation is infinitesimally trivial. In the last part of this paper, we discuss the relation between Mackey’s analogue and the quantization problem of the family algebras.

13. Reentrant Origami-Based Metamaterials with Negative Poisson's Ratio and Bistability

Yasuda, H.; Yang, J.

2015-05-01

We investigate the unique mechanical properties of reentrant 3D origami structures based on the Tachi-Miura polyhedron (TMP). We explore the potential usage as mechanical metamaterials that exhibit tunable negative Poisson's ratio and structural bistability simultaneously. We show analytically and experimentally that the Poisson's ratio changes from positive to negative and vice versa during its folding motion. In addition, we verify the bistable mechanism of the reentrant 3D TMP under rigid origami configurations without relying on the buckling motions of planar origami surfaces. This study forms a foundation in designing and constructing TMP-based metamaterials in the form of bellowslike structures for engineering applications.

14. Newton's and Poisson's Impact Law for the Non-Convex Case of Reentrant Corners

Glocker, Christoph

The paper reviews the frictionless collision problem in rigid body dynamics. Newton's and Poisson's impact laws are stated in inequality form for one collision point and extended by superposition to multicontact configurations. One special case within this framework are impacts with global dissipation index, for which it is shown that Newton's impact law reduces to Moreau's impact rule and that both of them coincide with Poisson's law when a certain kinematic compatibility condition is met. A geometrical interpretation of this impact law is given for a tangentially regular boundary and then extended to reentrant corners.

15. iAPBS: a programming interface to Adaptive Poisson-Boltzmann Solver

SciTech Connect

Konecny, Robert; Baker, Nathan A.; McCammon, J. A.

2012-07-26

The Adaptive Poisson-Boltzmann Solver (APBS) is a state-of-the-art suite for performing Poisson-Boltzmann electrostatic calculations on biomolecules. The iAPBS package provides a modular programmatic interface to the APBS library of electrostatic calculation routines. The iAPBS interface library can be linked with a Fortran or C/C++ program, thus making all of the APBS functionality available from within the application. Several application modules for popular molecular dynamics simulation packages (Amber, NAMD, and CHARMM) are distributed with iAPBS, allowing users of these packages to perform implicit solvent electrostatic calculations with APBS.

16. Poisson's ratio from polarization of acoustic zero-group velocity Lamb mode.

PubMed

Baggens, Oskar; Ryden, Nils

2015-07-01

Poisson's ratio of an isotropic and free elastic plate is estimated from the polarization of the first symmetric acoustic zero-group velocity Lamb mode. This polarization is interpreted as the ratio of the absolute amplitudes of the surface normal and surface in-plane components of the acoustic mode. Results from the evaluation of simulated datasets indicate that the presented relation, which links the polarization and Poisson's ratio, can be extended to incorporate plates with material damping. Furthermore, the proposed application of the polarization is demonstrated in a practical field case, where an increased accuracy of estimated nominal thickness is obtained.

17. A regression model analysis of longitudinal dental caries data.

PubMed

Ringelberg, M L; Tonascia, J A

1976-03-01

Longitudinal data on caries experience were derived from the reexamination and interview of a cohort of 306 subjects with an average follow-up period of 33 years after the baseline examination. Analysis of the data was accomplished by the use of contingency tables utilizing enumeration statistics compared with a multiple regression analysis. The analyses indicated a strong association of caries experience at one point in time with the caries experience of that same person earlier in life. The regression model approach offers adjustment of any given independent variable for the effect of all other independent variables, providing a powerful means of bias reduction. The model is also useful in separating out the specific effect of an independent variable over and above the contribution of other variables. The model used explained 35% of the variability in the DMFS scores recorded. Similar models could be useful adjuncts in the analyses of dental epidemiologic data.

18. Model building strategy for logistic regression: purposeful selection.

PubMed

Zhang, Zhongheng

2016-03-01

Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF), that is, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
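The likelihood ratio test at the heart of purposeful selection compares the log-likelihoods of nested models. A self-contained sketch in Python rather than R (the article's language), with a hand-rolled Newton-Raphson fit and simulated data; variable names and settings are illustrative assumptions:

```python
import numpy as np
from math import erfc, sqrt

def fit_logistic(X, y, iters=50):
    """Newton-Raphson fit; returns coefficients and log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])   # Fisher information
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return beta, np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)                  # real predictor
x2 = rng.normal(size=n)                  # pure noise
y = (rng.random(n) < 1 / (1 + np.exp(0.5 - 1.2 * x1))).astype(float)

X_full = np.column_stack([np.ones(n), x1, x2])
_, ll_full = fit_logistic(X_full, y)
_, ll_no_x2 = fit_logistic(X_full[:, :2], y)        # drop x2
_, ll_no_x1 = fit_logistic(X_full[:, [0, 2]], y)    # drop x1

# LRT: G2 = 2 * (ll_full - ll_reduced) is chi-square with 1 df under
# H0; for df = 1 the survival function is erfc(sqrt(G2 / 2)).
G2 = max(2 * (ll_full - ll_no_x2), 0.0)
p_value = erfc(sqrt(G2 / 2))             # large -> x2 may be dropped
G2_x1 = max(2 * (ll_full - ll_no_x1), 0.0)
p_x1 = erfc(sqrt(G2_x1 / 2))             # tiny -> keep x1
```

Here x2 is pure noise, so its LRT p-value is typically large and the variable is a candidate for deletion, while dropping the real predictor x1 produces an extreme statistic.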

19. Genetics Home Reference: caudal regression syndrome

MedlinePlus

Frequency: Caudal regression syndrome is estimated to occur in 1 to ... parts of the skeleton, gastrointestinal system, and genitourinary ... caudal regression syndrome results from the presence of an abnormal ...

20. Regressions during reading: The cost depends on the cause.

PubMed

Eskenazi, Michael A; Folk, Jocelyn R

2016-11-21

The direction and duration of eye movements during reading is predominantly determined by cognitive and linguistic processing, but some low-level oculomotor effects also influence the duration and direction of eye movements. One such effect is inhibition of return (IOR), which results in an increased latency to return attention to a target that has been previously attended (Posner & Cohen, Attention and Performance X: Control of Language Processes, 32, 531-556, 1984). Although this is a low level effect, it has also been found in the complex task of reading (Henderson & Luke, Psychonomic Bulletin & Review, 19(6), 1101-1107, 2012; Rayner, Juhasz, Ashby, & Clifton, Vision Research, 43(9), 1027-1034, 2003). The purpose of the current study was to isolate the potentially different causes of regressive eye movements: to adjust for oculomotor error and to assist with comprehension difficulties. We found that readers demonstrated an IOR effect when regressions were caused by oculomotor error, but not when regressions were caused by comprehension difficulties. The results suggest that IOR is primarily associated with low-level oculomotor control of eye movements, and that regressive eye movements that are controlled by comprehension processes are not subject to IOR effects. The results have implications for understanding the relationship between oculomotor and cognitive control of eye movements and for models of eye movement control.

1. Adjusting for mortality effects in chronic toxicity testing: Mixture model approach

SciTech Connect

Wang, S.C.D.; Smith, E.P.

2000-01-01

Chronic toxicity tests, such as the Ceriodaphnia dubia 7-d test, are typically analyzed using standard statistical methods such as analysis of variance or regression. Recent research has emphasized the use of Poisson regression or more generalized regression for the analysis of the fecundity data from these studies. A possible problem in using standard statistical techniques is that mortality may occur from toxicant effects as well as reduced fecundity. A mixture model that accounts for fecundity and mortality is proposed for the analysis of data arising from these studies. Inferences about key parameters in the model are discussed. A joint estimate of the inhibition concentration is proposed based on the model. Confidence interval estimation via the bootstrap method is discussed. An example is given for a study involving copper and mercury.
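The confounding of mortality with reduced fecundity that motivates the mixture model can be seen in a toy simulation: treat each organism as dying with some probability (contributing zero young) and otherwise reproducing with a Poisson count. The parameter values below are hypothetical, and the moment-style estimates are a simplification of the paper's likelihood-based inference:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical chronic-test data: each organism dies with probability
# p_death (contributing zero young) or survives and reproduces with
# Poisson(lam) young.
p_death, lam, n = 0.3, 20.0, 200
alive = rng.random(n) > p_death
young = np.where(alive, rng.poisson(lam, n), 0)

naive_mean = young.mean()          # confounds mortality and fecundity
# Moment-style estimates from the mixture structure: when lam is
# large, the zero class is (almost) exactly the deaths.
p_hat = np.mean(young == 0)        # estimated death probability
lam_hat = young[young > 0].mean()  # estimated fecundity of survivors
```

The naive mean confounds the two effects (roughly 0.7 × 20 = 14 in expectation), whereas separating the zero class recovers both the death probability and the fecundity of survivors.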

2. Semiparametric regression during 2003–2007*

PubMed Central

Ruppert, David; Wand, M.P.; Carroll, Raymond J.

2010-01-01

Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800

3. A statistical test for the equality of differently adjusted incidence rate ratios.

PubMed

Hoffmann, Kurt; Pischon, Tobias; Schulz, Mandy; Schulze, Matthias B; Ray, Jennifer; Boeing, Heiner

2008-03-01

4. Bayesian Unimodal Density Regression for Causal Inference

ERIC Educational Resources Information Center

Karabatsos, George; Walker, Stephen G.

2011-01-01

Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

5. Developmental Regression in Autism Spectrum Disorders

ERIC Educational Resources Information Center

Rogers, Sally J.

2004-01-01

The occurrence of developmental regression in autism is one of the more puzzling features of this disorder. Although several studies have documented the validity of parental reports of regression using home videos, accumulating data suggest that most children who demonstrate regression also demonstrated previous, subtle, developmental differences.…

6. Standards for Standardized Logistic Regression Coefficients

ERIC Educational Resources Information Center

Menard, Scott

2011-01-01

Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

7. Regression Analysis by Example. 5th Edition

ERIC Educational Resources Information Center

2012-01-01

Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

8. Synthesizing Regression Results: A Factored Likelihood Method

ERIC Educational Resources Information Center

Wu, Meng-Jia; Becker, Betsy Jane

2013-01-01

Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

9. Streamflow forecasting using functional regression

Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.

2016-07-01

Streamflow, as a natural phenomenon, is continuous in time, and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They make it possible to consider the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach, and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.

10. Survival analysis and Cox regression.

PubMed

Benítez-Parejo, N; Rodríguez del Águila, M M; Pérez-Vicente, S

2011-01-01

The data provided by clinical trials are often expressed in terms of survival. The analysis of survival comprises a series of statistical analytical techniques in which the measurements analysed represent the time elapsed between a given exposure and the outcome of a certain event. Despite the name of these techniques, the outcome in question does not necessarily have to be either survival or death, and may be healing versus no healing, relief versus pain, complication versus no complication, relapse versus no relapse, etc. The present article describes the analysis of survival from both a descriptive perspective, based on the Kaplan-Meier estimation method, and in terms of bivariate comparisons using the log-rank statistic. Likewise, a description is provided of the Cox regression models for the study of risk factors or covariates associated with the probability of survival. These models are defined in both simple and multiple forms, and a description is provided of how they are calculated and how the postulates for application are checked, accompanied by illustrative examples with the free statistical software R.

11. Estimating equivalence with quantile regression

USGS Publications Warehouse

2011-01-01

Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
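For the two-group treatment-control design described above, the quantile comparison can be sketched with percentile-bootstrap confidence intervals on quantile differences (plain group quantiles stand in for full quantile regression here; the distributions and sample sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
control = rng.lognormal(mean=1.0, sigma=0.4, size=500)
treated = rng.lognormal(mean=1.0, sigma=0.8, size=500)  # same median, heavier tail

def quantile_diff_ci(a, b, q, reps=1000, alpha=0.05):
    """Percentile-bootstrap CI for quantile(a, q) - quantile(b, q)."""
    diffs = np.array([
        np.quantile(rng.choice(a, a.size), q)
        - np.quantile(rng.choice(b, b.size), q)
        for _ in range(reps)
    ])
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

ci_median = quantile_diff_ci(treated, control, 0.5)  # straddles zero
ci_upper = quantile_diff_ci(treated, control, 0.9)   # excludes zero
```

The two groups share a median but differ in spread, so the interval for the median difference is consistent with equivalence while the interval for the 0.9 quantile difference excludes zero: exactly the kind of tail difference that mean-based equivalence testing misses.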

12. A Negative Binomial Regression Model for Accuracy Tests

ERIC Educational Resources Information Center

Hung, Lai-Fa

2012-01-01

Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
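The overdispersion that motivates moving from the Poisson to the negative binomial can be generated directly: if the Poisson rate itself varies across persons with a gamma distribution, the marginal counts are negative binomial with variance mu + mu^2/shape. A short illustration with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(6)

# Gamma-Poisson (negative binomial) counts: the rate lambda varies
# across persons, so the marginal variance exceeds the marginal mean.
mu, shape = 5.0, 2.0
lam = rng.gamma(shape, mu / shape, size=50000)
counts = rng.poisson(lam)

m, v = counts.mean(), counts.var()
# Theory: mean = mu = 5, variance = mu + mu**2 / shape = 17.5
```

The empirical variance comes out near 17.5 against a mean near 5, exactly the excess a plain Poisson model (which forces variance = mean) cannot absorb.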

13. A comparison of least squares regression and geographically weighted regression modeling of West Nile virus risk based on environmental parameters

PubMed Central

Kala, Abhishek K.; Tiwari, Chetan; Mikler, Armin R.

2017-01-01

Background: The primary aim of the study reported here was to determine the effectiveness of utilizing local spatial variations in environmental data to uncover the statistical relationships between West Nile Virus (WNV) risk and environmental factors. Because least squares regression methods do not account for spatial autocorrelation and non-stationarity of the type of spatial data analyzed in studies that explore the relationship between WNV and environmental determinants, we hypothesized that a geographically weighted regression model would help us better understand how environmental factors are related to WNV risk patterns without the confounding effects of spatial non-stationarity. Methods: We examined commonly mapped environmental factors using both ordinary least squares regression (LSR) and geographically weighted regression (GWR). Both types of models were applied to examine the relationship between WNV-infected dead bird counts and various environmental factors for those locations. The goal was to determine which approach yielded a better predictive model. Results: LSR efforts led to identifying three environmental variables that were statistically significantly related to WNV-infected dead birds (adjusted R2 = 0.61): stream density, road density, and land surface temperature. GWR efforts increased the explanatory value of these three environmental variables with better spatial precision (adjusted R2 = 0.71). Conclusions: The spatial granularity resulting from the geographically weighted approach provides a better understanding of how environmental spatial heterogeneity is related to WNV risk as implied by WNV-infected dead birds, which should allow improved planning of public health management strategies. PMID:28367364

14. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

ERIC Educational Resources Information Center

Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

2012-01-01

Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

15. About solvability of some boundary value problems for Poisson equation in a ball

Koshanova, Maira D.; Usmanov, Kairat I.; Turmetov, Batirkhan Kh.

2016-08-01

In the present paper, we study properties of some integro-differential operators of fractional order. As an application of the properties of these operators to the Poisson equation, we examine questions on the solvability of a fractional analogue of the Neumann problem and of analogues of periodic boundary value problems for circular domains. The exact conditions for solvability of these problems are found.

16. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

ERIC Educational Resources Information Center

Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

2008-01-01

Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

17. Non-Gaussian inference from non-linear and non-Poisson biased distributed data

Ata, Metin; Kitaura, Francisco-Shu; Müller, Volker

2014-05-01

We study the statistical inference of the cosmological dark matter density field from non-Gaussian, non-linear and non-Poisson biased distributed tracers. We have implemented a Bayesian posterior sampling computer-code solving this problem and tested it with mock data based on N-body simulations.

18. The Poisson-Boltzmann theory for the two-plates problem: some exact results.

PubMed

Xing, Xiang-Jun

2011-12-01

The general solution to the nonlinear Poisson-Boltzmann equation for two parallel charged plates, either inside a symmetric electrolyte, or inside a 2q:-q asymmetric electrolyte, is found in terms of Weierstrass elliptic functions. From this we derive some exact asymptotic results for the interaction between charged plates, as well as the exact form of the renormalized surface charge density.

19. A parallel 3D poisson solver for space charge simulation in cylindrical coordinates.

SciTech Connect

Xu, J.; Ostroumov, P. N.; Nolen, J.; Physics

2008-02-01

This paper presents the development of a parallel three-dimensional Poisson solver in a cylindrical coordinate system for the electrostatic potential of a charged particle beam in a circular tube. The Poisson solver uses Fourier expansions in the longitudinal and azimuthal directions, and spectral element discretization in the radial direction. A Dirichlet boundary condition is used on the cylinder wall, a natural boundary condition is used on the cylinder axis, and a Dirichlet or periodic boundary condition is used in the longitudinal direction. A parallel 2D domain decomposition was implemented in the (r,θ) plane. This solver was incorporated into the parallel code PTRACK for beam dynamics simulations. Detailed benchmark results for the parallel solver and a beam dynamics simulation in a high-intensity proton LINAC are presented. When the transverse beam size is small relative to the aperture of the accelerator line, the Poisson solvers in Cartesian and cylindrical coordinate systems produce similar results. When the transverse beam size is large or the beam center is located off-axis, the result from the Poisson solver in the Cartesian coordinate system is not accurate because of the different boundary condition used. With the new solver, a circular boundary condition can be applied easily and accurately in beam dynamics simulations of accelerator devices.
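The Fourier-expansion idea underlying such solvers can be demonstrated in a much simpler setting: a fully periodic 2-D box, where every direction is spectral and the Laplacian is diagonal in Fourier space. This is a generic sketch, not the paper's cylindrical spectral-element solver:

```python
import numpy as np

def poisson_periodic_2d(f, L=2 * np.pi):
    """Solve lap(u) = f on a periodic square of side L via FFT."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    fh = np.fft.fft2(f)
    k2[0, 0] = 1.0          # avoid divide-by-zero for the mean mode
    uh = -fh / k2           # -(kx^2 + ky^2) * uh = fh
    uh[0, 0] = 0.0          # fix the arbitrary constant: zero mean
    return np.fft.ifft2(uh).real

# Manufactured solution: u = sin(x) cos(2y)  =>  lap(u) = -5 u
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(2 * Y)
u_num = poisson_periodic_2d(-5 * u_exact)
err = np.max(np.abs(u_num - u_exact))
```

For band-limited right-hand sides the spectral solution is exact to machine precision; the paper's solver applies the same diagonalization in the longitudinal and azimuthal directions while handling the radial direction with spectral elements.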

20. A generalized Poisson equation and short-range self-interaction energies.

PubMed

Varganov, Sergey A; Gilbert, Andrew T B; Gill, Peter M W

2008-06-28

We generalize the Poisson equation to attenuated Newtonian potentials. If the attenuation is at least exponential, the equation provides a local mapping between the density and its potential. We use this to derive several density functionals for the short-range self-interaction energy.

1. Anatomy of the Generalized Inverse Gaussian-Poisson Distribution with Special Applications to Bibliometric Studies.

ERIC Educational Resources Information Center

Sichel, H. S.

1992-01-01

Discusses the use of the generalized inverse Gaussian-Poisson (GIGP) distribution in bibliometric studies. The main types of size-frequency distributions are described, bibliometric distributions in logarithms are examined; parameter estimation is discussed; and goodness-of-fit tests are considered. Examples of applications are included. (17…

2. Poisson ratio and excess low-frequency vibrational states in glasses.

PubMed

Duval, Eugène; Deschamps, Thierry; Saviot, Lucien

2013-08-14

In glass, starting from a dependence of the Angell's fragility on the Poisson ratio [V. N. Novikov and A. P. Sokolov, Nature 431, 961 (2004)], and a dependence of the Poisson ratio on the atomic packing density [G. N. Greaves, A. L. Greer, R. S. Lakes, and T. Rouxel, Nature Mater. 10, 823 (2011)], we propose that the heterogeneities are predominantly density fluctuations in strong glasses (lower Poisson ratio) and shear elasticity fluctuations in fragile glasses (higher Poisson ratio). Because the excess of low-frequency vibration modes in comparison with the Debye regime (boson peak) is strongly connected to these fluctuations, we propose that they are breathing-like (with change of volume) in strong glasses and shear-like (without change of volume) in fragile glasses. As a verification, it is confirmed that the excess modes in the strong silica glass are predominantly breathing-like. Moreover, it is shown that the excess breathing-like modes in a strong polymeric glass are replaced by shear-like modes under hydrostatic pressure as the glass becomes more compact.

3. Testing the equality of two Poisson means using the rate ratio.

PubMed

Ng, Hon Keung Tony; Tang, Man-Lai

2005-03-30

In this article, we investigate procedures for comparing two independent Poisson variates that are observed over unequal sampling frames (i.e. time intervals, populations, areas or any combination thereof). We consider two statistics (with and without the logarithmic transformation) for testing the equality of two Poisson rates. Two methods for implementing these statistics are reviewed. They are (1) the sample-based method, and (2) the constrained maximum likelihood estimation (CMLE) method. We conduct an empirical study to evaluate the performance of different statistics and methods. Generally, we find that the CMLE method works satisfactorily only for the statistic without the logarithmic transformation (denoted W2), while the sample-based method performs better for the statistic with the logarithmic transformation (denoted W3). It is noteworthy that both statistics perform well for moderate to large Poisson rates (e.g., ≥10). For small Poisson rates (e.g., <10), W2 can be liberal (actual type I error rate/nominal level ≥1.2) while W3 can be conservative (actual type I error rate/nominal level ≤0.8). The corresponding sample size formulae are provided and are valid in the sense that the simulated powers associated with the approximate sample size formulae are generally close to the pre-chosen power level. We illustrate our methodologies with a real example from a breast cancer study.
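The sample-based versions of the two test statistics can be sketched as Wald statistics for the rate difference and the log rate ratio (these standard forms are an assumption; the paper's exact W2 and W3 definitions may differ), along with a quick simulation of their type I error under equal rates:

```python
import numpy as np

def wald_stats(x1, t1, x2, t2):
    """Sample-based Wald statistics for H0: lambda1 = lambda2 over
    unequal sampling frames t1, t2, without and with the log
    transformation (hypothetical forms, not the paper's W2/W3)."""
    r1, r2 = x1 / t1, x2 / t2
    w_plain = (r1 - r2) / np.sqrt(r1 / t1 + r2 / t2)
    w_log = (np.log(r1) - np.log(r2)) / np.sqrt(1 / x1 + 1 / x2)
    return w_plain, w_log

rng = np.random.default_rng(2)
lam, t1, t2, reps = 15.0, 1.0, 2.0, 20000
x1 = rng.poisson(lam * t1, reps)
x2 = rng.poisson(lam * t2, reps)
ok = (x1 > 0) & (x2 > 0)            # log statistic needs positive counts
w_plain, w_log = wald_stats(x1[ok], t1, x2[ok], t2)
size_plain = np.mean(np.abs(w_plain) > 1.96)   # empirical type I error
size_log = np.mean(np.abs(w_log) > 1.96)
```

At a rate of 15 both empirical sizes land near the nominal 5%, matching the observation that both statistics behave well for moderate to large Poisson rates.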

4. Birth and Death Process Modeling Leads to the Poisson Distribution: A Journey Worth Taking

ERIC Educational Resources Information Center

Rash, Agnes M.; Winkel, Brian J.

2009-01-01

This paper describes details of development of the general birth and death process from which we can extract the Poisson process as a special case. This general process is appropriate for a number of courses and units in courses and can enrich the study of mathematics for students as it touches and uses a diverse set of mathematical topics, e.g.,…
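The constant-rate special case of the birth process described above yields Poisson-distributed counts, which is easy to verify by simulation (illustrative sketch; the names are mine):

```python
import random

random.seed(42)

def poisson_count(rate, horizon):
    """Count events of a rate-`rate` Poisson process on [0, horizon] by
    summing exponential interarrival times (the constant-rate special
    case of the birth process)."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return n
        n += 1

counts = [poisson_count(3.0, 2.0) for _ in range(20000)]
mean = sum(counts) / len(counts)   # should be close to rate * horizon = 6
```

For a Poisson distribution the mean and variance both equal rate × horizon, which the simulated counts reproduce.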

5. Updating a Classic: "The Poisson Distribution and the Supreme Court" Revisited

ERIC Educational Resources Information Center

Cole, Julio H.

2010-01-01

W. A. Wallis studied vacancies in the US Supreme Court over a 96-year period (1837-1932) and found that the distribution of the number of vacancies per year could be characterized by a Poisson model. This note updates this classic study.

6. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

PubMed

Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

2014-03-01

The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
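A minimal sketch of a mixed Poisson-Gaussian noise model: the gain/read-noise parameterization below is a common convention in fluorescence imaging, assumed here for illustration rather than taken from the paper; the sampler and function names are mine.

```python
import math
import random

rng = random.Random(0)

def poisson_sample(lam):
    """Knuth's product-of-uniforms Poisson sampler (fine for moderate lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mixed_pg_noise(x0, gain, sigma):
    """One pixel under a mixed Poisson-Gaussian model (my parameterization):
    scaled shot noise gain * Poisson(x0 / gain) plus read noise N(0, sigma^2),
    so E[y] = x0 and Var[y] = gain * x0 + sigma^2."""
    return gain * poisson_sample(x0 / gain) + rng.gauss(0.0, sigma)

samples = [mixed_pg_noise(10.0, 2.0, 1.0) for _ in range(20000)]
m = sum(samples) / len(samples)                            # ~ 10
v = sum((s - m) ** 2 for s in samples) / len(samples)      # ~ 2*10 + 1 = 21
```

The signal-dependent variance gain·x0 + σ² is exactly what makes a single MSE-style risk estimate such as PG-URE nontrivial to construct.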

7. Comparison of a hydrogel model to the Poisson-Boltzmann cell model

Claudio, Gil C.; Kremer, Kurt; Holm, Christian

2009-09-01

We have investigated a single charged microgel in aqueous solution with a combined simulational model and Poisson-Boltzmann theory. In the simulations we use a coarse-grained charged bead-spring model in a dielectric continuum, with explicit counterions and full electrostatic interactions under periodic and nonperiodic boundary conditions. The Poisson-Boltzmann hydrogel model is that of a single charged colloid confined to a spherical cell where the counterions are allowed to enter the uniformly charged sphere. In order to investigate the origin of the differences these two models may give, we performed a variety of simulations of different hydrogel models which were designed to test for the influence of charge correlations, excluded volume interactions, arrangement of charges along the polymer chains, and thermal fluctuations in the chains of the gel. These intermediate models systematically allow us to connect the Poisson-Boltzmann cell model to the bead-spring hydrogel model in a stepwise manner, thereby testing various approximations. Overall, the simulational results of all these hydrogel models are in good agreement, especially for the number of confined counterions within the gel. Our results support the applicability of the Poisson-Boltzmann cell model to study ionic properties of hydrogels under dilute conditions.

8. Poisson-Helmholtz-Boltzmann model of the electric double layer: analysis of monovalent ionic mixtures.

PubMed

Bohinc, Klemen; Shrestha, Ahis; Brumen, Milan; May, Sylvio

2012-03-01

In the classical mean-field description of the electric double layer, known as the Poisson-Boltzmann model, ions interact exclusively through their Coulomb potential. Ion specificity can arise through solvent-mediated, nonelectrostatic interactions between ions. We employ the Yukawa pair potential to model the presence of nonelectrostatic interactions. The combination of Yukawa and Coulomb potential on the mean-field level leads to the Poisson-Helmholtz-Boltzmann model, which employs two auxiliary potentials: one electrostatic and the other nonelectrostatic. In the present work we apply the Poisson-Helmholtz-Boltzmann model to ionic mixtures, consisting of monovalent cations and anions that exhibit different Yukawa interaction strengths. As a specific example we consider a single charged surface in contact with a symmetric monovalent electrolyte. From the minimization of the mean-field free energy we derive the Poisson-Boltzmann and Helmholtz-Boltzmann equations. These nonlinear equations can be solved analytically in the weak perturbation limit. This together with numerical solutions in the nonlinear regime suggests an intricate interplay between electrostatic and nonelectrostatic interactions. The structure and free energy of the electric double layer depends sensitively on the Yukawa interaction strengths between the different ion types and on the nonelectrostatic interactions of the mobile ions with the surface.
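A minimal numerical counterpart, assuming the standard 1-D Poisson-Boltzmann problem for a charged plate in reduced Debye units (not the authors' Yukawa-augmented model): solve ψ'' = sinh ψ by Newton iterations with a tridiagonal solve, then compare against the analytic Gouy-Chapman profile. All names and the grid are mine.

```python
import math

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system by the Thomas algorithm."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / den
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / den
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# psi'' = sinh(psi) on [0, 10] in Debye units, psi(0) = 1, psi(10) = 0
n, Lbox, psi0 = 201, 10.0, 1.0
h = Lbox / (n - 1)
psi = [psi0 * (1 - i / (n - 1)) for i in range(n)]  # linear initial guess

for _ in range(30):  # Newton iterations on the interior unknowns
    m = n - 2
    F = [(psi[i] - 2 * psi[i + 1] + psi[i + 2]) / h**2 - math.sinh(psi[i + 1])
         for i in range(m)]
    sub = [1 / h**2] * m
    diag = [-2 / h**2 - math.cosh(psi[i + 1]) for i in range(m)]
    sup = [1 / h**2] * m
    delta = thomas(sub, diag, sup, [-f for f in F])
    for i in range(m):
        psi[i + 1] += delta[i]

# analytic Gouy-Chapman profile at x = 1 for the semi-infinite problem
gamma = math.tanh(psi0 / 4)
gc = 2 * math.log((1 + gamma * math.exp(-1.0)) / (1 - gamma * math.exp(-1.0)))
```

The finite-difference solution at x = 1 (grid index 20) agrees with the Gouy-Chapman value to within the O(h²) discretization error; the Yukawa term of the Poisson-Helmholtz-Boltzmann model would add a second, analogous auxiliary-potential equation.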

9. Enhanced Night Vision Via a Combination of Poisson Interpolation and Machine Learning

DTIC Science & Technology

2006-02-01

perceptual cues. We have developed three new image-processing techniques to address these problems. These include non-linear spatio-temporal denoising... methods to low-light visible, near infrared (NIR), and short-wave infrared (SWIR) images. In this annual report on the first phase of our research... Poisson Interpolation, Adaptive Filters, Belief Propagation, SWIR, Image Fusion

10. Analysis of Large Data Logs: An Application of Poisson Sampling on Excite Web Queries.

ERIC Educational Resources Information Center

Ozmutlu, H. Cenk; Spink, Amanda; Ozmutlu, Seda

2002-01-01

Discusses the need for tools that allow effective analysis of search engine queries to provide a greater understanding of Web users' information seeking behavior and describes a study that developed an effective strategy for selecting samples from large-scale data sets. Reports on Poisson sampling with data logs from the Excite search engine.…

11. Effect of storage time and temperature on Poisson ratio of tomato fruit skin

Kuna-Broniowska, I.; Gładyszewska, B.; Ciupak, A.

2012-02-01

The results of studies investigating the effects of storage time and temperature on variations in the Poisson ratio of the skin of two greenhouse tomato varieties, Admiro and Encore, are presented. In the initial period of the study, the Poisson ratio of the skin of tomato fruit cv. Admiro, stored at 13°C, varied between 0.7 and 0.8. After the next 10 days of the experiment, it decreased to approximately 0.6 and remained stable until the end of the study. By contrast, the skin of tomatoes cv. Encore was characterized by lower values and lower variability of the Poisson ratio, in the range of 0.4 to 0.5, during storage. The examinations involving tomato fruit cv. Admiro stored at 21°C were completed after 12 days due to fruit softening and progressive difficulty with preparing analytical specimens. The value of the Poisson ratio for both varieties stored at room temperature fluctuated around approximately 0.5 throughout the experiment.

12. Parallel FFT-based Poisson Solver for Isolated Three-dimensional Systems

SciTech Connect

Budiardja, Reuben D; Cardall, Christian Y

2011-01-01

We describe an implementation to solve Poisson's equation for an isolated system on a unigrid mesh using FFTs. The method solves the equation globally on mesh blocks distributed across multiple processes on a distributed-memory parallel computer. Test results to demonstrate the convergence and scaling properties of the implementation are presented. The solver is offered to interested users as the library PSPFFT.
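PSPFFT itself targets isolated (free-space) boundary conditions on a 3-D distributed mesh; as a stdlib-only sketch of the underlying spectral idea, here is the periodic 1-D analogue, with a naive O(N²) DFT standing in for a real FFT (all names are mine; the isolated-system case additionally requires zero-padding with a free-space Green's function).

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def poisson_periodic(f, L):
    """Solve u'' = f with period L by dividing each Fourier mode by -k^2.
    The k = 0 mode is dropped (solvability requires f to have zero mean)."""
    N = len(f)
    F = dft(f)
    U = [0j] * N
    for k in range(1, N):
        kk = k if k <= N // 2 else k - N      # signed wavenumber
        U[k] = F[k] / (-((2 * cmath.pi * kk / L) ** 2))
    return [v.real for v in idft(U)]

N, L = 64, 2 * math.pi
xs = [L * i / N for i in range(N)]
u = poisson_periodic([math.sin(x) for x in xs], L)   # exact answer: -sin(x)
```

Because sin(x) is a single Fourier mode, the spectral solve is exact to rounding error; for smooth right-hand sides the error decays spectrally with N.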

13. Poisson's ratio for polycrystalline silicon used in disk-shaped microresonators.

PubMed

Meitzler, Allen H

2006-02-01

Integrated circuit technology has been used to fabricate miniature disk resonators of polycrystalline silicon that operate at frequencies above 100 MHz. The ratios of low-order resonant frequencies in these resonators can be used to determine the value of Poisson's ratio and to confirm assumptions regarding homogeneity and isotropy.

14. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

ERIC Educational Resources Information Center

Prentice, J. S. C.

2012-01-01

An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
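A minimal sketch of the five-point scheme itself, using plain Jacobi iteration on a single grid (the article's three-grid error-control machinery is not reproduced here; the test problem and names are mine):

```python
import math

# solve Laplacian(u) = f on the unit square with u = 0 on the boundary;
# f is chosen so the exact solution is sin(pi x) sin(pi y)
n = 17                      # grid points per side, including the boundary
h = 1.0 / (n - 1)
f = [[-2 * math.pi**2 * math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
      for j in range(n)] for i in range(n)]
u = [[0.0] * n for _ in range(n)]

for _ in range(2000):       # Jacobi sweeps; boundary rows stay at u = 0
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                + u[i][j - 1] + u[i][j + 1]
                                - h * h * f[i][j])
    u = new

err = max(abs(u[i][j] - math.sin(math.pi * i * h) * math.sin(math.pi * j * h))
          for i in range(n) for j in range(n))
```

The remaining error against the analytic solution is the O(h²) discretization error of the five-point stencil, exactly the quantity the article's multi-resolution comparison is designed to estimate and control.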

15. Nonlinear analysis of the cold fluid-Poisson plasma by using the characteristic method

Lee, Hee J.

2016-10-01

We show that the Vlasov and Euler equations can be transformed into each other along the same characteristics on the ( x, t) plane. Therefore, the Vlasov-Poisson plasma may have common features that are contained in the cold fluid equations: the Euler equation, the continuity equation, and the Poisson equation. Here, the cold fluid equation does not mean the moment equation of the Boltzmann equation. We show that the compensated electron fluid equations can be solved linearly along the characteristics. We address an ion plasma with Boltzmann-distributed electrons as a Cauchy initial-boundary value problem for which initial data are provided by compatible solutions of the Poisson equation. In this plasma, the set of nonlinear cold fluid equations can be approached by arranging them in Riemann invariant equations or via a hodograph transform. The result of this arrangement is linear equations, thus suggesting a way to investigate a cold fluid nonlinear plasma without directly engaging the nonlinearity. The Poisson equation corresponds to the entropy equation in the gas dynamic equations. Analogously, a power law similar to the polytropic gas law in gas dynamics is assumed between the electric potential and the density.

16. A proximal iteration for deconvolving Poisson noisy images using sparse representations.

PubMed

Dupé, François-Xavier; Fadili, Jalal M; Starck, Jean-Luc

2009-02-01

We propose an image deconvolution algorithm when the data is contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance stabilizing transform leading to a nonlinear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties, and a nonsmooth sparsity-promoting penalty over the image representation coefficients (e.g., l(1)-norm). An additional term is also included in the functional to ensure positivity of the restored image. Third, a fast iterative forward-backward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions of the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experiments are carried out to show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise such as astronomy and microscopy.
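The Anscombe variance-stabilizing transform mentioned above, 2√(x + 3/8), maps Poisson(λ) data to approximately unit variance once λ is moderate; this is easy to check empirically (the Knuth Poisson sampler and names are my additions):

```python
import math
import random

rng = random.Random(1)

def poisson(lam):
    """Knuth's Poisson sampler, adequate for moderate lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def anscombe(x):
    """Variance-stabilizing transform: Poisson(lam) -> variance ~ 1
    (good once lam is roughly 4 or more)."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

samples = [anscombe(poisson(20.0)) for _ in range(20000)]
m = sum(samples) / len(samples)
var = sum((s - m) ** 2 for s in samples) / len(samples)   # close to 1
```

After stabilization the noise is approximately Gaussian with known variance, which is what lets the paper reuse Gaussian-noise machinery in the data-fidelity term.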

17. The Cauchy Problem for the 3-D Vlasov-Poisson System with Point Charges

Marchioro, Carlo; Miot, Evelyne; Pulvirenti, Mario

2011-07-01

In this paper we establish global existence and uniqueness of the solution to the three-dimensional Vlasov-Poisson system in the presence of point charges with repulsive interaction. The present analysis extends an analogous two-dimensional result (Caprino and Marchioro in Kinet. Relat. Models 3(2):241-254, 2010).

18. C1-continuous Virtual Element Method for Poisson-Kirchhoff plate problem

SciTech Connect

2016-09-20

We present a family of C1-continuous high-order Virtual Element Methods for the Poisson-Kirchhoff plate bending problem. The convergence of the methods is tested on a variety of meshes including rectangular, quadrilateral, and meshes obtained by edge removal (i.e. highly irregular meshes). The convergence rates are presented for all of these tests.

19. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

NASA Technical Reports Server (NTRS)

Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

1993-01-01

A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.

20. Developmental regression in autism spectrum disorder.

PubMed

Al Backer, Nouf Backer

2015-01-01

The occurrence of developmental regression in autism spectrum disorder (ASD) is one of the most puzzling phenomena of this disorder. Little is known about the nature and mechanism of developmental regression in ASD. About one-third of young children with ASD lose some skills during the preschool period, usually speech, but sometimes nonverbal communication, social, or play skills as well. There is considerable evidence suggesting that most children who demonstrate regression also had previous, subtle, developmental differences. It is difficult to predict the prognosis of autistic children with developmental regression. It seems that the earlier development of social, language, and attachment behaviors followed by regression does not predict the later recovery of skills or better developmental outcomes. The underlying mechanisms that lead to regression in autism are unknown. The role of subclinical epilepsy in the developmental regression of children with autism remains unclear.

1. Spousal Adjustment to Myocardial Infarction.

ERIC Educational Resources Information Center

Ziglar, Elisa J.

This paper reviews the literature on the stresses and coping strategies of spouses of patients with myocardial infarction (MI). It attempts to identify specific problem areas of adjustment for the spouse and to explore the effects of spousal adjustment on patient recovery. Chapter one provides an overview of the importance in examining the…

2. Is the current pertussis incidence only the results of testing? A spatial and space-time analysis of pertussis surveillance data using cluster detection methods and geographically weighted regression modelling

PubMed Central

Kauhl, Boris; Heil, Jeanne; Hoebe, Christian J. P. A.; Schweikart, Jürgen; Krafft, Thomas; Dukers-Muijrers, Nicole H. T. M.

2017-01-01

Background Despite high vaccination coverage, pertussis incidence in the Netherlands is amongst the highest in Europe with a shifting tendency towards adults and the elderly. Early detection of outbreaks and preventive actions are necessary to prevent severe complications in infants. Efficient pertussis control requires additional background knowledge about the determinants of testing and possible determinants of the current pertussis incidence. Therefore, the aim of our study is to examine the possibility of locating possible pertussis outbreaks using space-time cluster detection and to examine the determinants of pertussis testing and incidence using geographically weighted regression models. Methods We analysed laboratory registry data including all geocoded pertussis tests in the southern area of the Netherlands between 2007 and 2013. Socio-demographic and infrastructure-related population data were matched to the geo-coded laboratory data. The spatial scan statistic was applied to detect spatial and space-time clusters of testing, incidence and test-positivity. Geographically weighted Poisson regression (GWPR) models were then constructed to model the associations between the age-specific rates of testing and incidence and possible population-based determinants. Results Space-time clusters for pertussis incidence overlapped with space-time clusters for testing, reflecting a strong relationship between testing and incidence, irrespective of the examined age group. Testing for pertussis itself was overall associated with lower socio-economic status, multi-person-households, proximity to primary school and availability of healthcare. The current incidence, in contrast, is mainly determined by testing and is not associated with a lower socioeconomic status. Discussion Testing for pertussis follows to an extent the general healthcare seeking behaviour for common respiratory infections, whereas the current pertussis incidence is largely the result of testing.
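Stripped of the spatial weighting, the building block of GWPR is ordinary log-linear Poisson regression, which can be fit by Newton's method (equivalently IRLS). A hedged stdlib-only sketch, with names and data of my own; a GWPR would additionally multiply each observation's contribution by a spatial kernel weight:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def poisson_irls(X, y, iters=40):
    """Fit log-linear Poisson regression E[y] = exp(X @ beta) by Newton's
    method: gradient X^T (y - mu), Hessian X^T diag(mu) X."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        mu = [math.exp(sum(b * xj for b, xj in zip(beta, row))) for row in X]
        g = [sum((y[i] - mu[i]) * X[i][j] for i in range(len(y)))
             for j in range(p)]
        H = [[sum(mu[i] * X[i][j] * X[i][k] for i in range(len(y)))
              for k in range(p)] for j in range(p)]
        beta = [b + s for b, s in zip(beta, solve(H, g))]
    return beta

# noiseless check: responses set to their exact means recover beta = (0.5, 0.3)
X = [[1.0, xi / 10] for xi in range(20)]
y = [math.exp(0.5 + 0.3 * row[1]) for row in X]
beta = poisson_irls(X, y)
```

With the responses set to their exact conditional means, the score equations are solved exactly at the true coefficients, so the fit recovers them to high precision.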

3. Understanding the changes in ductility and Poisson's ratio of metallic glasses during annealing from microscopic dynamics

SciTech Connect

Wang, Z.; Ngai, K. L.; Wang, W. H.

2015-07-21

In the paper by K. L. Ngai et al. [J. Chem. Phys. 140, 044511 (2014)], the empirical correlation of ductility with Poisson's ratio, ν_Poisson, found in metallic glasses was theoretically explained by microscopic dynamic processes which link on the one hand ductility, and on the other hand Poisson's ratio. Specifically, the dynamic processes are the primitive relaxation in the Coupling Model, which is the precursor of the Johari–Goldstein β-relaxation, and the caged atoms dynamics characterized by the effective Debye–Waller factor f_0 or, equivalently, the nearly constant loss (NCL) in susceptibility. All these processes and the parameters characterizing them are accessible experimentally except f_0 or the NCL of caged atoms; thus, so far, the experimental verification of the explanation of the correlation between ductility and Poisson's ratio is incomplete. In the experimental part of this paper, we report dynamic mechanical measurement of the NCL of the metallic glass La60Ni15Al25 as-cast, and the changes by annealing at temperature below T_g. The observed monotonic decrease of the NCL with aging time, reflecting the corresponding increase of f_0, correlates with the decrease of ν_Poisson. This is an important observation because such measurements, not made before, provide the missing link in confirming by experiment the explanation of the correlation of ductility with ν_Poisson. On aging the metallic glass, also observed in the isochronal loss spectra is the shift of the β-relaxation to higher temperatures and reduction of the relaxation strength. These concomitant changes of the β-relaxation and NCL are the root cause of embrittlement by aging the metallic glass. The NCL of caged atoms is terminated by the onset of the primitive relaxation in the Coupling Model, which is generally supported by experiments. From this relation, the monotonic decrease of the NCL with aging time is caused by the slowing down

4. A Fast Poisson Solver with Periodic Boundary Conditions for GPU Clusters in Various Configurations

Rattermann, Dale Nicholas

Fast Poisson solvers using the Fast Fourier Transform on uniform grids are especially suited for parallel implementation, making them appropriate for portability on graphical processing unit (GPU) devices. The goal of the following work was to implement, test, and evaluate a fast Poisson solver for periodic boundary conditions for use on a variety of GPU configurations. The solver used in this research was FLASH, an immersed-boundary-based method, which is well suited for complex, time-dependent geometries, has robust adaptive mesh refinement/de-refinement capabilities to capture evolving flow structures, and has been successfully implemented on conventional, parallel supercomputers. However, these solvers are still computationally costly to employ, and the total solver time is dominated by the solution of the pressure Poisson equation using state-of-the-art multigrid methods. FLASH improves the performance of its multigrid solvers by integrating a parallel FFT solver on a uniform grid during a coarse level. This hybrid solver could then be theoretically improved by replacing the highly-parallelizable FFT solver with one that utilizes GPUs, and, thus, was the motivation for my research. In the present work, the CPU-utilizing parallel FFT solver (PFFT) used in the base version of FLASH for solving the Poisson equation on uniform grids has been modified to enable parallel execution on CUDA-enabled GPU devices. New algorithms have been implemented to replace the Poisson solver that decompose the computational domain and send each new block to a GPU for parallel computation. One-dimensional (1-D) decomposition of the computational domain minimizes the amount of network traffic involved in this bandwidth-intensive computation by limiting the amount of all-to-all communication required between processes. Advanced techniques have been incorporated and implemented in a GPU-centric code design, while allowing end users the flexibility of parameter control at runtime in

5. Process modeling with the regression network.

PubMed

van der Walt, T; Barnard, E; van Deventer, J

1995-01-01

A new connectionist network topology called the regression network is proposed. The structural and underlying mathematical features of the regression network are investigated. Emphasis is placed on the intricacies of the optimization process for the regression network and some measures to alleviate these difficulties of optimization are proposed and investigated. The ability of the regression network algorithm to perform either nonparametric or parametric optimization, as well as a combination of both, is also highlighted. It is further shown how the regression network can be used to model systems which are poorly understood on the basis of sparse data. A semi-empirical regression network model is developed for a metallurgical processing operation (a hydrocyclone classifier) by building mechanistic knowledge into the connectionist structure of the regression network model. Poorly understood aspects of the process are provided for by use of nonparametric regions within the structure of the semi-empirical connectionist model. The performance of the regression network model is compared to the corresponding generalization performance results obtained by some other nonparametric regression techniques.

6. Quantile regression applied to spectral distance decay

USGS Publications Warehouse

2008-01-01

Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
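The objective behind quantile regression is the check (pinball) loss; a one-dimensional sketch shows that minimizing it recovers empirical quantiles rather than the mean (the data and names are illustrative, not from the letter):

```python
def pinball_loss(q, data, tau):
    """Average check (pinball) loss; over q it is minimized at the
    tau-th quantile of the data."""
    return sum(tau * (x - q) if x >= q else (1 - tau) * (q - x)
               for x in data) / len(data)

data = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 100.0]  # heavy upper tail
# the minimum is attained at a data point, so it suffices to search the sample
best_median = min(data, key=lambda q: pinball_loss(q, data, 0.5))
best_q75 = min(data, key=lambda q: pinball_loss(q, data, 0.75))
```

Note how the outlier 100 drags the mean (13.6) far above the tau = 0.5 minimizer, while the upper-quantile fit responds to the tail; this is the mechanism that lets quantile regression see trends that OLS averages away.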

7. [From clinical judgment to linear regression model].

PubMed

Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

2013-01-01

When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, the regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant and is equivalent to the value of "Y" when "X" equals 0, and "b" (also called slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases in one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R(2)) indicates the importance of independent variables in the outcome.
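The quantities defined above (intercept a, slope b, and the coefficient of determination R²) can be computed directly from the closed-form least-squares formulas (illustrative sketch; names are mine):

```python
def linreg(xs, ys):
    """Least-squares fit ys ~ a + b*xs; returns (a, b, r2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                     # slope: change in Y per unit of X
    a = my - b * mx                   # intercept: Y when X = 0
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot          # coefficient of determination
    return a, b, r2

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 + 0.5 * x for x in xs]      # exact line: a = 2, b = 0.5
a, b, r2 = linreg(xs, ys)
```

On exactly linear data the residual sum of squares is zero, so R² = 1; with real clinical data R² quantifies how much of the outcome's variance the predictor explains.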

8. Geodesic least squares regression on information manifolds

SciTech Connect

Verdoolaege, Geert

2014-12-05

We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.

9. The mechanical influences of the graded distribution in the cross-sectional shape, the stiffness and Poisson's ratio of palm branches.

PubMed

Liu, Wangyu; Wang, Ningling; Jiang, Xiaoyong; Peng, Yujian

2016-07-01

The branching system plays an important role in maintaining the survival of palm trees. Due to the nature of monocots, no additional vascular bundles can be added in the palm tree tissue as it ages. Therefore, the changing of the cross-sectional area in the palm branch creates a graded distribution in the mechanical properties of the tissue. In the present work, this graded distribution in the tissue mechanical properties from sheath to petiole was studied with a multi-scale modeling approach. Then, the entire palm branch was reconstructed and analyzed using finite element methods. The variation of the elastic modulus can lower the level of mechanical stress in the sheath and also allow the branch to have smaller values of pressure on the other branches. Under impact loading, the enhanced frictional dissipation at the surfaces of adjacent branches benefits from the large Poisson's ratio of the sheath tissue. These findings can help to link the wind resistance ability of palm trees to their graded materials distribution in the branching system.

NASA Technical Reports Server (NTRS)

Ellis, Rod; Bartolotta, Paul

1990-01-01

Improved design for induction-heating work coil facilitates optimization of heating in different metal specimens. Three segments adjusted independently to obtain desired distribution of temperature. Reduces time needed to achieve required temperature profiles.

11. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

SciTech Connect

Laurence, T; Chromy, B

2009-11-10

Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, to the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, as it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
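The article's contribution is a multi-parameter Levenberg-Marquardt scheme for the Poisson MLE; as a hedged one-parameter stand-in (not the authors' algorithm), here is a Poisson maximum-likelihood fit of an exponential-decay histogram model with the amplitude profiled out in closed form and the rate found by ternary search. All names and the test data are mine.

```python
import math

t = list(range(10))                                  # bin centers
A_true, k_true = 100.0, 0.3
y = [A_true * math.exp(-k_true * ti) for ti in t]    # noiseless check "counts"

def profile_loglik(k):
    """Poisson log-likelihood sum(y*log(mu) - mu) for mu_i = A*exp(-k*t_i),
    with the amplitude A profiled out (score in A set to zero)."""
    e = [math.exp(-k * ti) for ti in t]
    A = sum(y) / sum(e)
    return sum(yi * math.log(A * ei) - A * ei for yi, ei in zip(y, e))

lo, hi = 0.05, 1.0
for _ in range(100):            # ternary search; the profile is unimodal here
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if profile_loglik(m1) < profile_loglik(m2):
        lo = m1
    else:
        hi = m2
k_hat = (lo + hi) / 2
A_hat = sum(y) / sum(math.exp(-k_hat * ti) for ti in t)
```

Maximizing the Poisson likelihood rather than a least-squares measure is precisely what removes the low-count bias the abstract describes; the L-M extension generalizes this objective to many parameters with Newton-like steps.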

12. Life adjustment correlates of physical self-concepts.

PubMed

Sonstroem, R J; Potts, S A

1996-05-01

This research tested relationships between physical self-concepts and contemporary measures of life adjustment. University students (119 females, 126 males) completed the Physical Self-Perception Profile assessing self-concepts of sport competence, physical condition, attractive body, strength, and general physical self-worth. Multiple regression found significant associations (P < 0.05 to P < 0.001) in hypothesized directions between physical self-concepts and positive affect, negative affect, depression, and health complaints in 17 of 20 analyses. Thirteen of these relationships remained significant when controlling for the Bonferroni effect. Hierarchical multiple regression examined the unique contribution of physical self-perceptions in predicting each adjustment variable after accounting for the effects of global self-esteem and two measures of social desirability. Physical self-concepts significantly improved associations with life adjustment (P < 0.05 to P < 0.05) in three of the eight analyses across gender and approached significance in three others. These data demonstrate that self-perceptions of physical competence in college students are essentially related to life adjustment, independent of the effects of social desirability and global self-esteem. These links are mainly with perceptions of sport competence in males and with perceptions of physical condition, attractive body, and general physical self-worth in both males and females.

PubMed Central

McGuire, Thomas G.; Glazer, Jacob; Newhouse, Joseph P.; Normand, Sharon-Lise; Shi, Julie; Sinaiko, Anna D.; Zuvekas, Samuel

2013-01-01

In two important health policy contexts – private plans in Medicare and the new state-run “Exchanges” created as part of the Affordable Care Act (ACA) – plan payments come from two sources: risk-adjusted payments from a Regulator and premiums charged to individual enrollees. This paper derives principles for integrating risk-adjusted payments and premium policy in individual health insurance markets based on fitting total plan payments to health plan costs per person as closely as possible. A least squares regression including both health status and variables used in premiums reveals the weights a Regulator should put on risk adjusters when markets determine premiums. We apply the methods to an Exchange-eligible population drawn from the Medical Expenditure Panel Survey (MEPS). PMID:24308878
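The fitting step described above can be sketched as an ordinary least squares regression of per-person plan cost on a health-status risk adjuster and a premium-rating variable; the fitted coefficients are the weights on each payment source. All variable names, distributions, and coefficient values below are synthetic assumptions, not MEPS estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical enrollee data: one health-status risk adjuster and one
# premium-rating variable (an age-band indicator); purely illustrative.
health_score = rng.gamma(shape=2.0, scale=1.0, size=n)
age_band = rng.integers(0, 2, size=n).astype(float)

# Synthetic plan cost per person driven by both variables plus noise.
cost = 1000.0 + 800.0 * health_score + 500.0 * age_band \
       + rng.normal(0.0, 100.0, n)

# Fit total payments to cost: OLS on [intercept, adjuster, premium variable].
X = np.column_stack([np.ones(n), health_score, age_band])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(f"intercept={beta[0]:.0f}, health weight={beta[1]:.0f}, "
      f"age-band weight={beta[2]:.0f}")
```

The recovered weights approximate the generating coefficients; in the paper's setting, the coefficient on the health-status variable plays the role of the Regulator's risk-adjustment weight once premiums absorb the rating variables.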

PubMed Central

Moosmann, Danyel A.V.; Roosa, Mark W.

2015-01-01

Although Mexican Americans are the largest ethnic minority group in the nation, knowledge is limited regarding this population's adolescent romantic relationships. This study explored whether 12th grade Mexican Americans' (N = 218; 54% female) romantic relationship characteristics, cultural values, and gender created unique latent classes and, if so, whether the classes were linked to adjustment. Latent class analyses suggested three profiles characterized, in relative terms, by higher, satisfactory, and lower quality romantic relationships. Regression analyses indicated these profiles had distinct associations with adjustment. Specifically, adolescents with higher and satisfactory quality romantic relationships reported greater future family expectations, higher self-esteem, and fewer externalizing symptoms than those with lower quality romantic relationships. Similarly, adolescents with higher quality romantic relationships reported greater academic self-efficacy and fewer sexual partners than those with lower quality romantic relationships. Overall, results suggested higher quality romantic relationships were most optimal for adjustment. Future research directions and implications are discussed. PMID:26141198

15. Slits, plates, and Poisson-Boltzmann theory in a local formulation of nonlocal electrostatics.

PubMed

Paillusson, Fabien; Blossey, Ralf

2010-11-01

Polar liquids like water carry a characteristic nanometric length scale, the correlation length of orientation polarizations. Continuum theories that can capture this feature commonly run under the name of "nonlocal" electrostatics since their dielectric response is characterized by a scale-dependent dielectric function ε(q), where q is the wave vector; the Poisson(-Boltzmann) equation then turns into an integro-differential equation. Recently, "local" formulations have been put forward for these theories and applied to water, solvated ions, and proteins. We review the local formalism and show how it can be applied to a structured liquid in slit and plate geometries, and solve the Poisson-Boltzmann theory for a charged plate in a structured solvent with counterions. Our results establish a coherent picture of the local version of nonlocal electrostatics and show its ease of use when compared to the original formulation.
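In the local, linearized (Debye-Hückel) limit of Poisson-Boltzmann theory with a uniform dielectric, the potential near a charged plate decays as exp(-κx), where 1/κ is the Debye length. A minimal check of that length scale, assuming a 0.1 M 1:1 salt in water at room temperature (the nonlocal, structured-solvent corrections reviewed in the paper are not included):

```python
import numpy as np

# Debye screening parameter for the linearized Poisson-Boltzmann equation:
# phi'' = kappa^2 * phi  =>  phi(x) = phi0 * exp(-kappa * x).
eps0 = 8.854e-12              # vacuum permittivity, F/m
eps_r = 78.5                  # continuum dielectric constant of water
kB_T = 4.11e-21               # thermal energy at ~298 K, J
e = 1.602e-19                 # elementary charge, C
n0 = 0.1 * 1000 * 6.022e23    # ion number density for 0.1 mol/L 1:1 salt, 1/m^3

kappa = np.sqrt(2.0 * n0 * e**2 / (eps0 * eps_r * kB_T))
debye_length_nm = 1e9 / kappa
print(f"Debye length at 0.1 M: {debye_length_nm:.2f} nm")  # ~0.96 nm
```

The resulting sub-nanometer screening length is comparable to the polarization correlation length of water itself, which is why the nonlocal dielectric response ε(q) discussed in the abstract can matter at these scales.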

16. Beyond Poisson-Boltzmann: fluctuations and fluid structure in a self-consistent theory

2016-09-01

Poisson-Boltzmann (PB) theory is the classic approach to soft matter electrostatics and has been applied to numerous physical chemistry and biophysics problems. Its essential limitations are in its neglect of correlation effects and fluid structure. Recently, several theoretical insights have allowed the formulation of approaches that go beyond PB theory in a systematic way. In this topical review, we provide an update on the developments achieved in the self-consistent formulations of correlation-corrected Poisson-Boltzmann theory. We introduce a corresponding system of coupled non-linear equations for both continuum electrostatics with a uniform dielectric constant, and a structured solvent—a dipolar Coulomb fluid—including non-local effects. While the approach is only approximate and also limited to corrections in the so-called weak fluctuation regime, it allows us to include physically relevant effects, as we show for a range of applications of these equations.

17. Erratum: Poisson's ratio in layered two-dimensional crystals [Phys. Rev. B 93, 075420 (2016)]

Woo, Sungjong; Park, Hee Chul; Son, Young-Woo

2016-12-01

We present first-principles calculations of the elastic properties of multilayered two-dimensional crystals such as graphene, h-BN, and 2H-MoS2, showing that their Poisson's ratios along the out-of-plane direction are negative, near zero, and positive, respectively, spanning all possible signs of the ratio. While the in-plane Poisson's ratios are all positive regardless of these materials' disparate electronic and structural properties, the characteristic interlayer interactions as well as the layer stacking structures are shown to determine the sign of the out-of-plane ratios. A thorough investigation of the elastic properties as a function of the number of layers for each system is also provided, highlighting the intertwined nature of the elastic and electronic properties.
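For reference, Poisson's ratio is defined from the strain response, ν = -ε_transverse / ε_axial, so a negative out-of-plane ratio means the stack thickens when stretched in-plane. The strain values below are purely illustrative, not computed from first principles as in the paper:

```python
# Poisson's ratio relates transverse to axial strain: nu = -eps_t / eps_a.
eps_axial = 0.01          # 1% stretch applied in-plane (illustrative)
eps_out_of_plane = 0.002  # hypothetical interlayer expansion, not a DFT value

nu_out = -eps_out_of_plane / eps_axial
print(f"out-of-plane Poisson's ratio: {nu_out:.2f}")

# nu < 0 (auxetic response): the layer stack expands in thickness under
# in-plane stretching, the sign reported here for layered graphene.
```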

18. PB-AM: An open-source, fully analytical linear Poisson-Boltzmann solver.

PubMed

Felberg, Lisa E; Brookes, David H; Yap, Eng-Hui; Jurrus, Elizabeth; Baker, Nathan A; Head-Gordon, Teresa

2016-11-02

We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized PB equation for molecules represented as non-overlapping spherical cavities. The PB-AM software package includes the generation of output files appropriate for visualization using Visual Molecular Dynamics (VMD), a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it accessible to the larger group of scientists, educators, and students who are already familiar with the APBS framework. © 2016 Wiley Periodicals, Inc.

19. Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.

PubMed

Gao, Yi; Bouix, Sylvain

2016-05-01

Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework for quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures.
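As a toy illustration of the core computation, here is a 2-D finite-difference solve of a Poisson equation with a known analytic solution. The SPoM construction solves such equations on volumetric shapes of arbitrary topology, which this flat-grid sketch does not attempt:

```python
import numpy as np

# Solve laplace(u) = f on the unit square with u = 0 on the boundary,
# using Jacobi iteration on a 5-point stencil.
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Choose f so the exact solution is known: u = sin(pi x) sin(pi y).
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi**2 * u_exact

u = np.zeros((n, n))               # zero boundary values stay fixed
for _ in range(5000):
    # numpy evaluates the right-hand side before assigning: Jacobi sweep
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2] -
                            h**2 * f[1:-1, 1:-1])

err = np.max(np.abs(u - u_exact))
print(f"max error vs analytic solution: {err:.2e}")
```

After enough sweeps the iteration error is negligible and what remains is the O(h²) discretization error, here well below 1e-2.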

20. Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP

NASA Technical Reports Server (NTRS)

Chan, Tony F.; Fatoohi, Rod A.

1990-01-01

The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
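The serial kernel being multitasked here can be sketched as a conventional FFT-based Poisson solve: transform the right-hand side, divide by the Laplacian's spectral eigenvalues, and transform back. The periodic 2-D example below uses an illustrative grid size and forcing; no Cray-style macrotasking or microtasking is attempted:

```python
import numpy as np

# FFT-based fast Poisson solver on a periodic 2-D domain [0, 2*pi)^2.
n = 64
L = 2.0 * np.pi
x = np.arange(n) * L / n
X, Y = np.meshgrid(x, x, indexing="ij")

# Forcing with a known periodic solution u = sin(x) cos(2y):
# laplace(u) = -(1^2 + 2^2) u = -5 u.
u_exact = np.sin(X) * np.cos(2.0 * Y)
f = -5.0 * u_exact

# Integer wavenumbers for this domain length.
k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
denom = -(KX**2 + KY**2)
denom[0, 0] = 1.0              # avoid division by zero for the mean mode

u_hat = np.fft.fft2(f) / denom
u_hat[0, 0] = 0.0              # fix the mean of the solution to zero
u = np.real(np.fft.ifft2(u_hat))

err = np.max(np.abs(u - u_exact))
print(f"max error: {err:.2e}")
```

Because the forcing is band-limited, the spectral solve is exact to rounding error; in a domain decomposition setting, independent subdomain solves of this kind are what map naturally onto multiple processors.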