NASA Astrophysics Data System (ADS)
Darnah
2016-04-01
Poisson regression is used when the response variable is count data assumed to follow a Poisson distribution. The Poisson distribution assumes equidispersion (variance equal to the mean). In practice, count data are often overdispersed or underdispersed, in which case Poisson regression is inappropriate: it may underestimate the standard errors, overstate the significance of the regression parameters, and consequently give misleading inference about those parameters. This paper proposes the generalized Poisson regression model to handle overdispersion and underdispersion in the Poisson regression setting. Both the Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Under the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practise clean and healthy living and the percentage of families who do not have a healthy house. Because the Poisson regression model exhibits overdispersion, generalized Poisson regression is used instead. The best generalized Poisson regression model shows that the factor influencing filariasis is the percentage of families who do not have a healthy house: each additional percentage point of such families is associated with one additional filariasis case.
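The dispersion check that motivates the abstract above can be sketched in a few lines. This is an illustrative numpy sketch, not the authors' code: it fits a plain Poisson regression by Newton-Raphson/IRLS and computes the Pearson dispersion statistic, which is near 1 under equidispersion, well above 1 under overdispersion, and well below 1 under underdispersion. All variable names and the simulated data are assumptions for the demonstration.

```python
import numpy as np

def fit_poisson(X, y, n_iter=50):
    """Plain Poisson regression (log link) via Newton-Raphson / IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # fitted means
        z = X @ beta + (y - mu) / mu          # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

def pearson_dispersion(X, y, beta):
    """Pearson chi-square / residual df: ~1 under equidispersion,
    >> 1 suggests overdispersion, << 1 underdispersion."""
    mu = np.exp(X @ beta)
    return float(np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1]))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
X = np.column_stack([np.ones(500), x])
y = rng.poisson(np.exp(0.5 + 1.0 * x))        # equidispersed by construction
beta = fit_poisson(X, y)
phi = pearson_dispersion(X, y, beta)
```

With equidispersed simulated data the dispersion statistic stays close to 1; overdispersed data of the kind the paper addresses would push it well above 1, signalling that a generalized Poisson (or similar) model is warranted.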
NASA Astrophysics Data System (ADS)
Zamani, Hossein; Faroughi, Pouya; Ismail, Noriszura
2014-06-01
This study relates the Poisson, mixed Poisson (MP), generalized Poisson (GP) and finite Poisson mixture (FPM) regression models through their mean-variance relationships, and suggests the application of these models for overdispersed count data. As an illustration, the regression models are fitted to the US skin care count data. The results indicate that the FPM regression model is the best model since it provides the largest log-likelihood and the smallest AIC, followed by the Poisson-Inverse Gaussian (PIG), GP and negative binomial (NB) regression models. The results also show that NB, PIG and GP regression models provide similar results.
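The mean-variance relationships that distinguish these count models can be written down directly. The sketch below uses one common parameterization for each model; conventions vary across authors, so the exact forms are assumptions, not a restatement of this study's notation.

```python
def var_poisson(mu):
    return mu                                  # equidispersion: variance = mean

def var_negative_binomial(mu, alpha):
    return mu + alpha * mu ** 2                # NB2: overdispersion grows with the mean

def var_generalized_poisson(mu, lam):
    # Consul's GP with mean mu and index lam: lam > 0 gives overdispersion,
    # lam < 0 underdispersion, lam = 0 reduces to the Poisson
    return mu / (1.0 - lam) ** 2
```

Comparing fitted models through these variance functions (rather than only through AIC) makes explicit which kind of departure from equidispersion each model can absorb.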
Analyzing Historical Count Data: Poisson and Negative Binomial Regression Models.
ERIC Educational Resources Information Center
Beck, E. M.; Tolnay, Stewart E.
1995-01-01
Asserts that traditional approaches to multivariate analysis, including standard linear regression techniques, ignore the special character of count data. Explicates three suitable alternatives to standard regression techniques, a simple Poisson regression, a modified Poisson regression, and a negative binomial model. (MJP)
Poisson Regression Analysis of Illness and Injury Surveillance Data
Frome E.L., Watkins J.P., Ellis E.D.
2012-12-12
The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational database, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion, and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.
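A score test for extra-Poisson variation of the kind this report mentions can be sketched in the Dean-Lawless form. Which exact variant the report used is an assumption here; this is the textbook statistic, not the report's software.

```python
import math

def dean_score_test(y, mu):
    """Score statistic for extra-Poisson variation (Dean-Lawless form).
    Approximately N(0, 1) under the Poisson model; large positive values
    indicate overdispersion relative to the fitted means mu."""
    num = sum((yi - mi) ** 2 - yi for yi, mi in zip(y, mu))
    den = math.sqrt(2.0 * sum(mi ** 2 for mi in mu))
    return num / den
```

When the statistic is large, a quasi-likelihood adjustment of the kind described above would then scale the model-based standard errors by the square root of a Pearson-type dispersion estimate.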
Geographically weighted Poisson regression for disease association mapping.
Nakaya, T; Fotheringham, A S; Brunsdon, C; Charlton, M
2005-09-15
This paper describes geographically weighted Poisson regression (GWPR) and its semi-parametric variant as a new statistical tool for analysing disease maps arising from spatially non-stationary processes. The method is a type of conditional kernel regression which uses a spatial weighting function to estimate spatial variations in Poisson regression parameters. It enables us to draw surfaces of local parameter estimates which depict spatial variations in the relationships between disease rates and socio-economic characteristics. The method therefore can be used to test the general assumption made, often without question, in the global modelling of spatial data that the processes being modelled are stationary over space. Equally, it can be used to identify parts of the study region in which 'interesting' relationships might be occurring and where further investigation might be warranted. Such exceptions can easily be missed in traditional global modelling and therefore GWPR provides disease analysts with an important new set of statistical tools. We demonstrate the GWPR approach applied to a data set of working-age deaths in the Tokyo metropolitan area, Japan. The results indicate that there are significant spatial variations (that is, variation beyond that expected from random sampling) in the relationships between working-age mortality and occupational segregation and between working-age mortality and unemployment throughout the Tokyo metropolitan area and that, consequently, the application of traditional 'global' models would yield misleading results. PMID:16118814
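The core of GWPR is an ordinary Poisson fit repeated at each regression point, with observations down-weighted by distance. The numpy sketch below is illustrative only (not the authors' implementation) and assumes a Gaussian kernel with a fixed bandwidth; the paper's semi-parametric variant and bandwidth selection are not shown.

```python
import numpy as np

def gaussian_kernel_weights(coords, focal, bandwidth):
    """Spatial weights: observations near the focal point count fully,
    distant ones are down-weighted toward zero."""
    d = np.linalg.norm(coords - focal, axis=1)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def fit_weighted_poisson(X, y, w, n_iter=50):
    """Poisson IRLS in which each observation carries a spatial weight w_i.
    GWPR repeats this fit at every regression point with its own weights,
    yielding a surface of local parameter estimates."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = w * mu
        z = X @ beta + (y - mu) / mu
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

coords = np.array([[0.0, 0.0], [3.0, 4.0]])
w = gaussian_kernel_weights(coords, np.array([0.0, 0.0]), bandwidth=5.0)

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 4.0, 8.0])
beta = fit_weighted_poisson(X, y, np.ones(4))   # uniform weights = global fit
```

With uniform weights the local fit reduces to the global Poisson regression, which is exactly the stationarity assumption GWPR is designed to test.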
ALMASI, Afshin; RAHIMIFOROUSHANI, Abbas; ESHRAGHIAN, Mohammad Reza; MOHAMMAD, Kazem; PASDAR, Yahya; TARRAHI, Mohammad Javad; MOGHIMBEIGI, Abbas; AHMADI JOUYBARI, Touraj
2016-01-01
Background: The aim of this study was to assess the associations between nutrition and dental caries in permanent dentition among schoolchildren. Methods: A cross-sectional survey was undertaken on 698 schoolchildren aged 10 to 12 yr from a random sample of primary schools in Kermanshah, western Iran, in 2014. The study was based on data obtained from a questionnaire on nutritional habits and the outcome of the decayed/missing/filled teeth (DMFT) index. The association between predictors and dental caries was modeled using the Zero Inflated Generalized Poisson (ZIGP) regression model. Results: Fourteen percent of the children were caries free. The model showed that the odds of being in the caries-susceptible subgroup were 1.23 (95% CI: 1.08–1.51) times higher for girls than for boys (P=0.041). Additionally, the mean caries count in children who consumed fizzy soft beverages and sweet biscuits more than once daily was, respectively, 1.41 (95% CI: 1.19–1.63) and 1.27 (95% CI: 1.18–1.37) times that of children who consumed them less than three times a week or never. Conclusions: Girls were at a higher risk of caries than boys. Since our study showed that nutritional status may have a significant effect on caries in permanent teeth, we recommend that health promotion activities in school emphasize healthful eating practices, especially limiting beverages containing sugar to only occasionally between meals. PMID:27141498
Background stratified Poisson regression analysis of cohort data
Richardson, David B; Langholz, Bryan
2012-01-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as ‘nuisance’ variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this ‘conditional’ regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911
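The key algebraic fact in the abstract above, that conditioning on each stratum's total eliminates the stratum-specific baseline rate, can be sketched directly. This is an illustrative stdlib sketch of the conditional (multinomial) likelihood contribution of one stratum, not the authors' software; the function and argument names are assumptions.

```python
import math

def stratum_conditional_loglik(y, pt, rel_rate):
    """Conditional Poisson log-likelihood contribution of one background
    stratum. y: event counts per exposure cell, pt: person-time per cell,
    rel_rate: modelled relative rates. Conditioning on the stratum total
    makes the stratum-specific baseline rate drop out (the 'nuisance'
    intercept), leaving a multinomial term in the relative rates only."""
    denom = sum(p * r for p, r in zip(pt, rel_rate))
    return sum(yi * math.log(p * r / denom)
               for yi, p, r in zip(y, pt, rel_rate))
```

Multiplying every relative rate in a stratum by a constant leaves the value unchanged, which is exactly why the stratum intercepts never need to be estimated, however many strata there are.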
Collision prediction models using multivariate Poisson-lognormal regression.
El-Basyouny, Karim; Sayed, Tarek
2009-07-01
This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit compared with the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models. PMID:19540972
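The mechanism behind the correlated counts can be illustrated by simulation. This numpy sketch is not the paper's WinBUGS model: it simply draws correlated lognormal latent effects (using the abstract's reported latent correlation of 0.758, with an assumed latent variance) and shows that the resulting Poisson counts, standing in for PDO and I+F frequencies, inherit a positive correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
rho, s2 = 0.758, 0.2                         # latent correlation; variance is assumed
cov = np.array([[s2, rho * s2], [rho * s2, s2]])
# Correlated lognormal latent effects, one pair per site (e.g. PDO vs I+F)
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
mu = np.exp(np.log([4.0, 1.5]) + eps)        # site-specific Poisson means
counts = rng.poisson(mu)                     # observed collision counts
r = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
```

The count-level correlation is attenuated relative to the latent 0.758 by Poisson noise, which is why modelling the latent correlation directly, as MVPLN does, recovers precision that separate univariate fits discard.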
Mixed-effects Poisson regression analysis of adverse event reports
Gibbons, Robert D.; Segawa, Eisuke; Karabatsos, George; Amatya, Anup K.; Bhaumik, Dulal K.; Brown, C. Hendricks; Kapur, Kush; Marcus, Sue M.; Hur, Kwan; Mann, J. John
2008-01-01
A new statistical methodology is developed for the analysis of spontaneous adverse event (AE) reports from post-marketing drug surveillance data. The method involves both empirical Bayes (EB) and fully Bayes estimation of rate multipliers for each drug within a class of drugs, for a particular AE, based on a mixed-effects Poisson regression model. Both parametric and semiparametric models for the random-effect distribution are examined. The method is applied to data from Food and Drug Administration (FDA)’s Adverse Event Reporting System (AERS) on the relationship between antidepressants and suicide. We obtain point estimates and 95 per cent confidence (posterior) intervals for the rate multiplier for each drug (e.g. antidepressants), which can be used to determine whether a particular drug has an increased risk of association with a particular AE (e.g. suicide). Confidence (posterior) intervals that do not include 1.0 provide evidence for either significant protective or harmful associations of the drug and the adverse effect. We also examine EB, parametric Bayes, and semiparametric Bayes estimators of the rate multipliers and associated confidence (posterior) intervals. Results of our analysis of the FDA AERS data revealed that newer antidepressants are associated with lower rates of suicide adverse event reports compared with older antidepressants. We recommend improvements to the existing AERS system, which are likely to improve its public health value as an early warning system. PMID:18404622
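The rate-multiplier idea can be illustrated with the simplest possible Bayes machinery. The paper's actual model is a mixed-effects Poisson regression; the sketch below is only the conjugate gamma-Poisson special case, with all parameter names and values assumed, to show how an estimate shrinks toward the prior when a drug has little data.

```python
def eb_rate_multiplier(y, e, a=1.0, b=1.0):
    """Posterior mean of a drug's AE rate multiplier under a Gamma(a, b)
    prior with observed count y and expected count e:
    lambda | y ~ Gamma(a + y, b + e). With little data (small e) the
    estimate shrinks toward the prior mean a / b; with much data it
    approaches the raw ratio y / e."""
    return (a + y) / (b + e)
```

An interval for the same posterior that excludes 1.0 would play the role of the paper's evidence for a protective or harmful association, though the paper's mixed-effects formulation borrows strength across drugs rather than fixing the prior in advance.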
Poisson regression analysis of mortality among male workers at a thorium-processing plant
Liu, Zhiyuan; Lee, Tze-San; Kotek, T.J.
1991-12-31
Analyses of mortality among a cohort of 3119 male workers employed between 1915 and 1973 at a thorium-processing plant were updated to the end of 1982. Of the whole group, 761 men were deceased and 2161 men were still alive, while 197 men were lost to follow-up. A total of 250 deaths was added to the 511 deaths observed in the previous study. The standardized mortality ratio (SMR) for all causes of death was 1.12 with 95% confidence interval (CI) of 1.05-1.21. The SMRs were also significantly increased for all malignant neoplasms (SMR = 1.23, 95% CI = 1.04-1.43) and lung cancer (SMR = 1.36, 95% CI = 1.02-1.78). Poisson regression analysis was employed to evaluate the joint effects of job classification, duration of employment, time since first employment, age and year at first employment on mortality of all malignant neoplasms and lung cancer. A comparison of internal and external analyses with the Poisson regression model was also conducted and showed no obvious difference in fitting the data on lung cancer mortality of the thorium workers. The results of the multivariate analysis showed that there was no significant effect of all the study factors on mortality due to all malignant neoplasms and lung cancer. Therefore, further study is needed for the former thorium workers.
Miaou, Shaw-Pin
1993-07-01
This paper evaluates the performance of Poisson and negative binomial (NB) regression models in establishing the relationship between truck accidents and geometric design of road sections. Three types of models are considered: Poisson regression, zero-inflated Poisson (ZIP) regression, and NB regression. The maximum likelihood (ML) method is used to estimate the unknown parameters of these models. Two other feasible estimators for estimating the dispersion parameter in the NB regression model are also examined: a moment estimator and a regression-based estimator. These models and estimators are evaluated based on their (1) estimated regression parameters, (2) overall goodness-of-fit, (3) estimated relative frequency of truck accident involvements across road sections, (4) sensitivity to the inclusion of short road sections, and (5) estimated total number of truck accident involvements. Data from the Highway Safety Information System (HSIS) are employed to examine the performance of these models in developing such relationships. The evaluation results suggest that the NB regression model estimated using the moment and regression-based methods should be used with caution. Also, under the ML method, the estimated regression parameters from all three models are quite consistent and no particular model outperforms the other two in terms of the estimated relative frequencies of truck accident involvements across road sections. It is recommended that the Poisson regression model be used as an initial model for developing the relationship. If the overdispersion of accident data is found to be moderate or high, both the NB and ZIP regression models could be explored. Overall, the ZIP regression model appears to be a serious candidate model when data exhibit excess zeros due, e.g., to underreporting.
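The moment estimator of the NB dispersion parameter that the paper cautions about can be sketched in a few lines. This is the standard NB2 method-of-moments form, assumed here as an illustration rather than the paper's exact estimator.

```python
def nb_dispersion_moment(y, mu):
    """Method-of-moments estimate of the NB2 dispersion alpha, solving
    Var(Y) = mu + alpha * mu**2 from squared residuals around fitted
    Poisson means mu. A result of zero (negative values are truncated)
    means no overdispersion beyond the Poisson."""
    num = sum((yi - mi) ** 2 - mi for yi, mi in zip(y, mu))
    den = sum(mi ** 2 for mi in mu)
    return max(0.0, num / den)
```

Because the numerator is a difference of noisy quantities, the estimate can be unstable in small samples, which is consistent with the paper's recommendation to treat moment-based NB fits with caution.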
Fuzzy classifier based support vector regression framework for Poisson ratio determination
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa
2013-09-01
Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation in low Poisson ratios and underestimation in high Poisson ratios. These errors were eliminated through implementation of fuzzy classifier based SVR (FCBSVR), which significantly improved the accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicated that SVR-predicted Poisson ratio values are in good agreement with measured values.
Lord, Dominique; Washington, Simon P; Ivan, John N
2005-01-01
This paper examines how crash data give rise to the "excess" zeros frequently observed in such datasets. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros. PMID:15607273
Longevity Is Linked to Mitochondrial Mutation Rates in Rockfish: A Test Using Poisson Regression.
Hua, Xia; Cowman, Peter; Warren, Dan; Bromham, Lindell
2015-10-01
The mitochondrial theory of ageing proposes that the cumulative effect of biochemical damage in mitochondria causes mitochondrial mutations and plays a key role in ageing. Numerous studies have applied comparative approaches to test one of the predictions of the theory: that the rate of mitochondrial mutations is negatively correlated with longevity. Comparative studies face three challenges in detecting correlates of mutation rate: covariation of mutation rates between species due to ancestry, covariation between life-history traits, and difficulty obtaining accurate estimates of mutation rate. We address these challenges using a novel Poisson regression method to examine the link between mutation rate and lifespan in rockfish (Sebastes). This method has better performance than traditional sister-species comparisons when sister species are too recently diverged to give reliable estimates of mutation rate. Rockfish are an ideal model system: they have long life spans with indeterminate growth and little evidence of senescence, which minimizes the confounding tradeoffs between lifespan and fecundity. We show that lifespan in rockfish is negatively correlated with the rate of mitochondrial mutation, but not the rate of nuclear mutation. The life history of rockfish allows us to conclude that this relationship is unlikely to be driven by the tradeoffs between longevity and fecundity, or by the frequency of DNA replications in the germline. Instead, the relationship is compatible with the hypothesis that mutation rates are reduced by selection in long-lived taxa to reduce the chance of mitochondrial damage over their lifespans, consistent with the mitochondrial theory of ageing. PMID:26048547
Montanaro, Fabio; Ceppi, Marcello; Puntoni, Riccardo; Silvano, Stefania; Gennaro, Valerio
2004-04-01
The authors investigated the relationship between asbestos exposure and respiratory cancer mortality among maintenance workers and other blue-collar workers at an Italian oil refinery. The cohort contained 931 men, 29,511 person-years, and 489 deaths. Poisson regression analysis using white-collar workers as an internal referent group provided relative risk estimates (RRs) for the main causes of death, adjusted for age, age at hiring, calendar period, length of exposure, and latency. Among maintenance workers, RRs for all tumors (RR = 1.50), digestive system cancers (RR = 1.41), lung cancers (RR = 1.53), and nonmalignant respiratory diseases (RR = 1.71) were significantly increased (p < 0.05); no significant excess for all causes of death was found among maintenance workers (RR = 1.12) or other blue-collar workers (RR = 1.01). Results confirm the increased risk of death from respiratory diseases and cancer among maintenance workers exposed to asbestos, whereas other smoking-related diseases (circulatory system) were not statistically different among groups. PMID:16189991
Effect of air pollution on lung cancer: A Poisson regression model based on vital statistics
Tango, Toshiro
1994-11-01
This article describes a Poisson regression model for time trends of mortality to detect the long-term effects of common levels of air pollution on lung cancer, in which adjustment for cigarette smoking is not always necessary. The main hypothesis to be tested in the model is that if long-term, common-level air pollution had an effect on lung cancer, the death rate from lung cancer could be expected to increase gradually at a higher rate in the region with relatively high levels of air pollution than in the region with low levels, and that this trend would not be expected for other control diseases in which cigarette smoking is a risk factor. Using this approach, we analyzed the trend of mortality in females aged 40 to 79 from lung cancer and two control diseases, ischemic heart disease and cerebrovascular disease, based on vital statistics in 23 wards of the Tokyo metropolitan area for 1972 to 1988. Ward-specific mean levels per day of SO₂ and NO₂ from 1974 through 1976 estimated by Makino (1978) were used as the ward-specific exposure measure of air pollution. No data on tobacco consumption in each ward are available. Our analysis supported the existence of long-term effects of air pollution on lung cancer.
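Mortality-trend models of this kind are Poisson regressions on death counts with the log population (or person-years) entering as a fixed offset, so the coefficients describe rates rather than counts. The numpy sketch below is illustrative only; the two-region data, the indicator coding, and all names are assumptions, not this article's model.

```python
import math
import numpy as np

def fit_poisson_offset(X, y, offset, n_iter=50):
    """Poisson regression for rates: log E[y] = offset + X @ beta, with the
    offset set to log person-years (or log population) so that beta models
    the death rate rather than the raw count."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(offset + X @ beta)
        z = X @ beta + (y - mu) / mu
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

# Hypothetical two-region example: deaths, populations, and an indicator
# for the high-pollution region
X = np.array([[1.0, 0.0], [1.0, 1.0]])
deaths = np.array([10.0, 40.0])
log_pop = np.log(np.array([1000.0, 2000.0]))
beta = fit_poisson_offset(X, deaths, log_pop)
rate_ratio = math.exp(beta[1])                # mortality rate ratio between regions
```

Here the fitted rate ratio recovers exactly the ratio of the two crude rates (40/2000 vs 10/1000); with real trend data the model would add terms in calendar time and pollution level.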
A marginalized zero-inflated Poisson regression model with overall exposure effects.
Long, D Leann; Preisser, John S; Herring, Amy H; Golin, Carol E
2014-12-20
The zero-inflated Poisson (ZIP) regression model is often employed in public health research to examine the relationships between exposures of interest and a count outcome exhibiting many zeros, in excess of the amount expected under sampling from a Poisson distribution. The regression coefficients of the ZIP model have latent class interpretations, which correspond to a susceptible subpopulation at risk for the condition with counts generated from a Poisson distribution and a non-susceptible subpopulation that provides the extra or excess zeros. The ZIP model parameters, however, are not well suited for inference targeted at marginal means, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. We develop a marginalized ZIP model approach for independent responses to model the population mean count directly, allowing straightforward inference for overall exposure effects and empirical robust variance estimation for overall log-incidence density ratios. Through simulation studies, the performance of maximum likelihood estimation of the marginalized ZIP model is assessed and compared with other methods of estimating overall exposure effects. The marginalized ZIP model is applied to a recent study of a motivational interviewing-based safer sex counseling intervention, designed to reduce unprotected sexual act counts. PMID:25220537
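The mixture structure and the marginal mean that the marginalized ZIP model targets can be written down directly. This stdlib sketch states the standard ZIP probability mass function and the overall mean; it is the textbook form, not the authors' estimation code.

```python
import math

def zip_pmf(k, pi, mu):
    """Zero-inflated Poisson: a structural zero with probability pi
    (the non-susceptible subpopulation), otherwise Poisson(mu)."""
    pois = math.exp(-mu) * mu ** k / math.factorial(k)
    return (pi + (1 - pi) * pois) if k == 0 else (1 - pi) * pois

def zip_marginal_mean(pi, mu):
    """Overall (marginal) mean of the mixture, E[Y] = (1 - pi) * mu.
    The marginalized ZIP model parameterizes this quantity directly,
    so covariate effects apply to the whole population."""
    return (1 - pi) * mu
```

In the standard ZIP parameterization a covariate effect on mu applies only within the susceptible latent class; reparameterizing so that covariates act on the marginal mean is what permits the straightforward overall-exposure inference described above.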
Knafl, George J; Fennie, Kristopher P; Bova, Carol; Dieckhaus, Kevin; Williams, Ann B
2004-03-15
An adaptive approach to Poisson regression modelling is presented for analysing event data from electronic devices monitoring medication-taking. The emphasis is on applying this approach to data for individual subjects although it also applies to data for multiple subjects. This approach provides for visualization of adherence patterns as well as for objective comparison of actual device use with prescribed medication-taking. Example analyses are presented using data on openings of electronic pill bottle caps monitoring adherence of subjects with HIV undergoing highly active antiretroviral therapies. The modelling approach consists of partitioning the observation period, computing grouped event counts/rates for intervals in this partition, and modelling these event counts/rates in terms of elapsed time after entry into the study using Poisson regression. These models are based on adaptively selected sets of power transforms of elapsed time determined by rule-based heuristic search through arbitrary sets of parametric models, thereby effectively generating a smooth non-parametric regression fit to the data. Models are compared using k-fold likelihood cross-validation. PMID:14981675
Association between large strongyle genera in larval cultures--using rare-event poisson regression.
Cao, X; Vidyashankar, A N; Nielsen, M K
2013-09-01
Decades of intensive anthelmintic treatment have caused equine large strongyles to become quite rare, while the cyathostomins have developed resistance to several drug classes. The larval culture has been associated with low to moderate negative predictive values for detecting Strongylus vulgaris infection. It is unknown whether detection of other large strongyle species can be statistically associated with presence of S. vulgaris. This remains a statistical challenge because of the rare occurrence of large strongyle species. This study used a modified Poisson regression to analyse a dataset for associations between S. vulgaris infection and simultaneous occurrence of Strongylus edentatus and Triodontophorus spp. In 663 horses on 42 Danish farms, the individual prevalences of S. vulgaris, S. edentatus and Triodontophorus spp. were 12%, 3% and 12%, respectively. Both S. edentatus and Triodontophorus spp. were significantly associated with S. vulgaris infection with relative risks above 1. Further, S. edentatus was associated with use of selective therapy on the farms, as well as negatively associated with anthelmintic treatment carried out within 6 months prior to the study. The findings illustrate that occurrence of S. vulgaris in larval cultures can be interpreted as indicative of other large strongyles being likely to be present. PMID:23731556
NASA Astrophysics Data System (ADS)
Winahju, W. S.; Mukarromah, A.; Putri, S.
2015-03-01
Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important public health issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, with 18,994 cases (8.7% of the world total). This number automatically places Indonesia as the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of both multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The experimental units are located in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have a significant influence.
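The bivariate Poisson model used for the two correlated counts is commonly built by trivariate reduction: Y1 = X1 + X0 and Y2 = X2 + X0 with independent Poisson components, where the shared X0 induces the correlation. A minimal sketch of that probability mass function (illustrative only; the paper's regression links the rates to the predictors):

```python
import math

def bivariate_poisson_pmf(y1, y2, l1, l2, l0):
    """Bivariate Poisson pmf via trivariate reduction: Y1 = X1 + X0,
    Y2 = X2 + X0, with X1 ~ Pois(l1), X2 ~ Pois(l2), X0 ~ Pois(l0).
    l0 is the covariance between Y1 and Y2; l0 = 0 gives independence."""
    total = 0.0
    for k in range(min(y1, y2) + 1):
        total += (l1 ** (y1 - k) * l2 ** (y2 - k) * l0 ** k
                  / (math.factorial(y1 - k) * math.factorial(y2 - k)
                     * math.factorial(k)))
    return math.exp(-(l1 + l2 + l0)) * total

# With l0 = 0 the pmf factorizes into two independent Poisson pmfs.
p = bivariate_poisson_pmf(1, 2, 1.0, 2.0, 0.0)
print(p)  # equals Pois(1) pmf at 1 times Pois(2) pmf at 2
```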
Spontaneous hypnotic age regression: case report.
Spiegel, D; Rosenfeld, A
1984-12-01
Age regression--reliving the past as though it were occurring in the present, with age appropriate vocabulary, mental content, and affect--can occur with instruction in highly hypnotizable individuals, but has rarely been reported to occur spontaneously, especially as a primary symptom. The psychiatric presentation and treatment of a 16-year-old girl with spontaneous age regressions accessible and controllable with hypnosis and psychotherapy are described. Areas of overlap and divergence between this patient's symptoms and those found in patients with hysterical fugue and multiple personality syndrome are also discussed. PMID:6501240
Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.
2012-01-01
A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public. PMID:23393408
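The paper's partially collapsed MCMC sampler is far more sophisticated, but the core idea of a change in the baseline Poisson rate can be illustrated with a simple maximum-likelihood grid search for a single change point (a sketch under the assumption of one change and no covariates):

```python
import math

def seg_loglik(counts):
    """Poisson log-likelihood of a segment evaluated at its MLE rate (the mean)."""
    if not counts:
        return 0.0
    lam = sum(counts) / len(counts)
    if lam == 0:
        return 0.0
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

def best_changepoint(counts):
    """Index k maximizing the two-segment Poisson likelihood counts[:k] | counts[k:]."""
    best_k, best_ll = None, float("-inf")
    for k in range(1, len(counts)):
        ll = seg_loglik(counts[:k]) + seg_loglik(counts[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# Hypothetical daily counts whose baseline rate jumps partway through.
series = [2, 3, 2, 1, 3, 2, 8, 9, 7, 10, 8, 9]
print(best_changepoint(series))  # 6: the rate jumps between index 5 and 6
```

The Bayesian step-function model in the paper additionally lets the number of change points vary, which is what requires the varying-dimensional inference.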
AL-Hashimi, Muzahem Mohammed Yahya; Wang, XiangJun
2013-01-01
Background: Iraq fought three wars in three consecutive decades: the Iran-Iraq war (1980-1988), the Persian Gulf War in 1991, and the Iraq war in 2003. From the 1990s up to the present time, there have been anecdotal reports of an increase in cancer in Ninawa, as in all provinces of Iraq, possibly as a result of exposure to depleted uranium used by American troops in the last two wars. This paper deals with cancer incidence in Ninawa, the most important province in Iraq, many of whose sons were soldiers in the Iraqi army and participated in the wars. Materials and Methods: The data were derived from the Directorate of Health in Ninawa and divided into three sub-periods: 1980-1990, 1991-2000, and 2001-2010. The analyses were performed using Poisson regressions. The response variable is the cancer incidence number; cancer cases, age, sex, and year were considered as the explanatory variables, and the logarithm of the population of Ninawa is used as an offset. The aim of this paper is to model the cancer incidence data and estimate the cancer incidence rate ratio (IRR) to illustrate the changes in cancer incidence in Ninawa over these three periods. Results: There is evidence of a reduction in the cancer IRR in Ninawa in the third period as well as in the second period. Our analyses found that breast cancer remained the most common cancer, while cancer of the lung, trachea, and bronchus remained the second most common despite decreasing dramatically. There were modest increases in the incidence of cancers of the prostate, penis, and other male genitals over the study period, and stability in the incidence of colon cancer in the second and third periods. There were modest increases in the incidence of placental and metastatic tumors, while the largest increase was in leukemia in the third period relative to the second period, but not to the first period. The cancer IRR in men was more than 33% higher than that of females in the first period and more than 39% higher in the second
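With a log-population offset, the exponentiated coefficient of a period indicator in a Poisson regression is exactly the ratio of the two periods' incidence rates. A sketch of that identity with hypothetical counts (not the Ninawa data):

```python
def incidence_rate_ratio(cases_a, pop_a, cases_b, pop_b):
    """IRR of period B versus period A. This equals exp(beta) for the
    period-B indicator in a Poisson regression of counts with a
    log(population) offset."""
    rate_a = cases_a / pop_a
    rate_b = cases_b / pop_b
    return rate_b / rate_a

# Hypothetical data: 300 cases per 1.0M people, then 240 cases per 1.2M.
irr = incidence_rate_ratio(300, 1_000_000, 240, 1_200_000)
print(round(irr, 3))  # 0.667 -> a reduction in incidence in period B
```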
Cui, Yuehua; Yang, Wenzhao
2009-01-21
Phenotypes measured in counts are commonly observed in nature. Statistical methods for mapping quantitative trait loci (QTL) underlying count traits are documented in the literature. The majority of them assume that the count phenotype follows a Poisson distribution with appropriate techniques being applied to handle data dispersion. When a count trait has a genetic basis, "naturally occurring" zero status also reflects the underlying gene effects. Simply ignoring or mishandling the zero data may lead to wrong QTL inference. In this article, we propose an interval mapping approach for mapping QTL underlying count phenotypes containing many zeros. The effects of QTLs on the zero-inflated count trait are modelled through the zero-inflated generalized Poisson regression mixture model, which can handle the zero inflation and Poisson dispersion in the same distribution. We implement the approach using the EM algorithm with the Newton-Raphson algorithm embedded in the M-step, and provide a genome-wide scan for testing and estimating the QTL effects. The performance of the proposed method is evaluated through extensive simulation studies. Extensions to composite and multiple interval mapping are discussed. The utility of the developed approach is illustrated through a mouse F2 intercross data set. Significant QTLs are detected to control mouse cholesterol gallstone formation. PMID:18977361
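The zero-inflated generalized Poisson (ZIGP) distribution used above mixes a point mass at zero with a generalized Poisson component that accommodates dispersion. A minimal sketch of the pmf (the regression model additionally links the parameters to QTL genotypes; parameter values below are illustrative):

```python
import math

def gp_pmf(y, theta, lam):
    """Generalized Poisson pmf; lam = 0 reduces to ordinary Poisson(theta),
    0 < lam < 1 gives over-dispersion. Mean is theta / (1 - lam)."""
    return (theta * (theta + lam * y) ** (y - 1)
            * math.exp(-theta - lam * y) / math.factorial(y))

def zigp_pmf(y, pi, theta, lam):
    """Zero-inflated GP: extra probability mass pi at zero,
    weight (1 - pi) on the generalized Poisson component."""
    p = (1.0 - pi) * gp_pmf(y, theta, lam)
    return p + pi if y == 0 else p

# With lam = 0 the GP part is Poisson, so P(0) = pi + (1 - pi) * exp(-theta).
print(zigp_pmf(0, 0.3, 2.0, 0.0))  # = 0.3 + 0.7 * exp(-2)
```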
2013-01-01
Background Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. According to our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. the outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. Methods The data is extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample which is based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for this outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother’s education, father’s education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions Consistency of our findings with many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
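The under-dispersion (variance < mean) that justifies the GPR model over plain Poisson can be checked directly from the sample moments. A minimal sketch with hypothetical counts:

```python
def dispersion(counts):
    """Sample mean and variance of a count variable. var < mean suggests
    under-dispersion, var > mean over-dispersion; plain Poisson assumes
    var = mean, which is what motivates the GPR alternative."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return mean, var

# Hypothetical counts of malnourished under-five children per family.
mean, var = dispersion([0, 1, 1, 0, 1, 2, 1, 0, 1, 1])
print(mean, var)  # variance below the mean -> under-dispersion
```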
Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath
2016-06-01
Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study on a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data by a marginal regression model with a zero-inflated Conway-Maxwell-Poisson (ZICMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. We introduce two estimation methods for fitting a ZICMP marginal regression model. Finite sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling, incorporating zero inflation, clustering, and overdispersion, sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. PMID:26575079
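The Conway-Maxwell-Poisson distribution handles both over- and under-dispersion through a single extra parameter nu; its pmf is proportional to lambda^y / (y!)^nu, with a normalizing constant that has no closed form and is usually truncated numerically. A minimal sketch (computed in log space to avoid overflow; the truncation length is an assumption):

```python
import math

def cmp_pmf(y, lam, nu, trunc=100):
    """Conway-Maxwell-Poisson pmf with a truncated normalizing constant.
    nu < 1 gives over-dispersion, nu > 1 under-dispersion, nu = 1 is Poisson."""
    log_w = lambda k: k * math.log(lam) - nu * math.lgamma(k + 1)
    logs = [log_w(k) for k in range(trunc)]
    m = max(logs)  # stabilize the sum of exponentials
    z = sum(math.exp(l - m) for l in logs)
    return math.exp(log_w(y) - m) / z

# nu = 1 recovers the ordinary Poisson pmf.
print(cmp_pmf(2, 3.0, 1.0))  # ~ exp(-3) * 3**2 / 2!
```

The ZICMP model in the abstract adds a point mass at zero on top of this pmf and links its parameters to covariates.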
Kauhl, Boris; Heil, Jeanne; Hoebe, Christian J. P. A.; Schweikart, Jürgen; Krafft, Thomas; Dukers-Muijrers, Nicole H. T. M.
2015-01-01
Background Hepatitis C Virus (HCV) infections are a major cause of liver diseases. A large proportion of these infections remain hidden to care due to their mostly asymptomatic nature. Population-based screening and screening targeted on behavioural risk groups have not proven to be effective in revealing these hidden infections. Therefore, more practically applicable approaches to target screenings are necessary. Geographic Information Systems (GIS) and spatial epidemiological methods may provide a more feasible basis for screening interventions through the identification of hotspots as well as demographic and socio-economic determinants. Methods Analysed data included all HCV tests (n = 23,800) performed in the southern area of the Netherlands between 2002–2008. HCV positivity was defined as a positive immunoblot or polymerase chain reaction test. Population data were matched to the geocoded HCV test data. The spatial scan statistic was applied to detect areas with elevated HCV risk. We applied global regression models to determine associations between population-based determinants and HCV risk. Geographically weighted Poisson regression models were then constructed to determine local differences in the association between HCV risk and population-based determinants. Results HCV prevalence varied geographically and clustered in urban areas. The main populations at risk were middle-aged males, non-western immigrants and divorced persons. Socio-economic determinants consisted of one-person households, persons with low income and mean property value. However, the association between HCV risk and demographic as well as socio-economic determinants displayed strong regional and intra-urban differences. Discussion The detection of local hotspots in our study may serve as a basis for prioritization of areas for future targeted interventions. Demographic and socio-economic determinants associated with HCV risk show regional differences underlining that a one
Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo
2015-08-01
Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. The model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were both superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components. PMID:26081677
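In a Fourier Poisson regression, seasonality enters as paired sine/cosine covariates of the time index; the fitted coefficients of these pairs determine the amplitude (magnitude) of the seasonal pattern. A minimal sketch of building those covariates (the period and number of harmonics here are illustrative choices, not the paper's):

```python
import math

def fourier_terms(t, period=52, harmonics=2):
    """Seasonal covariates sin(2*pi*k*t/T) and cos(2*pi*k*t/T),
    k = 1..harmonics, for one row of a Fourier Poisson regression
    design matrix. t is the time index, T the seasonal period."""
    terms = []
    for k in range(1, harmonics + 1):
        angle = 2.0 * math.pi * k * t / period
        terms.extend([math.sin(angle), math.cos(angle)])
    return terms

# Week 13 of a 52-week year is a quarter cycle: sin = 1.0, cos ~ 0.0.
row = fourier_terms(13, period=52, harmonics=1)
print(row)
```

Letting the amplitude itself change linearly with calendar time, or jump at a fixed point, yields the competing models the abstract compares by AIC.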
Ma, Jianming; Kockelman, Kara M; Damien, Paul
2008-05-01
Numerous efforts have been devoted to investigating crash occurrence as related to roadway design features, environmental factors and traffic conditions. However, most of the research has relied on univariate count models; that is, traffic crash counts at different levels of severity are estimated separately, which may neglect shared information in unobserved error terms, reduce efficiency in parameter estimates, and lead to potential biases in sample databases. This paper offers a multivariate Poisson-lognormal (MVPLN) specification that simultaneously models crash counts by injury severity. The MVPLN specification allows for a more general correlation structure as well as overdispersion. This approach addresses several questions that are difficult to answer when estimating crash counts separately. Thanks to recent advances in crash modeling and Bayesian statistics, parameter estimation is done within the Bayesian paradigm, using a Gibbs Sampler and the Metropolis-Hastings (M-H) algorithms for crashes on Washington State rural two-lane highways. Estimation results from the MVPLN approach show statistically significant correlations between crash counts at different levels of injury severity. The non-zero diagonal elements suggest overdispersion in crash counts at all levels of severity. The results lend themselves to several recommendations for highway safety treatments and design policies. For example, wide lanes and shoulders are key for reducing crash frequencies, as are longer vertical curves. PMID:18460364
Change with age in regression construction of fat percentage for BMI in school-age children.
Fujii, Katsunori; Mishima, Takaaki; Watanabe, Eiji; Seki, Kazuyoshi
2011-01-01
In this study, curvilinear regression was applied to the relationship between BMI and body fat percentage, and an analysis was done to see whether there are characteristic changes in that curvilinear regression from elementary to middle school. Then, by simultaneously investigating the changes with age in BMI and body fat percentage, the essential differences between BMI and body fat percentage were demonstrated. The subjects were 789 boys and girls (469 boys, 320 girls) aged 7.5 to 14.5 years from all parts of Japan who participated in regular sports activities. Body weight, total body water (TBW), soft lean mass (SLM), body fat percentage, and fat mass were measured with a body composition analyzer (Tanita BC-521 Inner Scan), using segmental and multi-frequency bioelectrical impedance analysis. Height was measured with a digital height measurer. Body mass index (BMI) was calculated as body weight (kg) divided by the square of height (m). The results for the validity of regression polynomials of body fat percentage against BMI showed that, for both boys and girls, first-order polynomials were valid in all school years. With regard to changes with age in BMI and body fat percentage, the results showed a temporary drop at 9 years in the aging distance curve in boys, followed by an increasing trend. Peaks were seen in the velocity curve at 9.7 and 11.9 years, but the MPV was presumed to be at 11.9 years. Among girls, a decreasing trend was seen in the aging distance curve, which was opposite to the changes in the aging distance curve for body fat percentage. PMID:21483178
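The BMI formula stated above can be sketched directly (example values are hypothetical):

```python
def bmi(weight_kg, height_m):
    """Body mass index: body weight in kilograms divided by the square
    of height in metres, as defined in the study."""
    return weight_kg / height_m ** 2

# Hypothetical child: 45 kg at 1.5 m.
print(round(bmi(45.0, 1.5), 1))  # 20.0
```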
Poisson's ratio and crustal seismology
Christensen, N.I.
1996-02-10
This report discusses the use of Poisson's ratio to place constraints on continental crustal composition. A summary of Poisson's ratios for many common rock formations is also included, with emphasis on igneous and metamorphic rock properties.
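In crustal seismology, Poisson's ratio is typically obtained from the ratio of P- and S-wave velocities via the standard isotropic-elasticity relation sigma = (Vp^2 - 2Vs^2) / (2(Vp^2 - Vs^2)). A minimal sketch (the velocity values are illustrative, not from the report):

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities, assuming an
    isotropic elastic medium: sigma = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2))."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# Vp/Vs = sqrt(3) gives the common crustal reference value sigma = 0.25.
print(round(poissons_ratio(3.0 ** 0.5, 1.0), 2))  # 0.25
```

Because sigma varies systematically between felsic and mafic rocks, measured Vp/Vs ratios constrain crustal composition, which is the use the report discusses.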
Silva, Fabyano Fonseca; Tunin, Karen P; Rosa, Guilherme J M; da Silva, Marcos V B; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto
2011-10-01
Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and to identify the respective quantitative trait loci (QTLs) and DNA markers for posterior use in breeding programs. The number of ticks per animal is characterized as a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is recommendable to use the generalized and simple ZIP models for analysis. On the other hand, when working with data with zeros, but not zero-inflated, the Poisson model or a data-transformation approach, such as square-root or Box-Cox transformation, is applicable. PMID:22215960
Age Regression in the Treatment of Anger in a Prison Setting.
ERIC Educational Resources Information Center
Eisel, Harry E.
1988-01-01
Incorporated hypnotherapy with age regression into cognitive therapeutic approach with prisoners having history of anger. Technique involved age regression to establish first significant event causing current anger, catharsis of feelings for original event, and reorientation of event while under hypnosis. Results indicated decrease in acting-out…
Ishigami, Hideaki
2016-01-01
Relative age effect (RAE) in sports has been well documented. Recent studies investigate the effect of birthplace in addition to the RAE. The first objective of this study was to show the magnitude of the RAE in two major professional sports in Japan, baseball and soccer. Second, we examined the birthplace effect and compared its magnitude with that of the RAE. The effect sizes were estimated using a Bayesian hierarchical Poisson model with the number of players as dependent variable. The RAEs were 9.0% and 7.7% per month for soccer and baseball, respectively. These estimates imply that children born in the first month of a school year have about three times greater chance of becoming a professional player than those born in the last month of the year. Over half of the difference in likelihoods of becoming a professional player between birthplaces was accounted for by weather conditions, with the likelihood decreasing by 1% per snow day. An effect of population size was not detected in the data. By investigating different samples, we demonstrated that using quarterly data leads to underestimation and that the age range of sampled athletes should be set carefully. PMID:25917193
Nie, Lei; Wu, Gang; Brockman, Fred J.; Zhang, Weiwen
2006-05-04
Advances in DNA microarray and proteomics technologies have enabled high-throughput measurement of mRNA expression and protein abundance. Parallel profiling of mRNA and protein on a global scale and integrative analysis of these two data types could provide additional insight into the metabolic mechanisms underlying complex biological systems. However, because protein abundance and mRNA expression are affected by many cellular and physical processes, there have been conflicting results on the correlation of these two measurements. In addition, as current proteomic methods can detect only a small fraction of proteins present in cells, no correlation study of these two data types has been done thus far at the whole-genome level. In this study, we describe a novel data-driven statistical model to integrate whole-genome microarray and proteomic data collected from Desulfovibrio vulgaris grown under three different conditions. Based on the Poisson distribution pattern of proteomic data and the fact that a large number of proteins were undetected (excess zeros), zero-inflated Poisson models were used to define the correlation pattern of mRNA and protein abundance. The models assumed that there is a probability mass at zero representing some of the undetected proteins because of technical limitations. The models thus use abundance measurements of transcripts and proteins experimentally detected as input to generate predictions of protein abundances as output for all genes in the genome. We demonstrated the statistical models by comparatively analyzing D. vulgaris grown on lactate-based versus formate-based media. The increased expressions of Ech hydrogenase and alcohol dehydrogenase (Adh)-periplasmic Fe-only hydrogenase (Hyd) pathway for ATP synthesis were predicted for D. vulgaris grown on formate.
Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying
2016-01-01
Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period of 2008–2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after considering cluster effects, the provincial correlated structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results The incidence of HFMD was spatially heterogeneous across provinces, and the scale measurement of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse "V"-shaped and "V"-shaped relationships with HFMD incidence, respectively. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks, respectively. Highly spatially correlated HFMD incidence was detected in northern, central and southern provinces. Temperature explained most of the variation in HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still had high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic
A novel strategy for forensic age prediction by DNA methylation and support vector regression model
Xu, Cheng; Qu, Hongzhu; Wang, Guangyu; Xie, Bingbing; Shi, Yi; Yang, Yaran; Zhao, Zhao; Hu, Lan; Fang, Xiangdong; Yan, Jiangwei; Feng, Lei
2015-01-01
High deviations resulting from the prediction model, gender and population differences have limited the application of DNA methylation markers to age estimation. Here we identified 2,957 novel age-associated DNA methylation sites (P < 0.01 and R2 > 0.5) in blood of eight pairs of Chinese Han female monozygotic twins. Among them, nine novel sites (false discovery rate < 0.01), along with three other reported sites, were further validated in 49 unrelated female volunteers with ages of 20–80 years by Sequenom MassArray. A total of 95 CpGs were covered in the PCR products, and 11 of them were used to build the age prediction models. After comparing four different models, including multivariate linear regression, multivariate nonlinear regression, back propagation neural network and support vector regression (SVR), SVR was identified as the most robust model, with the least mean absolute deviation from real chronological age (2.8 years) and an average accuracy of 4.7 years predicted by only six of the 11 loci, as well as a smaller cross-validated error compared with the linear regression model. Our novel strategy provides an accurate measurement that is highly useful in estimating individual age in forensic practice as well as in tracking the aging process in other related applications. PMID:26635134
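The mean absolute deviation used above to compare the regression models is simply the average of the absolute prediction errors. A minimal sketch (predicted ages below are hypothetical, not the study's SVR output):

```python
def mean_absolute_deviation(predicted, actual):
    """Mean absolute deviation between predicted and chronological ages,
    the accuracy metric used to rank the candidate regression models."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical predictions for three volunteers of known age.
print(mean_absolute_deviation([34.1, 52.9, 61.0], [36.0, 50.0, 60.0]))
```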
Flood, D G; Coleman, P D
1993-01-01
As neurons are lost in normal aging, the dendrites of surviving neighbor neurons may proliferate, regress, or remain unchanged. In the case of age-related dendritic regression, it has been difficult to distinguish whether the regression precedes neuronal death or whether it is a consequence of loss of afferent supply. The rat supraoptic nucleus (SON) represents a model system in which there is no age-related loss of neurons, but in which there is an age-related loss of afferents. The magnocellular neurosecretory neurons of the SON, that produce vasopressin and oxytocin for release in the posterior pituitary, were studied in male Fischer 344 rats at 3, 12, 20, 27, 30, and 32 months of age. Counts in Nissl-stained sections showed no neuronal loss with age, and confirmed similar findings in other strains of rat and in mouse and human. Nucleolar size increased between 3 and 12 months of age, due, in part, to nucleolar fusion, and was unchanged between 12 and 32 months of age, indicating maintenance of general cellular function in old age. Dendritic extent quantified in Golgi-stained tissue increased between 3 and 12 months of age, was stable between 12 and 20 months, and decreased between 20 and 27 months. We interpret the increase between 3 and 12 months as a late maturational change. Dendritic regression between 20 and 27 months was probably the result of deafferentation due to the preceding age-related loss of the noradrenergic input to the SON from the ventral medulla. PMID:7507575
Age estimation based on pelvic ossification using regression models from conventional radiography.
Zhang, Kui; Dong, Xiao-Ai; Fan, Fei; Deng, Zhen-Hua
2016-07-01
The aim was to establish regression models for age estimation from the combined ossification status of the iliac crest and ischial tuberosity. One thousand three hundred and seventy-nine conventional pelvic radiographs taken at the West China Hospital of Sichuan University between January 2010 and June 2012 were evaluated retrospectively. Receiver operating characteristic (ROC) analysis was performed to measure the value of the classification scheme for the iliac crest and ischial tuberosity in estimating attainment of 18 years of age. Regression analysis was performed, and formulas for calculating approximate chronological age from the combined developmental status of the ossification of the iliac crest and ischial tuberosity were developed. The areas under the ROC curves were above 0.9 (p < 0.001), indicating good predictive performance of the grading systems, and the cubic regression model was found to have the highest R-squared value (R2 = 0.744 for females and R2 = 0.753 for males). The present classification scheme for apophyseal iliac crest and ischial tuberosity ossification may be used for age estimation, and the established cubic regression model based on the combined developmental status of these ossification centres can be used for age estimation. PMID:27169673
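A fitted cubic regression of the kind described is evaluated as age = c0 + c1*s + c2*s^2 + c3*s^3, where s is the combined ossification stage. A minimal sketch (the coefficients below are hypothetical placeholders, not the values fitted in the study):

```python
def cubic_age_estimate(stage, coef):
    """Evaluate a cubic regression model age = c0 + c1*s + c2*s^2 + c3*s^3
    for a combined ossification stage s. Coefficients are hypothetical."""
    c0, c1, c2, c3 = coef
    return c0 + c1 * stage + c2 * stage ** 2 + c3 * stage ** 3

# Hypothetical coefficients; stage 4 gives 10 + 6 + 3.2 + 0.64 years.
print(cubic_age_estimate(4, (10.0, 1.5, 0.2, 0.01)))
```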
Somkantha, Krit; Theera-Umpon, Nipon; Auephanwiriyakul, Sansanee
2011-12-01
Boundary extraction of carpal bone images is a critical operation of the automatic bone age assessment system, since the contrast between the bony structure and soft tissue is very poor. In this paper, we present an edge following technique for boundary extraction in carpal bone images and apply it to assess bone age in young children. Our proposed technique can detect the boundaries of carpal bones in X-ray images by using the information from the vector image model and the edge map. Feature analysis of the carpal bones can reveal important information for bone age assessment. Five features for bone age assessment are calculated from the boundary extraction result of each carpal bone. All features are taken as input into the support vector regression (SVR) that assesses the bone age. We compare the SVR with the neural network regression (NNR). We use 180 images of carpal bones from a digital hand atlas to assess the bone age of young children from 0 to 6 years old. Leave-one-out cross validation is used for testing the efficiency of the techniques. The opinions of the skilled radiologists provided in the atlas are used as the ground truth in bone age assessment. The SVR is able to provide more accurate bone age assessment results than the NNR. The experimental results from SVR are very close to the bone age assessment by skilled radiologists. PMID:21347746
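The leave-one-out cross validation used to compare the SVR and NNR fits the model on all samples but one and scores it on the held-out sample, cycling through the whole dataset. A minimal generic sketch (the toy mean-predictor stands in for the actual SVR/NNR models):

```python
def loo_cv_error(xs, ys, fit, predict):
    """Leave-one-out cross-validation: for each sample, fit on the rest,
    score on the held-out sample, and average the absolute errors."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        model = fit(train_x, train_y)
        errors.append(abs(predict(model, xs[i]) - ys[i]))
    return sum(errors) / len(errors)

# Toy stand-in regressor: always predict the mean of the training targets.
fit = lambda xs, ys: sum(ys) / len(ys)
predict = lambda model, x: model
print(loo_cv_error([1, 2, 3], [2.0, 4.0, 6.0], fit, predict))  # 2.0
```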
2015-01-01
Objective Evidence collected in many parts of the world suggests that, compared to older students, students who are relatively younger at school entry tend to have worse academic performance and lower levels of income. This study examined how relative age in a grade affects suicide rates of adolescents and young adults between 15 and 25 years of age, using data from Japan. Method We examined individual death records in the Vital Statistics of Japan from 1989 to 2010. In contrast to other countries, late entry to primary school is not allowed in Japan. We took advantage of the school entry cutoff date to implement a regression discontinuity (RD) design, assuming that the timing of births around the cutoff date was randomly determined and therefore that individuals born just before and just after the cutoff date have similar baseline characteristics. Results We found that those who were born right before the school entry cutoff date, and are thus the youngest in their cohort, have higher suicide mortality rates than their peers born right after the cutoff date, who are thus the oldest. We also found that those with a relative age disadvantage tend to follow a different career path than those with a relative age advantage, which may explain their higher suicide mortality rates. Conclusion Relative age effects have broader consequences than previously supposed. This study suggests that policy interventions that alleviate the relative age effect can be important. PMID:26309241
Duarte, Elisa; de Sousa, Bruno; Cadarso-Suarez, Carmen; Rodrigues, Vitor; Kneib, Thomas
2014-05-01
Breast cancer risk is believed to be associated with several reproductive factors, such as early menarche and late menopause. This study is based on the registries of the first time a woman enters the screening program, and presents a spatio-temporal analysis of the variables age of menarche and age of menopause along with other reproductive and socioeconomic factors. The database was provided by the Portuguese Cancer League (LPCC), a private nonprofit organization dealing with multiple issues related to oncology of which the Breast Cancer Screening Program is one of its main activities. The registry consists of 259,652 records of women who entered the screening program for the first time between 1990 and 2007 (45-69-year age group). Structured Additive Regression (STAR) models were used to explore spatial and temporal correlations with a wide range of covariates. These models are flexible enough to deal with a variety of complex datasets, allowing us to reveal possible relationships among the variables considered in this study. The analysis shows that early menarche occurs in younger women and in municipalities located in the interior of central Portugal. Women living in inland municipalities register later ages for menopause, and those born in central Portugal after 1933 show a decreasing trend in the age of menopause. Younger ages of menarche and late menopause are observed in municipalities with a higher purchasing power index. The analysis performed in this study portrays the time evolution of the age of menarche and age of menopause and their spatial characterization, adding to the identification of factors that could be of the utmost importance in future breast cancer incidence research. PMID:24615881
Stefanello, C; Vieira, S L; Xue, P; Ajuwon, K M; Adeola, O
2016-07-01
A study was conducted to determine the ileal digestible energy (IDE), ME, and MEn contents of bakery meal using the regression method and to evaluate whether the energy values are age-dependent in broiler chickens from zero to 21 d post hatching. Seven hundred and eighty male Ross 708 chicks were fed 3 experimental diets in which bakery meal was incorporated into a corn-soybean meal-based reference diet at zero, 100, or 200 g/kg by replacing the energy-yielding ingredients. A 3 × 3 factorial arrangement of 3 ages (1, 2, or 3 wk) and 3 dietary bakery meal levels was used, and birds were fed the same experimental diets at each of the 3 ages evaluated. Birds were grouped by weight into 10 replicates per treatment in a randomized complete block design. Apparent ileal digestibility and total tract retention of DM, N, and energy were calculated. Expression of the mucin (MUC2), sodium-dependent phosphate transporter (NaPi-IIb), solute carrier family 7 (cationic amino acid transporter, Y(+) system, SLC7A2), glucose (GLUT2), and sodium-glucose linked transporter (SGLT1) genes was measured at each age in the jejunum by real-time PCR. Addition of bakery meal to the reference diet resulted in a linear decrease in retention of DM, N, and energy, and a quadratic reduction (P < 0.05) in N retention and ME. There was a linear increase in retention of DM, N, and energy as bird age increased from 1 to 3 wk. Dietary bakery meal did not affect jejunal gene expression. Expression of the genes encoding MUC2, NaPi-IIb, and SLC7A2 increased linearly (P < 0.05) with age. Regression-derived MEn of bakery meal increased linearly (P < 0.05) with bird age, with values of 2,710, 2,820, and 2,923 kcal/kg DM for 1, 2, and 3 wk, respectively. Based on these results, utilization of energy and nitrogen in the basal diet decreased when bakery meal was included and increased with the age of broiler chickens. PMID:26944962
Zoche-Golob, V; Heuwieser, W; Krömker, V
2015-09-01
The objective of the present study was to investigate the association between the milk fat-protein ratio and the incidence rate of clinical mastitis, including repeated cases of clinical mastitis, to determine the usefulness of this association for monitoring metabolic disorders as risk factors for udder health. Herd records from 10 dairy herds of Holstein cows in Saxony, Germany, from September 2005 to 2011 (36,827 lactations of 17,657 cows) were used for statistical analysis. A mixed Poisson regression model with the weekly incidence rate of clinical mastitis as the outcome variable was fitted. The model included repeated events of the outcome, time-varying covariates, and multilevel clustering. Because the recording of clinical mastitis might have been imperfect, a probabilistic bias analysis was conducted to assess the impact of misclassification of clinical mastitis on the conventional results. The lactational incidence of clinical mastitis was 38.2%. In 36.2% and 34.9% of the lactations, there was at least one dairy herd test day with a fat-protein ratio of <1.0 or >1.5, respectively. Misclassification of clinical mastitis was assumed to have biased results towards the null. A clinical mastitis case increased the incidence rate of subsequent cases in the same cow. Fat-protein ratios of <1.0 and >1.5 were associated with higher incidence rates of clinical mastitis depending on week in milk. The effect of a fat-protein ratio >1.5 on the incidence rate of clinical mastitis increased considerably over the course of lactation, whereas the effect of a fat-protein ratio <1.0 decreased. Fat-protein ratios of <1.0 or >1.5 on the preceding test days of all cows, irrespective of their time in milk, seemed to be better predictors of clinical mastitis than the first test day results per lactation. PMID:26164530
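The core of the model above is a Poisson regression with a log link. A minimal sketch of that core, fitted by Newton-Raphson on synthetic count data, is given below; the paper's full mixed model with repeated events, time-varying covariates, and multilevel clustering is well beyond this illustration, and the covariate here is only a hypothetical stand-in for something like the fat-protein ratio.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic counts whose log-rate depends linearly on one covariate
n = 2000
x = rng.normal(size=n)
X = np.c_[np.ones(n), x]            # design matrix with intercept
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

# Poisson GLM with log link, fitted by Newton-Raphson (equivalently IRLS)
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)            # score vector
    hess = X.T @ (X * mu[:, None])   # Fisher information
    beta = beta + np.linalg.solve(hess, grad)
```

The log-likelihood is concave for the canonical log link, so Newton's method converges reliably from the zero start used here.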
Palmer, Michael; Mitra, Sophie; Mont, Daniel; Groce, Nora
2015-11-01
Accessing health services at an early age is important to future health and life outcomes. Yet, little is currently known on the role of health insurance in facilitating access to care for children. Exploiting a regression discontinuity design made possible through a policy to provide health insurance to pre-school aged children in Vietnam, this paper evaluates the impact of health insurance on the health care utilization outcomes of children at the eligibility threshold of six years. Using three rounds of the Vietnam Household Living Standards Survey, the study finds a positive impact on inpatient and outpatient visits and no significant impact on expenditures per visit at public facilities. We find moderately high use of private outpatient services and no evidence of a switch from private to covered public facilities under insurance. Results suggest that adopting public health insurance programs for children under age 6 may be an important vehicle to improving service utilization in a low- and middle-income country context. Challenges remain in providing adequate protections from the costs and other barriers to care. PMID:25147057
Chung, Moo K; Schaefer, Stacey M; Van Reekum, Carien M; Peschke-Schmitz, Lara; Sutterer, Matthew J; Davidson, Richard J
2014-01-01
We present a new unified kernel regression framework on manifolds. Starting with a symmetric positive definite kernel, we formulate a new bivariate kernel regression framework that is related to heat diffusion, kernel smoothing, and the recently popular diffusion wavelets. Various properties and the performance of the proposed kernel regression framework are demonstrated. The method is subsequently applied to investigate the influence of age and gender on the human amygdala and hippocampus shapes. We detected a significant age effect on the posterior regions of the hippocampi, while no gender effect was present. PMID:25485452
Reisinger, Thomas; Holst, Bodil; Patel, Amil A.; Smith, Henry I.; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo
2009-05-15
In the Poisson-spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. In this paper we report the observation of Poisson's spot using a beam of neutral deuterium molecules. The wavelength independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson's spot a promising candidate for applications ranging from the study of large molecule diffraction to patterning with molecules.
Brain trauma in aged transgenic mice induces regression of established abeta deposits.
Nakagawa, Y; Reed, L; Nakamura, M; McIntosh, T K; Smith, D H; Saatman, K E; Raghupathi, R; Clemens, J; Saido, T C; Lee, V M; Trojanowski, J Q
2000-05-01
Traumatic brain injury (TBI) increases susceptibility to Alzheimer's disease (AD), but it is not known if TBI affects the progression of AD. To address this question, we studied the neuropathological consequences of TBI in transgenic (TG) mice with a mutant human Abeta precursor protein (APP) mini-gene driven by a platelet-derived (PD) growth factor promoter resulting in overexpression of mutant APP (V717F), elevated brain Abeta levels, and AD-like amyloidosis. Since brain Abeta deposits first appear in 6-month-old TG (PDAPP) mice and accumulate with age, 2-year-old PDAPP and wild-type (WT) mice were subjected to controlled cortical impact (CCI) TBI or sham treatment. At 1, 9, and 16 weeks after TBI, neuron loss, gliosis, and atrophy were most prominent near the CCI site in PDAPP and WT mice. However, there also was a remarkable regression in the Abeta amyloid plaque burden in the hippocampus ipsilateral to TBI compared to the contralateral hippocampus of the PDAPP mice by 16 weeks postinjury. Thus, these data suggest that previously accumulated Abeta plaques resulting from progressive amyloidosis in the AD brain also may be reversible. PMID:10785464
Tandon, Ankita; Agarwal, Vartika; Arora, Varun
2015-01-01
Objectives The study was conducted to check the reliability of an India-specific regression formula for age estimation in the population in and around Bahadurgarh, Haryana (India). Materials and methods The study was conducted using digital orthopantomograms (OPGs) of 464 subjects (253 males and 211 females). Chronologic age (CA) was derived from that mentioned on the OPG. Each tooth in the left mandibular segment was scored using Demirjian's scoring, and age was calculated using the regression formulas derived by Acharya. The difference between the chronologic and estimated ages was used to check the reliability of the India-specific regression formula. Results The mean estimated age was found to be significantly higher than the CA, both overall and for each gender independently (p < 0.001). The absolute difference between estimated age and CA ranged from 0 to 4.2 years. The mean difference in age was 0.85 ± 0.73 years for males and 0.87 ± 0.76 years for females. Conclusion The published India-specific regression formula is not reliable for the population of Bahadurgarh, Haryana, and hence cannot be universally applied. PMID:26587381
NASA Astrophysics Data System (ADS)
Reisinger, Thomas; Patel, Amil; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo; Smith, Henry I.; Holst, Bodil
2009-03-01
In the Poisson-Spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. The Poisson spot is the last of the classical optics experiments to be realized with neutral matter waves. In this paper we report the observation of Poisson's Spot using a beam of neutral deuterium molecules. The wavelength-independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson's spot a promising candidate for applications ranging from the study of large-molecule diffraction and coherence in atom-lasers to patterning with large molecules.
Cumulative Poisson Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert
1990-01-01
Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for χ² (chi-square) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
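The overflow/underflow protection the abstract mentions can be illustrated by summing the Poisson terms in log space. This is a sketch of the general idea, not CUMPOIS itself (which is written in C and whose exact scaling scheme is not reproduced here):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summed in log space so that neither
    exp(-lam) nor lam**i overflows or underflows for large lam."""
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(k + 1)]
    m = max(log_terms)
    # log-sum-exp: factor out the largest term before exponentiating
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))
```

The gamma and chi-square uses described above follow from standard identities: P(X ≤ k; λ) equals the probability that a Gamma(k+1, 1) variable exceeds λ, and equivalently that a χ² variable with 2(k+1) degrees of freedom exceeds 2λ.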
Scaling the Poisson Distribution
ERIC Educational Resources Information Center
Farnsworth, David L.
2014-01-01
We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
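The additive property derived above, that independent X ~ Poisson(λ₁) and Y ~ Poisson(λ₂) give X + Y ~ Poisson(λ₁ + λ₂), can be checked numerically by convolving the two probability mass functions:

```python
import math

def pois_pmf(k, lam):
    # Poisson pmf evaluated via logs for numerical stability
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

lam1, lam2 = 2.0, 3.5

# P(X + Y = k) by convolution, compared against the Poisson(lam1 + lam2) pmf
max_err = max(
    abs(sum(pois_pmf(j, lam1) * pois_pmf(k - j, lam2) for j in range(k + 1))
        - pois_pmf(k, lam1 + lam2))
    for k in range(25)
)
```

The convolution sum collapses by the binomial theorem: Σⱼ C(k, j) λ₁ʲ λ₂^(k−j) = (λ₁ + λ₂)ᵏ, which is exactly the derivation from the pmf that the paper presents.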
Essential Variational Poisson Cohomology
NASA Astrophysics Data System (ADS)
De Sole, Alberto; Kac, Victor G.
2012-08-01
In our recent paper "The variational Poisson cohomology" (2011) we computed the dimension of the variational Poisson cohomology H•_K(V) for any quasiconstant coefficient ℓ × ℓ matrix differential operator K of order N with invertible leading coefficient, provided that V is a normal algebra of differential functions over a linearly closed differential field. In the present paper we show that, for K skewadjoint, the Z-graded Lie superalgebra H•_K(V) is isomorphic to the finite-dimensional Lie superalgebra H̃(Nℓ, S). We also prove that the subalgebra of "essential" variational Poisson cohomology, consisting of classes vanishing on the Casimirs of K, is zero. This vanishing result has applications to the theory of bi-Hamiltonian structures and their deformations. At the end of the paper we also consider the translation-invariant case.
ERIC Educational Resources Information Center
Pedrini, D. T.; Pedrini, Bonnie C.
Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…
ERIC Educational Resources Information Center
Newsom, Jason T.; Prigerson, Holly G.; Schulz, Richard; Reynolds, Charles F., III
2003-01-01
Many topics in aging research address questions about group differences in prediction. Such questions can be viewed in terms of interaction or moderator effects, and use of appropriate methods to test these hypotheses are necessary to arrive at accurate conclusions about age differences. This article discusses the conceptual, methodological, and…
NASA Astrophysics Data System (ADS)
Reisinger, Thomas; Patel, Amil A.; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo; Smith, Henry I.; Holst, Bodil
2009-05-01
In the Poisson-spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. In this paper we report the observation of Poisson’s spot using a beam of neutral deuterium molecules. The wavelength independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson’s spot a promising candidate for applications ranging from the study of large molecule diffraction to patterning with molecules.
Wang, Guo-xiang; Wang, Hai-yan; Wang, Hu; Zhang, Zheng-yong; Liu, Jun
2016-03-01
Recognizing the age of Chinese liquor rapidly and exactly is an important and difficult research problem in the field of liquor analysis, and is also of great significance to the healthy development of the liquor industry and the protection of the legitimate rights and interests of consumers. Spectroscopy together with pattern recognition technology is a preferred method for rapid identification of wine quality, among which Raman spectroscopy is promising because it is little affected by water and requires little or no sample pretreatment. In this paper, Raman spectra and support vector regression (SVR) are therefore used to recognize different ages of liquor and different storing times of liquor of the same age. The innovation of this paper is mainly reflected in three aspects. First, the application of Raman spectroscopy to liquor analysis has rarely been reported to date. Second, the focus on recognizing liquor age, whereas most studies concentrate on specific components of liquor, and studies combined with pattern recognition methods focus more on identifying brands or different types of base wine. Third, the use of a regression analysis framework, which can be applied not only to identify liquors of different ages but also to analyze different storing times, which has theoretical and practical significance for the research and quality control of liquor. Three kinds of experiments are conducted in this paper. Firstly, SVR is used to recognize different ages (5, 8, 16, and 26 years) of Gujing liquor; secondly, SVR is used to classify the storing time of the 8-year liquor; thirdly, a certain group of training data is deleted from the training set and put into the test set to simulate the actual situation of liquor age recognition. Results show that the SVR model has good training and prediction performance in these experiments, performing better than other non-linear regression methods.
Reboussin, Beth A; Preisser, John S; Song, Eun-Young; Wolfson, Mark
2012-07-01
Under-age drinking is an enormous public health issue in the USA. Evidence that community-level structures may impact under-age drinking has led to a proliferation of efforts to change the environment surrounding the use of alcohol. Although the focus of these efforts is to reduce drinking by individual youths, environmental interventions are typically implemented at the community level, with entire communities randomized to the same intervention condition. A distinct feature of these trials is the tendency of the behaviours of individuals residing in the same community to be more alike than those of individuals residing in different communities, which is herein called 'clustering'. Statistical analyses and sample size calculations must account for this clustering to avoid type I errors and to ensure an appropriately powered trial. Clustering itself may also be of scientific interest. We consider the alternating logistic regressions procedure within the population-averaged modelling framework to estimate the effect of a law enforcement intervention on the prevalence of under-age drinking behaviours while modelling the clustering at multiple levels, e.g. within communities and within neighbourhoods nested within communities, by using pairwise odds ratios. We then derive sample size formulae for estimating intervention effects when planning a post-test-only or repeated cross-sectional community-randomized trial using the alternating logistic regressions procedure. PMID:24347839
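The paper derives sample size formulae specific to the alternating logistic regressions procedure; those are not reproduced here. A commonly used first approximation for community-randomized trials, however, is to inflate the independent-sample size by the design effect 1 + (m − 1)ρ, where m is the cluster size and ρ the intracluster correlation. A sketch of that standard approximation (example proportions are hypothetical):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

def clustered_n_per_arm(p1, p2, cluster_size, icc):
    """Inflate the independent-sample size by the design effect 1 + (m - 1) * rho."""
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_per_arm(p1, p2) * deff)
```

Even a modest ICC matters: with communities of 50 youths and ρ = 0.02, the design effect is 1.98, nearly doubling the required sample, which is why ignoring clustering inflates type I error.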
Manchia, Mirko; Zai, Clement C; Squassina, Alessio; Vincent, John B; De Luca, Vincenzo; Kennedy, James L
2010-09-01
Bipolar Disorder (BPD) is a complex psychiatric disease with a relevant underlying genetic basis. HTR2A T102C, HTR2C Cys23Ser, SLC6A4 5-HTTLPR and rs25531 polymorphisms were genotyped in 230 BPD patients and entered as covariates in a mixture regression model of age at onset (AAO). The 5-HTTLPR polymorphism was associated with the early-onset component under recessive and additive models. HTR2A T102C, HTR2C Cys23Ser and 5-HTTLPR interaction terms were associated with the early-onset component under dominant, recessive and additive models. These findings suggest a role for genes coding for elements of the serotonergic system in influencing the AAO in BPD. PMID:20452754
Allodji, Rodrigue S; Thiébaut, Anne C M; Leuraud, Klervi; Rage, Estelle; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques
2012-12-30
A broad variety of methods for measurement error (ME) correction have been developed, but these methods have rarely been applied possibly because their ability to correct ME is poorly understood. We carried out a simulation study to assess the performance of three error-correction methods: two variants of regression calibration (the substitution method and the estimation calibration method) and the simulation extrapolation (SIMEX) method. Features of the simulated cohorts were borrowed from the French Uranium Miners' Cohort in which exposure to radon had been documented from 1946 to 1999. In the absence of ME correction, we observed a severe attenuation of the true effect of radon exposure, with a negative relative bias of the order of 60% on the excess relative risk of lung cancer death. In the main scenario considered, that is, when ME characteristics previously determined as most plausible from the French Uranium Miners' Cohort were used both to generate exposure data and to correct for ME at the analysis stage, all three error-correction methods showed a noticeable but partial reduction of the attenuation bias, with a slight advantage for the SIMEX method. However, the performance of the three correction methods highly depended on the accurate determination of the characteristics of ME. In particular, we encountered severe overestimation in some scenarios with the SIMEX method, and we observed lack of correction with the three methods in some other scenarios. For illustration, we also applied and compared the proposed methods on the real data set from the French Uranium Miners' Cohort study. PMID:22996087
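The SIMEX idea evaluated above, deliberately adding extra measurement error at increasing levels and extrapolating the resulting estimates back to the no-error case, can be sketched on a toy linear model with known error variance. Everything below is simulated and much simpler than the cohort's excess-relative-risk setting:

```python
import numpy as np

rng = np.random.default_rng(3)

# True model: y = beta * x + noise, but x is observed with classical error u
n, beta_true, sigma_u2 = 5000, 1.0, 0.5
x = rng.normal(0, 1, n)
y = beta_true * x + rng.normal(0, 0.5, n)
w = x + rng.normal(0, np.sqrt(sigma_u2), n)   # error-prone measurement

def slope(x_, y_):
    return np.cov(x_, y_)[0, 1] / np.var(x_, ddof=1)

naive = slope(w, y)   # attenuated toward zero by roughly 1 / (1 + sigma_u2)

# SIMEX: add extra error at levels zeta, then extrapolate back to zeta = -1
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for z in zetas:
    sims = [slope(w + rng.normal(0, np.sqrt(z * sigma_u2), n), y) for _ in range(50)]
    est.append(np.mean(sims))
coef = np.polyfit(zetas, est, 2)       # quadratic extrapolant
simex = np.polyval(coef, -1.0)
```

The quadratic extrapolation recovers most, but not all, of the attenuation, which matches the study's observation that the corrections were noticeable but partial and sensitive to how well the error structure is specified.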
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-09-01
The Central Limit Theorem (CLT) and Extreme Value Theory (EVT) study, respectively, the stochastic limit-laws of sums and maxima of sequences of independent and identically distributed (i.i.d.) random variables via an affine scaling scheme. In this research we study the stochastic limit-laws of populations of i.i.d. random variables via nonlinear scaling schemes. The stochastic population-limits obtained are fractal Poisson processes which are statistically self-similar with respect to the scaling scheme applied, and which are characterized by two elemental structures: (i) a universal power-law structure common to all limits, and independent of the scaling scheme applied; (ii) a specific structure contingent on the scaling scheme applied. The sum-projection and the maximum-projection of the population-limits obtained are generalizations of the classic CLT and EVT results - extending them from affine to general nonlinear scaling schemes.
Vavvas, Demetrios G.; Daniels, Anthony B.; Kapsala, Zoi G.; Goldfarb, Jeremy W.; Ganotakis, Emmanuel; Loewenstein, John I.; Young, Lucy H.; Gragoudas, Evangelos S.; Eliott, Dean; Kim, Ivana K.; Tsilimbaris, Miltiadis K.; Miller, Joan W.
2016-01-01
Importance Age-related macular degeneration (AMD) remains the leading cause of blindness in developed countries, and affects more than 150 million worldwide. Despite effective anti-angiogenic therapies for the less prevalent neovascular form of AMD, treatments are lacking for the more prevalent dry form. Similarities in risk factors and pathogenesis between AMD and atherosclerosis have led investigators to study the effects of statins on AMD incidence and progression with mixed results. A limitation of these studies has been the heterogeneity of AMD disease and the lack of standardization in statin dosage. Objective We were interested in studying the effects of high-dose statins, similar to those showing regression of atherosclerotic plaques, in AMD. Design Pilot multicenter open-label prospective clinical study of 26 patients with diagnosis of AMD and the presence of many large, soft drusenoid deposits. Patients received 80 mg of atorvastatin daily and were monitored at baseline and every 3 months with complete ophthalmologic exam, best corrected visual acuity (VA), fundus photographs, optical coherence tomography (OCT), and blood work (AST, ALT, CPK, total cholesterol, TSH, creatinine, as well as a pregnancy test for premenopausal women). Results Twenty-three subjects completed a minimum follow-up of 12 months. High-dose atorvastatin resulted in regression of drusen deposits associated with vision gain (+ 3.3 letters, p = 0.06) in 10 patients. No subjects progressed to advanced neovascular AMD. Conclusions High-dose statins may result in resolution of drusenoid pigment epithelial detachments (PEDs) and improvement in VA, without atrophy or neovascularization in a high-risk subgroup of AMD patients. Confirmation from larger studies is warranted. PMID:27077128
Sukits, Alison L.; McCrory, Jean L.; Cham, Rakié
2016-01-01
Chambers, April J; Sukits, Alison L; McCrory, Jean L; Cham, Rakie
2011-08-01
Age, obesity, and gender can have a significant impact on the anthropometrics of adults aged 65 and older. The aim of this study was to investigate differences in body segment parameters derived using two methods: (1) a dual-energy x-ray absorptiometry (DXA) subject-specific method (Chambers et al., 2010) and (2) traditional regression models (de Leva, 1996). The impact of aging, gender, and obesity on the potential differences between these methods was examined. Eighty-three healthy older adults were recruited for participation. Participants underwent a whole-body DXA scan (Hologic QDR 1000/W). Mass, length, center of mass, and radius of gyration were determined for each segment. In addition, traditional regressions were used to estimate these parameters (de Leva, 1996). A mixed linear regression model was performed (α = 0.05). Method type was significant in every variable of interest except forearm segment mass. The obesity and gender differences that we observed translate into differences associated with using traditional regressions to predict anthropometric variables in an aging population. Our data point to a need to consider age, obesity, and gender when utilizing anthropometric data sets and to develop regression models that accurately predict body segment parameters in the geriatric population, considering gender and obesity. PMID:21844608
Poisson Spot with Magnetic Levitation
ERIC Educational Resources Information Center
Hoover, Matthew; Everhart, Michael; D'Arruda, Jose
2010-01-01
In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.
Poisson spot with magnetic levitation
NASA Astrophysics Data System (ADS)
Hoover, Matthew; Everhart, Michael; D'Arruda, Jose
2010-02-01
In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.
Darmawan, M F; Yusuf, Suhaila M; Abdul Kadir, M R; Haron, H
2015-03-01
Age estimation is used in forensic anthropology to help identify human remains and living persons. However, estimation methods tend to be population-specific. This paper analyzed age estimation using twelve regression models applied to X-ray images of the left hand taken from an Asian data set of subjects under the age of 19. All nineteen bones of the left hand were measured using free image software, and the statistical analyses were performed using SPSS. Two approaches to age determination were examined in this study: a single-bone method and an all-bones method. For the single-bone method, the S-curve regression model was found to have the highest R-square value, using the second metacarpal for males and the third proximal phalanx for females. For age estimation using a single bone, the fifth metacarpal for males and the fifth proximal phalanx for females can be used, as they gave the lowest mean square error (MSE). To conclude, multiple linear regression is the best technique for age estimation when all bones are available; if not, S-curve regression can be used with the single-bone method. PMID:25456051
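The single-bone comparison above pits an S-curve model against alternatives. In SPSS's curve-estimation terminology the S-curve is age = exp(b₀ + b₁/length), which can be fitted by ordinary least squares after a log transform. The sketch below compares it with a straight line on purely synthetic bone-length/age data (the coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in: metacarpal length (mm) vs age (years), growth flattening out
length = rng.uniform(20, 70, 300)
age = np.exp(3.4 - 35.0 / length) * np.exp(rng.normal(0, 0.05, 300))

# S-curve model: age = exp(b0 + b1 / length), linearized via the log transform
b1, b0 = np.polyfit(1.0 / length, np.log(age), 1)
pred_s = np.exp(b0 + b1 / length)

# Straight-line model for comparison
c1, c0 = np.polyfit(length, age, 1)
pred_lin = c1 * length + c0

mse_s = np.mean((age - pred_s) ** 2)
mse_lin = np.mean((age - pred_lin) ** 2)
```

When growth genuinely decelerates with size, the S-curve captures the curvature that a straight line misses, which is one way a model comparison by MSE, like the study's, can come out in its favor.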
Regression versus No Regression in the Autistic Disorder: Developmental Trajectories
ERIC Educational Resources Information Center
Bernabei, P.; Cerquiglini, A.; Cortesi, F.; D' Ardia, C.
2007-01-01
Developmental regression is a complex phenomenon which occurs in 20-49% of the autistic population. The aim of the study was to assess possible differences in the development of regressed and non-regressed autistic preschoolers. We longitudinally studied 40 autistic children (18 regressed, 22 non-regressed) aged 2-6 years. The following developmental…
How does Poisson kriging compare to the popular BYM model for mapping disease risks?
Goovaerts, Pierre; Gebreab, Samson
2008-01-01
Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: 1) it is easier to implement and less CPU intensive, and 2) it accounts for the size and shape of geographical units, avoiding the limitations of conditional auto-regressive (CAR) models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. Both approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results Besag, York and Mollie's (BYM) model and Poisson kriging (point and area-to-area implementations) were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) state of Indiana that consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area) has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models). Differences between methods are particularly pronounced in the Western US dataset: BYM model yields smoother risk surface and prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. The benefit of area-to-area Poisson kriging increases as the county geography becomes more
Relaxed Poisson cure rate models.
Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N
2016-03-01
The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. PMID:26686485
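In the promotion cure rate model the cure fraction is the probability of zero altered cells; a sketch of how the fractional Poisson count changes it, following Laskin's fractional Poisson distribution (our reading of the construction; the paper's exact parametrization may differ):

```latex
% Standard Poisson cure rate: N \sim \mathrm{Poisson}(\theta)
p_0 = P(N = 0) = e^{-\theta}
% Fractional Poisson count with index 0 < \alpha \le 1:
p_0 = P(N = 0) = E_\alpha(-\theta), \qquad
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}
% E_\alpha is the Mittag-Leffler relaxation function; \alpha = 1 recovers e^{-\theta}.
```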
Mala, A; Ravichandran, B; Raghavan, S; Rajmohan, H R
2010-08-01
There are only a few studies of multinomial logistic regression in benzene-exposed occupational groups. A study was carried out to assess the relationship between benzene concentration and trans,trans-muconic acid (t,t-MA), a biomarker in urine samples from petrol filling workers. A total of 117 workers in this occupation were selected for the current study. Logistic regression (LR) is a common statistical technique used to predict the likelihood of categorical, binary or dichotomous outcome variables. Multinomial logistic regression equations were used to model the relationship between benzene concentration and t,t-MA. The results showed a significant correlation between benzene and t,t-MA among the petrol fillers. Prediction equations were estimated from the workers' physical characteristics, viz. age, experience in years and job category. Interestingly, no significant difference was observed for experience in years. Petrol fillers and cashiers at higher occupational risk were in the age groups ≤24 and 25-34 years. Among the petrol fillers, t,t-MA levels exceeding the ACGIH TWA-TLV were significantly more common. This study demonstrated that multinomial logistic regression is an effective model for profiling the greatest risk in a benzene-exposed group from different explanatory variables. PMID:21120078
Graded geometry and Poisson reduction
Cattaneo, A. S.; Zambon, M.
2009-02-02
The main result extends the Marsden-Ratiu reduction theorem in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof. Further, we provide an alternative algebraic proof for the main result.
Ma, Lu; Yan, Xuedong
2014-06-01
This study seeks to inspect the nonparametric characteristics connecting the age of the driver to the relative risk of being an at-fault vehicle, in order to discover a more precise and smooth pattern of age impact, which has commonly been neglected in past studies. Records of drivers in two-vehicle rear-end collisions are selected from the general estimates system (GES) 2011 dataset. These extracted observations in fact constitute inherently matched driver pairs under certain matching variables including weather conditions, pavement conditions and road geometry design characteristics that are shared by pairs of drivers in rear-end accidents. The introduced data structure is able to guarantee that the variance of the response variable will not depend on the matching variables and hence provides a high power of statistical modeling. The estimation results exhibit a smooth cubic spline function for examining the nonlinear relationship between the age of the driver and the log odds of being at fault in a rear-end accident. The results are presented with respect to the main effect of age, the interaction effect between age and sex, and the effects of age under different scenarios of pre-crash actions by the leading vehicle. Compared to the conventional specification in which age is categorized into several predefined groups, the proposed method is more flexible and able to produce quantitatively explicit results. First, it confirms the U-shaped pattern of the age effect, and further shows that the risks of young and old drivers change rapidly with age. Second, the interaction effects between age and sex show that female and male drivers behave differently in rear-end accidents. Third, it is found that the pattern of age impact varies according to the type of pre-crash actions exhibited by the leading vehicle. PMID:24642249
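The idea of replacing predefined age groups with a smooth function of age can be sketched as follows (synthetic data and hypothetical coefficients, not the GES estimates; the paper itself uses cubic splines within a matched-pair logistic model):

```python
import numpy as np

# Illustrative sketch: a cubic fit of the log odds of at-fault
# involvement against driver age, mimicking the U-shaped pattern
# the study reports. The generating curve is hypothetical.
rng = np.random.default_rng(0)
age = np.linspace(16, 90, 200)
true_logodds = 0.002 * (age - 45) ** 2 - 1.0        # U-shape, lowest in middle age
obs = true_logodds + rng.normal(scale=0.1, size=age.size)

coeffs = np.polyfit(age, obs, deg=3)                 # smooth cubic, not age bins
fitted = np.polyval(coeffs, age)

# risk is high at both age extremes and lowest in middle age
print(fitted[0] > fitted[100], fitted[-1] > fitted[100])
```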
Hajian-Tilaki, Karimollah; Heidari, Behzad
2015-01-01
Background and Objectives: The biological variation of body mass index (BMI) and waist circumference (WC) with age may vary by gender. The objective of this study was to investigate the functional relationship of anthropometric measures with age and sex. Methods: The data were collected from a population-based cross-sectional study of 1800 men and 1800 women aged 20-70 years in northern Iran. Linear and quadratic effects of age on weight, height, BMI, WC and WHR were tested statistically, and the interaction effect of age and gender was also formally tested. Results: The quadratic model (age²) provided a significantly better fit than the simple linear model for weight, BMI and WC. The quadratic form explained a greater variance in BMI, WC and weight for women than for men (for BMI, R² = 0.18, p < 0.001 vs R² = 0.059, p < 0.001; for WC, R² = 0.17, p < 0.001 vs R² = 0.047, p < 0.001). For height, there was an inverse linear relationship with age, while for WHR a positive linear association was apparent; the quadratic form did not improve the fit. Conclusion: These findings indicate different patterns of weight gain, fat accumulation (visceral adiposity) and loss of muscle mass between men and women in early and middle adulthood. PMID:26644878
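The linear-versus-quadratic comparison reported here can be reproduced in miniature (synthetic data with hypothetical coefficients, not the Iranian survey data); a nested quadratic fit can never have a lower R² than the linear one, so the question is how much it adds:

```python
import numpy as np

# Synthetic BMI-vs-age data with a mild concave age effect (hypothetical).
rng = np.random.default_rng(1)
age = rng.uniform(20, 70, 500)
bmi = 18 + 0.35 * age - 0.003 * age**2 + rng.normal(scale=1.5, size=age.size)

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_lin = r_squared(bmi, np.polyval(np.polyfit(age, bmi, 1), age))
r2_quad = r_squared(bmi, np.polyval(np.polyfit(age, bmi, 2), age))
print(round(r2_lin, 3), round(r2_quad, 3))   # quadratic fits at least as well
```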
Calculation of the Poisson cumulative distribution function
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.
1990-01-01
A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
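A minimal sketch of a numerically stable Poisson cdf, using log-space summation, one standard way to avoid the underflow/overflow the abstract mentions (the paper's exact algorithm may differ):

```python
import math

def poisson_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam), summed in log space so that
    neither e^-lam nor lam^k / k! is ever formed directly."""
    # log of each pmf term: -lam + k*log(lam) - log(k!)
    log_terms = [-lam + k * math.log(lam) - math.lgamma(k + 1)
                 for k in range(n + 1)]
    m = max(log_terms)                       # log-sum-exp trick
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

print(poisson_cdf(5, 3.0))        # ≈ 0.9161
print(poisson_cdf(1000, 1000.0))  # ≈ 0.5, no under/overflow at large lam
```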
A generalized gyrokinetic Poisson solver
Lin, Z.; Lee, W.W.
1995-03-01
A generalized gyrokinetic Poisson solver has been developed, which employs local operations in the configuration space to compute the polarization density response. The new technique is based on the actual physical process of gyrophase-averaging. It is useful for nonlocal simulations using general geometry equilibrium. Since it utilizes local operations rather than the global ones such as FFT, the new method is most amenable to massively parallel algorithms.
Freitas, M S; Freitas, L S; Weber, T; Yamaki, M; Cantão, M E; Peixoto, J O; Ledur, M C
2015-10-01
The effects of modified single-step genomic best linear unbiased prediction (ssGBLUP) iterations on GEBV and SNP effects were investigated using 85,388 age at 100 kg phenotypes from Landrace pure line animals of the BRF SA breeding program, off-tested between 2002 and 2013. Pedigree data comprised animals born between 1999 and 2013. A total of 1,068 animals were assigned to the training population, all of which had genotypes, original and corrected age at 100 kg phenotypes, and weighted deregressed proof records. A total of 100 genotyped animals with high-accuracy age at 100 kg estimated breeding values were assigned to the validation population. After applying the quality control workflow, a set of 41,042 SNP was used for the analysis. Standard and modified ssGBLUP, BayesCπ, and Bayesian Lasso were compared, and their predictive abilities were assessed by approximate true and GEBV correlations. Modified ssGBLUP iteration effects on SNP estimates and GEBV were relevant: assigning differential weights and shrinkage caused important losses in ssGBLUP predictive ability for age at 100 kg GEBV. Even though ssGBLUP accuracy can equal or exceed that of the compared Bayesian methods, additional gains can be obtained by correctly identifying the number of iterations required for best ssGBLUP performance. PMID:26523560
Gonzalez, Laura Diez; Vera-Badillo, Francisco E.; Tibau, Ariadna; Goldstein, Robyn; Šeruga, Boštjan; Srikanthan, Amirrtha; Pandiella, Atanasio; Amir, Eitan; Ocana, Alberto
2016-01-01
Background Germline mutations in the BRCA1 and BRCA2 genes are the most frequent known hereditary causes of familial breast cancer. Little is known about the interaction of age at diagnosis, estrogen receptor (ER) and progesterone receptor (PgR) expression and outcomes in patients with BRCA1 or BRCA2 mutations. Methods A PubMed search identified publications exploring the association between BRCA mutations and clinical outcome. Hazard ratios (HR) for overall survival were extracted from multivariable analyses. Hazard ratios were weighted and pooled using generic inverse-variance and random-effect modeling. Meta-regression weighted by total study sample size was conducted to explore the influence of age, ER and PgR expression on the association between BRCA mutations and overall survival. Results A total of 16 studies comprising 10,180 patients were included in the analyses. BRCA mutations were not associated with worse overall survival (HR 1.06, 95% CI 0.84–1.34, p = 0.61). A similar finding was observed when evaluating the influence of BRCA1 and BRCA2 mutations on overall survival independently (BRCA1: HR 1.20, 95% CI 0.89–1.61, p = 0.24; BRCA2: HR 1.01, 95% CI 0.80–1.27, p = 0.95). Meta-regression identified an inverse association between ER expression and overall survival (β = -0.75, p = 0.02) in BRCA1 mutation carriers but no association with age or PgR expression (β = -0.45, p = 0.23 and β = 0.02, p = 0.97, respectively). No association was found for BRCA2 mutation status and age, ER, or PgR expression. Conclusion ER-expression appears to be an effect modifier in patients with BRCA1 mutations, but not among those with BRCA2 mutations. PMID:27149669
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
Logistic regression is originally intended to explain the relationship between the probability of an event and a set of covariates. The model's coefficients can be interpreted via the odds and the odds ratio, which are presented in the introduction of the chapter. When the observations are obtained individually, we speak of binary logistic regression; when they are grouped, the logistic regression is said to be binomial. In our presentation we mainly focus on the binary case. For statistical inference the main tool is the maximum likelihood methodology: we present the Wald, Rao (score) and likelihood ratio results and their use to compare nested models. The problems we deal with are essentially the same as in multiple linear regression: testing a global effect, testing individual effects, selecting variables to build a model, measuring the goodness of fit of the model, predicting new values… . The methods are demonstrated on data sets using R. Finally, we briefly consider the binomial case and the situation where we are interested in several events, that is, polytomous (multinomial) logistic regression and the particular case of ordinal logistic regression.
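The chapter's examples use R; as a language-neutral illustration, here is binary logistic regression fitted by Newton-Raphson, the maximum-likelihood machinery described above, on synthetic data (coefficients and sample size are arbitrary choices for the sketch):

```python
import numpy as np

# Synthetic binary data from a known logistic model.
rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])          # intercept + one covariate
true_beta = np.array([-0.5, 1.0])
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

# Newton-Raphson on the log-likelihood (equivalently, IRLS).
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    grad = X.T @ (y - mu)                     # score vector
    hess = X.T @ (X * W[:, None])             # observed information
    beta = beta + np.linalg.solve(hess, grad)

odds_ratio = np.exp(beta[1])                  # effect of a one-unit increase in x
print(beta, odds_ratio)
```

The odds ratio is the multiplicative change in the odds of the event per unit increase in the covariate, the interpretation the chapter builds on.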
NGUYEN, QUAN DONG; SHAH, SYED MAHMOOD; HAFIZ, GULNAR; DO, DIANA V.; HALLER, JULIA A.; PILI, ROBERTO; ZIMMER-GALLER, INGRID E.; JANJUA, KASHIF; SYMONS, R. C. ANDREW; CAMPOCHIARO, PETER A.
2016-01-01
PURPOSE To investigate the safety, tolerability, and bioactivity of intravenous infusions of bevacizumab in patients with choroidal neovascularization (CNV) attributable to causes other than age-related macular degeneration. DESIGN Nonrandomized clinical trial. METHODS Ten patients with CNV received infusions of 5 mg/kg of bevacizumab. The primary efficacy outcome measure was change in visual acuity (VA; Early Treatment Diabetic Retinopathy Study letters read at 4 meters) at 24 weeks and secondary measures were changes from baseline in excess foveal thickness (center subfield thickness), area of fluorescein leakage, and area of CNV. RESULTS Infusions were well tolerated and there were no ocular or systemic adverse events. At baseline, median VA was 25.5 letters read at 4 meters (20/80) and median foveal thickness was 346 μm. At the primary endpoint (24 weeks), median VA was 48.5 letters (20/32), representing four lines of improvement from baseline (P = .005), median foveal thickness was 248 μm representing a 72% reduction in excess foveal thickness (P = .007). Four of nine patients had complete elimination of fluorescein leakage, three had near complete elimination (reductions of 91%, 88%, and 87%), two had modest reductions, and one had no reduction. All patients except one showed a reduction in area of CNV with a median reduction of 43%. CONCLUSIONS Despite the small number of patients studied, the marked improvement in VA accompanied by prominent reductions in foveal thickness, fluorescein leakage, and area of CNV suggest a beneficial effect. It may be worthwhile to consider further evaluation of systemic bevacizumab in young patients with CNV. PMID:18054887
Tunable negative Poisson's ratio in hydrogenated graphene.
Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming
2016-09-21
We perform molecular dynamics simulations to investigate the effect of hydrogenation on the Poisson's ratio of graphene. It is found that the value of the Poisson's ratio of graphene can be effectively tuned from positive to negative by varying the percentage of hydrogenation. Specifically, the Poisson's ratio decreases with an increase in the percentage of hydrogenation, and reaches a minimum value of -0.04 when the percentage of hydrogenation is about 50%. The Poisson's ratio starts to increase upon a further increase of the percentage of hydrogenation. The appearance of a minimum negative Poisson's ratio in the hydrogenated graphene is attributed to the suppression of the hydrogenation-induced ripples during the stretching of graphene. Our results demonstrate that hydrogenation is a valuable approach for tuning the Poisson's ratio from positive to negative in graphene. PMID:27536878
Holmes, Tyson H; He, Xiao-Song
2016-10-01
Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n,1
Yörük, Barış K; Yörük, Ceren Ertan
2011-07-01
This paper uses a regression discontinuity design to estimate the impact of the minimum legal drinking age laws on alcohol consumption, smoking, and marijuana use among young adults. Using data from the National Longitudinal Survey of Youth (1997 Cohort), we find that granting legal access to alcohol at age 21 leads to an increase in several measures of alcohol consumption, including an up to a 13 percentage point increase in the probability of drinking. Furthermore, this effect is robust under several different parametric and non-parametric models. We also find some evidence that the discrete jump in alcohol consumption at age 21 has negative spillover effects on marijuana use but does not affect the smoking habits of young adults. Our results indicate that although the change in alcohol consumption habits of young adults following their 21st birthday is less severe than previously known, policies that are designed to reduce drinking among young adults may have desirable impacts and can create public health benefits. PMID:21719131
CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was
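The rescaling idea described in the abstract can be sketched as follows (Python rather than the original C, with a simplified log-space scale factor rather than CUMPOIS's exact bookkeeping):

```python
import math

def cumpois(n, lam):
    """P(X <= n) for X ~ Poisson(lam) via direct summation with the
    recurrence t_{k+1} = t_k * lam/(k+1), rescaling the terms and the
    partial sum whenever overflow threatens, so that the exponential
    factor never under- or overflows."""
    log_scale = -lam          # log of the factored-out scale, starts at log(e^-lam)
    term = 1.0                # scaled term: actual term = term * exp(log_scale)
    total = 1.0               # scaled partial sum
    for k in range(n):
        term *= lam / (k + 1)
        total += term
        if total > 1e200:     # rescale before the sum overflows
            log_scale += math.log(total)
            term /= total
            total = 1.0
    return math.exp(log_scale + math.log(total))

print(cumpois(5, 3.0))        # ≈ 0.9161, matches direct summation for small lam
```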
Alternative Derivations for the Poisson Integral Formula
ERIC Educational Resources Information Center
Chen, J. T.; Wu, C. S.
2006-01-01
The Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in series form through the direct BEM, free of the concept of an image point, by using the null-field integral equation in conjunction with degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…
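For reference, the classical Poisson integral formula for a disk of radius R, which the paper rederives via the null-field integral equation and degenerate kernels:

```latex
u(r, \theta) = \frac{1}{2\pi} \int_{0}^{2\pi}
\frac{R^2 - r^2}{R^2 - 2Rr\cos(\theta - \phi) + r^2}\, u(R, \phi)\, d\phi,
\qquad 0 \le r < R
```

This recovers a harmonic function inside the disk from its boundary values; the fraction is the Poisson kernel whose series derivation the paper addresses.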
Metal [100] Nanowires with Negative Poisson's Ratio.
Ho, Duc Tam; Kwon, Soon-Yong; Kim, Sung Youb
2016-01-01
When materials are stretched, lateral contraction is commonly observed. This is because Poisson's ratio, the quantity that describes the relationship between lateral strain and applied strain, is positive for nearly all materials. Some structures and materials have been reported to have a negative Poisson's ratio; however, most of them are at the macroscale, and re-entrant structures and rigid rotating units are the main mechanisms for their negative Poisson's ratio behavior. Here, with numerical and theoretical evidence, we show that metal [100] nanowires with asymmetric cross-sections such as rectangles or ellipses can exhibit negative Poisson's ratio behavior. Furthermore, this behavior can be further improved by introducing a hole inside the asymmetric nanowires. We show that the surface effect, which induces asymmetric stresses inside the nanowires, is the main origin of this superior property. PMID:27282358
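The sign convention at issue, for readers unfamiliar with it (the standard definition, not specific to this paper):

```latex
\nu = -\frac{\varepsilon_{\text{lateral}}}{\varepsilon_{\text{axial}}}
% \nu > 0: lateral contraction under stretching (most materials)
% \nu < 0: lateral expansion under stretching (auxetic behavior)
```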
Huang, Dong; Cabral, Ricardo; De la Torre, Fernando
2016-02-01
Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features ( X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
Supervised Gamma Process Poisson Factorization
Anderson, Dylan Zachary
2015-05-01
This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.
Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training
ERIC Educational Resources Information Center
Baschera, Gian-Marco; Gross, Markus
2010-01-01
We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…
Negative Poisson's ratio materials via isotropic interactions.
Rechtsman, Mikael C; Stillinger, Frank H; Torquato, Salvatore
2008-08-22
We show that under tension a classical many-body system with only isotropic pair interactions in a crystalline state can, counterintuitively, have a negative Poisson's ratio, or auxetic behavior. We derive the conditions under which the triangular lattice in two dimensions and lattices with cubic symmetry in three dimensions exhibit a negative Poisson's ratio. In the former case, the simple Lennard-Jones potential can give rise to auxetic behavior. In the latter case, a negative Poisson's ratio can be exhibited even when the material is constrained to be elastically isotropic. PMID:18764632
Poisson's ratio of high-performance concrete
Persson, B.
1999-10-01
This article outlines an experimental and numerical study on Poisson's ratio of high-performance concrete subjected to air or sealed curing. Eight qualities of concrete (about 100 cylinders and 900 cubes) were studied, both young and in the mature state. The concretes contained between 5 and 10% silica fume, and two concretes in addition contained air-entrainment. Parallel studies of strength and internal relative humidity were carried out. The results indicate that Poisson's ratio of high-performance concrete is slightly smaller than that of normal-strength concrete. Analyses of the influence of maturity, type of aggregate, and moisture on Poisson's ratio are also presented. The project was carried out from 1991 to 1998.
Magnetostrictive contribution to Poisson ratio of galfenol
NASA Astrophysics Data System (ADS)
Paes, V. Z. C.; Mosca, D. H.
2013-09-01
In this work we present a detailed study of the magnetostrictive contribution to the Poisson ratio of samples under applied mechanical stress. Magnetic contributions to strain and Poisson ratio for cubic materials were derived by accounting for elastic and magneto-elastic anisotropy contributions. We apply our theoretical results to a material of interest in magnetomechanics, namely galfenol (Fe1-xGax). Our results show that there is a non-negligible magnetic contribution in the linear portion of the stress-strain curve. The rotation of the magnetization towards the [110] crystallographic direction upon application of mechanical stress leads to auxetic behavior, i.e., a Poisson ratio with negative values. This magnetic contribution to auxetic behavior provides novel insight for theoretical and experimental developments on materials that display unusual mechanical properties.
A new inverse regression model applied to radiation biodosimetry
Higueras, Manuel; Puig, Pedro; Ainsbury, Elizabeth A.; Rothkamm, Kai
2015-01-01
Biological dosimetry based on chromosome aberration scoring in peripheral blood lymphocytes enables timely assessment of the ionizing radiation dose absorbed by an individual. Here, new Bayesian-type count data inverse regression methods are introduced for situations where responses are Poisson or two-parameter compound Poisson distributed. Our Poisson models are calculated in a closed form, by means of Hermite and negative binomial (NB) distributions. For compound Poisson responses, complete and simplified models are provided. The simplified models are also expressible in a closed form and involve the use of compound Hermite and compound NB distributions. Three examples of applications are given that demonstrate the usefulness of these methodologies in cytogenetic radiation biodosimetry and in radiotherapy. We provide R and SAS codes which reproduce these examples. PMID:25663804
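Outside the Bayesian machinery of the paper, the basic inverse-regression step in cytogenetic biodosimetry is inverting a linear-quadratic calibration curve for the aberration yield; a minimal frequentist sketch with hypothetical calibration coefficients (not the paper's models):

```python
import math

def invert_dose(y_rate, c, alpha, beta):
    """Solve y = c + alpha*D + beta*D**2 for the dose D >= 0,
    given an observed aberration yield y_rate (e.g. dicentrics/cell)."""
    disc = alpha ** 2 + 4 * beta * (y_rate - c)
    if disc < 0:
        raise ValueError("observed yield below the calibration background")
    return (-alpha + math.sqrt(disc)) / (2 * beta)

# hypothetical calibration: y = 0.001 + 0.02*D + 0.06*D^2, D in Gy
d = invert_dose(0.35, 0.001, 0.02, 0.06)
print(round(d, 2))
```

The Bayesian methods in the paper additionally propagate the count-data uncertainty (Poisson or compound Poisson) through this inversion rather than treating the yield as fixed.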
A new bivariate negative binomial regression model
NASA Astrophysics Data System (ADS)
Faroughi, Pouya; Ismail, Noriszura
2014-12-01
This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check overdispersion and goodness-of-fit of the model. Application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicated that BNB-1 regression fits better than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.
Partial covariate adjusted regression
Şentürk, Damla; Nguyen, Danh V.
2008-01-01
Covariate adjusted regression (CAR) is a recently proposed adjustment method for regression analysis where both the response and predictors are not directly observed (Şentürk and Müller, 2005). The available data has been distorted by unknown functions of an observable confounding covariate. CAR provides consistent estimators for the coefficients of the regression between the variables of interest, adjusted for the confounder. We develop a broader class of partial covariate adjusted regression (PCAR) models to accommodate both distorted and undistorted (adjusted/unadjusted) predictors. The PCAR model allows for unadjusted predictors, such as age, gender and demographic variables, which are common in the analysis of biomedical and epidemiological data. The available estimation and inference procedures for CAR are shown to be invalid for the proposed PCAR model. We propose new estimators and develop new inference tools for the more general PCAR setting. In particular, we establish the asymptotic normality of the proposed estimators and propose consistent estimators of their asymptotic variances. Finite sample properties of the proposed estimators are investigated using simulation studies and the method is also illustrated with a Pima Indians diabetes data set. PMID:20126296
On the Burgers-Poisson equation
NASA Astrophysics Data System (ADS)
Grunert, K.; Nguyen, Khai T.
2016-09-01
In this paper, we prove the existence and uniqueness of weak entropy solutions to the Burgers-Poisson equation for initial data in L^1(R). In addition, an Oleinik-type estimate is established and some criteria on local smoothness and wave breaking for weak entropy solutions are provided.
Easy Demonstration of the Poisson Spot
ERIC Educational Resources Information Center
Gluck, Paul
2010-01-01
Many physics teachers have a set of slides of single, double and multiple slits to show their students the phenomena of interference and diffraction. Thomas Young's historic experiments with double slits were indeed a milestone in proving the wave nature of light. But another experiment, namely the Poisson spot, was also important historically and…
Graphical user interface for AMOS and POISSON
Swatloski, T.L.
1993-03-02
A graphical user interface (GUI) exists for building model geometry for the time-domain field code, AMOS. This GUI has recently been modified to build models and display the results of the Poisson electrostatic solver maintained by the Los Alamos Accelerator Code Group called POISSON. Included in the GUI is a 2-D graphic editor allowing interactive construction of the model geometry. Polygons may be created by entering points with the mouse, with text input, or by reading coordinates from a file. Circular arcs have recently been added. Once polygons are entered, points may be inserted, moved, or deleted. Materials can be assigned to polygons, and are represented by different colors. The unit scale may be adjusted as well as the viewport. A rectangular mesh may be generated for AMOS or a triangular mesh for POISSON. Potentials from POISSON are represented with a contour plot and the designer is able to mouse click anywhere on the model to display the potential value at that location. This was developed under the X windowing system using the Motif look and feel.
Evolutionary inference via the Poisson Indel Process.
Bouchard-Côté, Alexandre; Jordan, Michael I
2013-01-22
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.
Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S
2016-01-01
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. PMID:26747797
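The conjugate gradient idea at the core of the abstract's solver can be illustrated in miniature. The sketch below is not the authors' code: it applies plain, unpreconditioned CG to a 1-D finite-difference Poisson problem -u'' = f with homogeneous Dirichlet boundaries, whereas the solver described above is preconditioned, three-dimensional, and handles generalized dielectrics. Grid size and source term are illustrative choices.

```python
# Minimal sketch: unpreconditioned conjugate gradient applied to a 1-D
# finite-difference Poisson problem -u'' = f with u(0) = u(1) = 0.

def poisson_matvec(u, h):
    """Apply the 1-D discrete negative Laplacian (Dirichlet ends) to u."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        out[i] = (2.0 * u[i] - left - right) / h**2
    return out

def cg(b, h, tol=1e-12, max_iter=2000):
    """Conjugate gradient for the SPD system A u = b."""
    n = len(b)
    u = [0.0] * n
    r = b[:]                      # residual b - A*0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = poisson_matvec(p, h)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        u = [ui + alpha * pi for ui, pi in zip(u, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return u

n = 63
h = 1.0 / (n + 1)
f = [1.0] * n                     # constant source term
u = cg(f, h)
# For -u'' = 1 the exact solution is u(x) = x(1 - x)/2, peaking at 0.125.
residual = max(abs(ai - fi) for ai, fi in zip(poisson_matvec(u, h), f))
```

In the paper's setting the same iteration is wrapped around a generalized (variable-coefficient) Poisson operator with a preconditioner, which is what keeps the iteration count near ten.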
Brain, music, and non-Poisson renewal processes
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo
2007-06-01
In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(-(γt)^α), with 0.5 < α < 1]. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse power law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ⩽ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.
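Waiting times with the stretched-exponential survival function quoted above can be drawn by inverse-transform sampling: if U is uniform on (0, 1), then t = (-ln U)^(1/α)/γ has survival Ψ(t) = exp(-(γt)^α). The sketch below is illustrative only; the parameter values are arbitrary demo choices, not fits to the EEG or music data.

```python
# Illustrative sketch: sample waiting times whose survival function is the
# stretched exponential Psi(t) = exp(-(gamma*t)**alpha), via inverse
# transform, then check the empirical survival at one time point.
import math
import random

def stretched_exp_waiting_time(gamma, alpha, rng):
    """Inverse-transform sample for Psi(t) = exp(-(gamma*t)**alpha)."""
    u = rng.random()              # uniform on (0, 1)
    return (-math.log(u)) ** (1.0 / alpha) / gamma

rng = random.Random(42)
gamma, alpha = 1.0, 0.7          # 0.5 < alpha < 1, the range reported above
samples = [stretched_exp_waiting_time(gamma, alpha, rng) for _ in range(200000)]

t0 = 1.0
empirical = sum(1 for t in samples if t > t0) / len(samples)
theoretical = math.exp(-(gamma * t0) ** alpha)
```

The heavy μ = 1 + α < 2 power-law tail implied by the renewal character shows up as a diverging mean when α is pushed toward the lower end of the range.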
Poisson filtering of laser ranging data
NASA Technical Reports Server (NTRS)
Ricklefs, Randall L.; Shelus, Peter J.
1993-01-01
The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described.
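The core statistical idea, flagging range bins whose photon counts are improbable under a Poisson background model, can be sketched with the standard library. This is a hedged illustration of the principle only; the operational LLR/SLR filters involve considerably more machinery, and the counts and threshold here are made up.

```python
# Hedged sketch of Poisson filtering: flag range bins whose counts are
# improbable under a Poisson background hypothesis. Threshold and data
# are illustrative, not from an actual LLR/SLR pipeline.
import math

def poisson_tail(k, lam):
    """P(N >= k) for N ~ Poisson(lam), via the complementary lower sum."""
    p_below = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - p_below

def signal_bins(counts, background_rate, alpha=1e-3):
    """Indices of bins whose counts reject the background-only hypothesis."""
    return [i for i, k in enumerate(counts)
            if poisson_tail(k, background_rate) < alpha]

# Noise bins average ~2 counts per bin; bin 3 holds a strong echo.
counts = [2, 1, 3, 40, 2, 0, 4, 1]
hits = signal_bins(counts, background_rate=2.0)   # -> [3]
```

In the low-signal LLR regime the same test is what separates a handful of genuine lunar returns from a large noise background.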
Stabilities for nonisentropic Euler-Poisson equations.
Cheung, Ka Luen; Wong, Sen
2015-01-01
We establish stability and blowup results for the nonisentropic Euler-Poisson equations by the energy method. By analysing the second inertia, we show that the classical solutions of the system with attractive forces blow up in finite time in some special dimensions when the energy is negative. Moreover, we obtain stability results for the system in the cases of attractive and repulsive forces. PMID:25861676
First- and second-order Poisson spots
NASA Astrophysics Data System (ADS)
Kelly, William R.; Shirley, Eric L.; Migdall, Alan L.; Polyakov, Sergey V.; Hendrix, Kurt
2009-08-01
Although Thomas Young is generally given credit for being the first to provide evidence against Newton's corpuscular theory of light, it was Augustin Fresnel who first stated the modern theory of diffraction. We review the history surrounding Fresnel's 1818 paper and the role of the Poisson spot in the associated controversy. We next discuss the boundary-diffraction-wave approach to calculating diffraction effects and show how it can reduce the complexity of calculating diffraction patterns. We briefly discuss a generalization of this approach that reduces the dimensionality of integrals needed to calculate the complete diffraction pattern of any order diffraction effect. We repeat earlier demonstrations of the conventional Poisson spot and discuss an experimental setup for demonstrating an analogous phenomenon that we call a "second-order Poisson spot." Several features of the diffraction pattern can be explained simply by considering the path lengths of singly and doubly bent paths and distinguishing between first- and second-order diffraction effects related to such paths, respectively.
Poisson's ratio over two centuries: challenging hypotheses
Greaves, G. Neville
2013-01-01
This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094
Regression Models of Atlas Appearance
Rohlfing, Torsten; Sullivan, Edith V.; Pfefferbaum, Adolf
2010-01-01
Models of object appearance based on principal components analysis provide powerful and versatile tools in computer vision and medical image analysis. A major shortcoming is that they rely entirely on the training data to extract principal modes of appearance variation and ignore underlying variables (e.g., subject age, gender). This paper introduces an appearance modeling framework based instead on generalized multi-linear regression. The training of regression appearance models is controlled by independent variables. This makes it straightforward to create model instances for specific values of these variables, which is akin to model interpolation. We demonstrate the new framework by creating an appearance model of the human brain from MR images of 36 subjects. Instances of the model created for different ages are compared with average shape atlases created from age-matched sub-populations. Relative tissue volumes vs. age in models are also compared with tissue volumes vs. subject age in the original images. In both experiments, we found excellent agreement between the regression models and the comparison data. We conclude that regression appearance models are a promising new technique for image analysis, with one potential application being the representation of a continuum of mutually consistent, age-specific atlases of the human brain. PMID:19694260
Nonlocal Poisson-Fermi model for ionic solvent
NASA Astrophysics Data System (ADS)
Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob
2016-07-01
We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.
On the singularity of the Vlasov-Poisson system
Zheng, Jian; Qin, Hong
2013-09-15
The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.
Lee, Myung Hee; Liu, Yufeng
2013-12-01
The continuum regression technique provides an appealing regression framework connecting ordinary least squares, partial least squares and principal component regression in one family. It offers some insight on the underlying regression model for a given application. Moreover, it helps to provide deep understanding of various regression techniques. Despite the useful framework, however, the current development on continuum regression is only for linear regression. In many applications, nonlinear regression is necessary. The extension of continuum regression from linear models to nonlinear models using kernel learning is considered. The proposed kernel continuum regression technique is quite general and can handle very flexible regression model estimation. An efficient algorithm is developed for fast implementation. Numerical examples have demonstrated the usefulness of the proposed technique. PMID:24058224
Ductile Titanium Alloy with Low Poisson's Ratio
Hao, Y. L.; Li, S. J.; Sun, B. B.; Sui, M. L.; Yang, R.
2007-05-25
We report a ductile β-type titanium alloy with body-centered cubic (bcc) crystal structure having a low Poisson's ratio of 0.14. The almost identical ultralow bulk and shear moduli of ≈24 GPa combined with an ultrahigh strength of ≈0.9 GPa contribute to easy crystal distortion due to much-weakened chemical bonding of atoms in the crystal, leading to significant elastic softening in tension and elastic hardening in compression. The peculiar elastic and plastic deformation behaviors of the alloy are interpreted as a result of approaching the elastic limit of the bcc crystal under applied stress.
Abstract Expression Grammar Symbolic Regression
NASA Astrophysics Data System (ADS)
Korns, Michael F.
This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial-strength symbolic regression systems.
Oliveira, María; Einbeck, Jochen; Higueras, Manuel; Ainsbury, Elizabeth; Puig, Pedro; Rothkamm, Kai
2016-03-01
Within the field of cytogenetic biodosimetry, Poisson regression is the classical approach for modeling the number of chromosome aberrations as a function of radiation dose. However, it is common to find data that exhibit overdispersion. In practice, the assumption of equidispersion may be violated due to unobserved heterogeneity in the cell population, which will render the variance of observed aberration counts larger than their mean, and/or the frequency of zero counts greater than expected for the Poisson distribution. This phenomenon is observable for both full- and partial-body exposure, but more pronounced for the latter. In this work, different methodologies for analyzing cytogenetic chromosomal aberrations datasets are compared, with special focus on zero-inflated Poisson and zero-inflated negative binomial models. A score test for testing for zero inflation in Poisson regression models under the identity link is also developed. PMID:26461836
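Before choosing among Poisson, zero-inflated Poisson, and zero-inflated negative binomial models, the two symptoms named above, variance exceeding the mean and excess zeros, are easy to check directly. The sketch below is a quick diagnostic, not the score test derived in the paper, and the toy counts are invented for illustration.

```python
# Quick diagnostic sketch (not the paper's score test): compare the
# dispersion index and the observed zero fraction of a count sample
# against what an equidispersed Poisson model would predict.
import math
from statistics import mean, pvariance

def poisson_diagnostics(counts):
    lam = mean(counts)
    dispersion = pvariance(counts) / lam          # equals 1 under Poisson
    zeros_obs = counts.count(0) / len(counts)
    zeros_exp = math.exp(-lam)                    # P(X = 0) for Poisson(lam)
    return dispersion, zeros_obs, zeros_exp

# Toy aberration counts with an excess of zeros (illustrative data only,
# mimicking a partial-body exposure pattern).
counts = [0] * 60 + [1] * 15 + [2] * 10 + [3] * 8 + [4] * 4 + [6] * 3
dispersion, zeros_obs, zeros_exp = poisson_diagnostics(counts)
```

A dispersion index well above one together with an observed zero fraction exceeding exp(-λ) is exactly the situation in which the zero-inflated models discussed above become attractive.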
A technique for determining the Poisson's ratio of thin films
Krulevitch, P.
1996-04-18
The theory and experimental approach for a new technique used to determine the Poisson's ratio of thin films are presented. The method involves taking the ratio of curvatures of cantilever beams and plates micromachined out of the film of interest. Curvature is induced by a through-thickness variation in residual stress, or by depositing a thin film under residual stress onto the beams and plates. This approach is made practical by the fact that the two curvatures are the only required experimental parameters, and small calibration errors cancel when the ratio is taken. To confirm the accuracy of the technique, it was tested on a 2.5 μm thick film of single crystal silicon. Micromachined beams 1 mm long by 100 μm wide and plates 700 μm by 700 μm were coated with 35 nm of gold and the curvatures were measured with a scanning optical profilometer. For the orientation tested ([100] film normal, [011] beam axis, [01̄1] contraction direction) silicon's Poisson's ratio is 0.064, and the measured result was 0.066 ± 0.043. The uncertainty in this technique is due primarily to variation in the measured curvatures, and should range from ±0.02 to ±0.04 with proper measurement technique.
A Poisson model for random multigraphs
Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth
2010-01-01
Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
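The propensity model is small enough to sketch: the edge count between nodes i and j is Poisson with mean p_i·p_j, so the likelihood equations reduce to p_i·(sum of the other propensities) = d_i, where d_i is the total edge count at node i. The sketch below solves them by a naive Gauss-Seidel-style fixed-point sweep; the paper's actual estimation algorithm may use a different update, and the toy multigraph is invented.

```python
# Minimal sketch of the undirected Poisson multigraph propensity model:
# E[edges between i and j] = p_i * p_j. The MLE stationarity condition
# p_i * (sum_{j != i} p_j) = d_i is solved by in-place fixed-point sweeps.

def fit_propensities(edge_counts, n, sweeps=500):
    degree = [0.0] * n
    for (i, j), k in edge_counts.items():
        degree[i] += k
        degree[j] += k
    p = [1.0] * n
    for _ in range(sweeps):
        for i in range(n):
            p[i] = degree[i] / (sum(p) - p[i])
    return p

# Toy 3-node multigraph with observed edge multiplicities; with three
# observations and three parameters the fit is exact: p = (2, 3, 1).
edges = {(0, 1): 6, (0, 2): 2, (1, 2): 3}
p = fit_propensities(edges, 3)
expected_edges = {(i, j): p[i] * p[j] for (i, j) in edges}
```

With the propensities in hand, node pairs whose observed multiplicity greatly exceeds p_i·p_j are the candidates for functional connection, as described above.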
Ghanbari, Yasser; Smith, Alex R; Schultz, Robert T; Verma, Ragini
2014-12-01
Diffusion tensor imaging (DTI) offers rich insights into the physical characteristics of white matter (WM) fiber tracts and their development in the brain, facilitating a network representation of brain's traffic pathways. Such a network representation of brain connectivity has provided a novel means of investigating brain changes arising from pathology, development or aging. The high dimensionality of these connectivity networks necessitates the development of methods that identify the connectivity building blocks or sub-network components that characterize the underlying variation in the population. In addition, the projection of the subject networks into the basis set provides a low dimensional representation of it, that teases apart different sources of variation in the sample, facilitating variation-specific statistical analysis. We propose a unified framework of non-negative matrix factorization and graph embedding for learning sub-network patterns of connectivity by their projective non-negative decomposition into a reconstructive basis set, as well as, additional basis sets representing variational sources in the population like age and pathology. The proposed framework is applied to a study of diffusion-based connectivity in subjects with autism that shows localized sparse sub-networks which mostly capture the changes related to pathology and developmental variations. PMID:25037933
DG Poisson algebra and its universal enveloping algebra
NASA Astrophysics Data System (ADS)
Lü, JiaFeng; Wang, XingTing; Zhuang, GuangBin
2016-05-01
In this paper, we introduce the notions of differential graded (DG) Poisson algebra and DG Poisson module. Let $A$ be any DG Poisson algebra. We construct the universal enveloping algebra of $A$ explicitly, which is denoted by $A^{ue}$. We show that $A^{ue}$ has a natural DG algebra structure and it satisfies certain universal property. As a consequence of the universal property, it is proved that the category of DG Poisson modules over $A$ is isomorphic to the category of DG modules over $A^{ue}$. Furthermore, we prove that the notion of universal enveloping algebra $A^{ue}$ is well-behaved under opposite algebra and tensor product of DG Poisson algebras. Practical examples of DG Poisson algebras are given throughout the paper including those arising from differential geometry and homological algebra.
Ramis, Rebeca; Vidal, Enrique; García-Pérez, Javier; Lope, Virginia; Aragonés, Nuria; Pérez-Gómez, Beatriz; Pollán, Marina; López-Abente, Gonzalo
2009-01-01
Background: Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models.
Methods: Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model).
Results: Only proximity of paper industries to population centres (>2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates.
Conclusion: The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of industrial pollution
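The relative risks reported in such studies are what a Poisson regression with a single exposure indicator estimates. A back-of-the-envelope version, a crude rate ratio with a Wald confidence interval on the log scale, can be sketched as follows. The counts and person-years are made up for illustration; they are not the study's data.

```python
# Sketch of a crude rate ratio with a log-scale Wald confidence interval,
# the quantity a one-covariate Poisson regression would estimate.
# All numbers are invented for illustration.
import math

def rate_ratio(cases_exp, py_exp, cases_ref, py_ref, z=1.96):
    """Exposed-vs-reference rate ratio with an approximate 95% CI."""
    rr = (cases_exp / py_exp) / (cases_ref / py_ref)
    se_log = math.sqrt(1.0 / cases_exp + 1.0 / cases_ref)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

rr, lo, hi = rate_ratio(cases_exp=120, py_exp=80000,
                        cases_ref=400, py_ref=330000)
```

The mixed and BYM models described above refine this by adding random provincial effects and spatially structured terms, which is why their interval estimates differ from the plain Poisson fit.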
Anisotropy of Poisson's Ratio in Transversely Isotropic Rocks
NASA Astrophysics Data System (ADS)
Tokmakova, S. P.
2008-06-01
The Poisson's ratio of shales with different clay mineralogy and porosity, and of many shale rocks around the world, including brine-saturated Africa shales and sands, North Sea shales, and gas- and brine-saturated Canadian carbonates, was estimated from the values of Thomsen's parameters. The anisotropy of Poisson's ratio was calculated for a set of TI samples with "normal" and "anomalous" polarization, covering both "normal" values of Poisson's ratio and auxetic behavior.
Stochastic search with Poisson and deterministic resetting
NASA Astrophysics Data System (ADS)
Bhat, Uttam; De Bacco, Caterina; Redner, S.
2016-08-01
We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers that all start at x 0 seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate r, or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T for 1/T\to 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than in Poisson resetting.
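The single-searcher case can be explored by Monte Carlo. The sketch below makes two simplifying assumptions not in the paper: a discrete-time ±1 random walk stands in for diffusion, and resetting to the start x0 occurs with probability r per step (the discrete analog of Poisson resetting at rate r). All parameter values are illustrative.

```python
# Monte Carlo sketch of 1-D search with Poisson-style resetting.
# Assumptions: discrete-time +/-1 walk as a stand-in for diffusion,
# per-step reset probability r. Numbers are illustrative only.
import random

def first_passage_time(x0, r, rng, max_steps=200000):
    """Steps until the walker first reaches the target at the origin."""
    x = x0
    for t in range(1, max_steps + 1):
        if rng.random() < r:
            x = x0                      # reset to the starting point
        else:
            x += rng.choice((-1, 1))    # diffusive step
        if x == 0:
            return t
    return max_steps                    # search effectively failed

rng = random.Random(7)
# Mild resetting helps; near-constant resetting pins the walker at x0.
mild = sum(first_passage_time(8, 0.02, rng) for _ in range(50)) / 50
aggressive = sum(first_passage_time(8, 0.9, rng) for _ in range(3)) / 3
```

Sweeping r between these extremes exhibits the non-monotone behavior described above: an intermediate reset rate minimizes the mean search time for a single searcher.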
Surface reconstruction through poisson disk sampling.
Hou, Wenguang; Xu, Zekai; Qin, Nannan; Xiong, Dongping; Ding, Mingyue
2015-01-01
This paper intends to generate the approximate Voronoi diagram in the geodesic metric for some unbiased samples selected from the original points. The mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy avoids the difficulty that geodesic distances among neighboring points are sensitive to the nearest-neighbor definition. The reconstructed model is thus a level of detail of the original points, and our main motivation is to deal with redundant scattered points. In implementation, Poisson disk sampling is used to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy in selecting seeds. Behaviors of this method are investigated and accuracy evaluations are done. Experimental results show the proposed method is reliable and effective. PMID:25915744
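Poisson disk sampling's defining property, every accepted sample is at least a minimum distance from every other, can be sketched with simple dart throwing. This is not the paper's sampler: production implementations (e.g. Bridson's grid-accelerated algorithm) avoid the quadratic distance checks used here, and the domain and spacing below are illustrative.

```python
# Sketch of Poisson disk sampling by dart throwing: accept a random
# candidate only if it is at least min_dist from every accepted point.
# Quadratic-cost illustration; grid-based methods are used in practice.
import math
import random

def poisson_disk(width, height, min_dist, n_target, rng, max_tries=20000):
    points = []
    for _ in range(max_tries):
        cand = (rng.uniform(0, width), rng.uniform(0, height))
        if all(math.dist(cand, p) >= min_dist for p in points):
            points.append(cand)
            if len(points) == n_target:
                break
    return points

rng = random.Random(0)
pts = poisson_disk(10.0, 10.0, 1.0, 40, rng)
min_gap = min(math.dist(p, q)
              for i, p in enumerate(pts) for q in pts[i + 1:])
```

Seeds chosen this way are "unbiased" in the sense used above: uniformly spread, with no two closer than the disk radius, which is what makes the resulting Voronoi cells well shaped.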
Periodic Poisson model for beam dynamics simulation
NASA Astrophysics Data System (ADS)
Dohlus, M.; Henning, Ch.
2016-03-01
A method is described to solve the Poisson problem for a three dimensional source distribution that is periodic in one direction. Perpendicular to the direction of periodicity a free space (or open) boundary condition is realized. In beam physics, this approach allows us to calculate the space charge field of a continualized charged particle distribution with a periodic pattern. The method is based on a particle-mesh approach with an equidistant grid and fast convolution with a Green's function. The periodic approach uses only one period of the source distribution, but a periodic extension of the Green's function. The approach is numerically efficient and allows the investigation of periodic and pseudoperiodic structures with period lengths that are small compared to the source dimensions, for instance of laser modulated beams or of the evolution of micro bunch structures. Applications for laser modulated beams are given.
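The periodic-direction part of such a solver can be illustrated with a one-dimensional toy: invert the discrete Laplacian in Fourier space on a periodic grid. This sketch is only an analog of the idea, not the paper's method, which is three-dimensional, open in the transverse directions, and built on fast Green's-function convolution; the naive O(n²) DFT and unit grid spacing are demo simplifications.

```python
# 1-D analog sketch of a periodic Poisson solve: invert the discrete
# Laplacian mode-by-mode in Fourier space (naive DFT, unit spacing).
import cmath
import math

def dft(x, sign):
    n = len(x)
    return [sum(x[m] * cmath.exp(sign * 2j * math.pi * k * m / n)
                for m in range(n)) for k in range(n)]

def periodic_poisson_1d(rho):
    """Solve u[m-1] - 2u[m] + u[m+1] = -rho[m], periodic, zero-mean rho."""
    n = len(rho)
    R = dft(rho, -1)
    U = [0j] * n
    for k in range(1, n):                 # k = 0 mode: additive constant
        lam = 2.0 * math.cos(2.0 * math.pi * k / n) - 2.0
        U[k] = -R[k] / lam
    return [z.real / n for z in dft(U, +1)]

n = 16
rho = [math.sin(2.0 * math.pi * m / n) for m in range(n)]  # zero-mean source
u = periodic_poisson_1d(rho)
# Verify: the discrete Laplacian of u reproduces -rho.
err = max(abs(u[m - 1] - 2.0 * u[m] + u[(m + 1) % n] + rho[m])
          for m in range(n))
```

In the beam-physics setting the same one period of the source is reused, but the Green's function (rather than the source) carries the periodic extension, which is what keeps the transverse directions open.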
Efficient information transfer by Poisson neurons.
Kostal, Lubomir; Shinomoto, Shigeru
2016-06-01
Recently, it has been suggested that certain neurons with Poissonian spiking statistics may communicate by discontinuously switching between two levels of firing intensity. Such a situation resembles in many ways the optimal information transmission protocol for the continuous-time Poisson channel known from information theory. In this contribution we employ the classical information-theoretic results to analyze the efficiency of such a transmission from different perspectives, emphasising the neurobiological viewpoint. We address both the ultimate limits, in terms of the information capacity under metabolic cost constraints, and the achievable bounds on performance at rates below capacity with fixed decoding error probability. In doing so we discuss optimal values of experimentally measurable quantities that can be compared with the actual neuronal recordings in a future effort. PMID:27106184
Surface Reconstruction through Poisson Disk Sampling
Hou, Wenguang; Xu, Zekai; Qin, Nannan; Xiong, Dongping; Ding, Mingyue
2015-01-01
This paper intends to generate the approximate Voronoi diagram in the geodesic metric for some unbiased samples selected from original points. The mesh model of seeds is then constructed on basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy is to run around the obstacle that the geodesic distances among neighboring points are sensitive to nearest neighbor definition. It is obvious that the reconstructed model is the level of detail of original points. Hence, our main motivation is to deal with the redundant scattered points. In implementation, Poisson disk sampling is taken to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy in selecting seeds. Behaviors of this method are investigated and accuracy evaluations are done. Experimental results show the proposed method is reliable and effective. PMID:25915744
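A hedged sketch of the sampling step the abstract relies on: naive dart-throwing Poisson disk sampling, where a candidate point is accepted only if it keeps a minimum distance r from all previously accepted points. (Practical implementations typically use Bridson's grid-accelerated algorithm; this brute-force version only illustrates the acceptance criterion, and the function name and parameters are ours.)

```python
import numpy as np

def poisson_disk(r, n_candidates=5000, seed=0):
    """Dart-throwing Poisson disk sampling in the unit square."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(n_candidates):
        p = rng.random(2)                          # uniform candidate
        # accept only if at least r away from every accepted point
        if all(np.hypot(*(p - q)) >= r for q in pts):
            pts.append(p)
    return np.array(pts)

samples = poisson_disk(r=0.1)
# By construction, every pair of accepted samples is at least r apart.
```

The resulting samples are unbiased yet evenly spread, which is why they serve well as seeds for the Voronoi-based reconstruction described above.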
Vinik, Aaron I.; Nevoret, Marie-Laure; Casellini, Carolina
2015-01-01
Sudorimetry technology has evolved dramatically into a rapid, non-invasive, robust, and accurate biomarker for small fibers that can easily be integrated into clinical practice. Though skin biopsy with quantitation of intraepidermal nerve fiber density is still currently recognized as the gold standard, sudorimetry may not only yield diagnostic information on autonomic dysfunction but also enhance the assessment of the small somatosensory nerves, disease detection, progression, and response to therapy. Sudorimetry can be assessed using Sudoscan™, which measures electrochemical skin conductance (ESC) of the hands and feet. It is based on different electrochemical principles (reverse iontophoresis and chronoamperometry) for measuring sudomotor function than prior technologies, affording it a much more practical and precise performance profile for routine clinical use, with potential as a research tool. Small nerve fiber dysfunction has been found to occur early in metabolic syndrome and diabetes and may also be the only neurological manifestation in small fiber neuropathies, beneath the detection limits of traditional nerve function tests. Test results are robust, accomplished within minutes, and require little technical training and no calculations, since established norms have been provided for the effects of age, gender, and ethnicity. Sudomotor testing has been greatly under-utilized in the past, restricted to specialized centers capable of handling the technically demanding and expensive technology. Yet evaluation of autonomic and somatic nerve function has been shown to be one of the best estimates of cardiovascular risk. Evaluation of sweating has the appeal of quantifiable, non-invasive determination of the integrity of the peripheral autonomic nervous system, and can now be accomplished rapidly at point-of-care clinics with the determination of ESC, allowing intervention for morbid complications prior to permanent structural nerve damage. We review here sudomotor
Eberly, Lynn E
2007-01-01
This chapter describes multiple linear regression, a statistical approach used to describe the simultaneous associations of several variables with one continuous outcome. Important steps in using this approach include estimation and inference, variable selection in model building, and assessing model fit. The special cases of regression with interactions among the variables, polynomial regression, regressions with categorical (grouping) variables, and separate slopes models are also covered. Examples in microbiology are used throughout. PMID:18450050
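The estimation step the chapter describes reduces to ordinary least squares on a design matrix. A minimal sketch, with invented data (the covariate names are purely illustrative):

```python
import numpy as np

# Multiple linear regression by ordinary least squares.
rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)          # e.g. incubation temperature (illustrative)
x2 = rng.normal(size=n)          # e.g. nutrient concentration (illustrative)
y = 2.0 + 1.5 * x1 - 0.7 * x2 + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), x1, x2])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # [intercept, slope1, slope2]

# Assess model fit via the coefficient of determination R^2.
resid = y - X @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

Variable selection and inference (standard errors, t-tests) build on these same quantities.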
2015-09-09
The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.
Orthogonal Regression and Equivariance.
ERIC Educational Resources Information Center
Blankmeyer, Eric
Ordinary least-squares regression treats the variables asymmetrically, designating a dependent variable and one or more independent variables. When it is not obvious how to make this distinction, a researcher may prefer to use orthogonal regression, which treats the variables symmetrically. However, the usual procedure for orthogonal regression is…
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
Wang, Z.; Ngai, K. L.; Wang, W. H.
2015-07-21
In the paper K. L. Ngai et al. [J. Chem. Phys. 140, 044511 (2014)], the empirical correlation of ductility with the Poisson's ratio, ν_Poisson, found in metallic glasses was theoretically explained by microscopic dynamic processes which link, on the one hand, ductility and, on the other hand, the Poisson's ratio. Specifically, the dynamic processes are the primitive relaxation in the Coupling Model, which is the precursor of the Johari–Goldstein β-relaxation, and the caged-atom dynamics characterized by the effective Debye–Waller factor f_0 or, equivalently, the nearly constant loss (NCL) in susceptibility. All these processes and the parameters characterizing them are accessible experimentally except f_0 or the NCL of caged atoms; thus, so far, the experimental verification of the explanation of the correlation between ductility and Poisson's ratio is incomplete. In the experimental part of this paper, we report dynamic mechanical measurements of the NCL of the metallic glass La60Ni15Al25 as-cast, and the changes induced by annealing at temperature below T_g. The observed monotonic decrease of the NCL with aging time, reflecting the corresponding increase of f_0, correlates with the decrease of ν_Poisson. This is an important observation because such measurements, not made before, provide the missing link in confirming by experiment the explanation of the correlation of ductility with ν_Poisson. On aging the metallic glass, also observed in the isochronal loss spectra is the shift of the β-relaxation to higher temperatures and the reduction of its relaxation strength. These concomitant changes of the β-relaxation and NCL are the root cause of embrittlement by aging the metallic glass. The NCL of caged atoms is terminated by the onset of the primitive relaxation in the Coupling Model, which is generally supported by experiments. From this relation, the monotonic decrease of the NCL with aging time is caused by the slowing down
Solves Poisson's Equation in Axisymmetric Geometry on a Rectangular Mesh
1996-09-10
DATHETA4.0 computes the magnetostatic field produced by multiple point current sources in the presence of perfect conductors in axisymmetric geometry. DATHETA4.0 has an interactive user interface and solves Poisson's equation using the ADI method on a rectangular finite-difference mesh. DATHETA4.0 includes models specific to applied-B ion diodes.
Park, Dong Choon; Yeo, Seung Geun
2013-09-01
Aging is initiated based on genetic and environmental factors that operate from the time of birth of organisms. Aging induces physiological phenomena such as reduction of cell counts, deterioration of tissue proteins, tissue atrophy, a decrease of the metabolic rate, reduction of body fluids, and calcium metabolism abnormalities, with final progression onto pathological aging. Despite the efforts from many researchers, the progression and the mechanisms of aging are not clearly understood yet. Therefore, the authors would like to introduce several theories which have gained attentions among the published theories up to date; genetic program theory, wear-and-tear theory, telomere theory, endocrine theory, DNA damage hypothesis, error catastrophe theory, the rate of living theory, mitochondrial theory, and free radical theory. Although there have been many studies that have tried to prevent aging and prolong life, here we introduce a couple of theories which have been proven more or less; food, exercise, and diet restriction. PMID:24653904
Penalized count data regression with application to hospital stay after pediatric cardiac surgery
Wang, Zhu; Ma, Shuangge; Zappitelli, Michael; Parikh, Chirag; Wang, Ching-Yun; Devarajan, Prasad
2014-01-01
Pediatric cardiac surgery may lead to poor outcomes such as acute kidney injury (AKI) and prolonged hospital length of stay (LOS). Plasma and urine biomarkers may help with early identification and prediction of these adverse clinical outcomes. In a recent multi-center study, 311 children undergoing cardiac surgery were enrolled to evaluate multiple biomarkers for diagnosis and prognosis of AKI and other clinical outcomes. LOS is often analyzed as count data, thus Poisson regression and negative binomial (NB) regression are common choices for developing predictive models. With many correlated prognostic factors and biomarkers, variable selection is an important step. The present paper proposes new variable selection methods for Poisson and NB regression. We evaluated regularized regression through a penalized likelihood function. We first extend the elastic net (Enet) Poisson to two penalized Poisson regressions: Mnet, a combination of minimax concave and ridge penalties; and Snet, a combination of smoothly clipped absolute deviation (SCAD) and ridge penalties. Furthermore, we extend the above methods to penalized NB regression. For the Enet, Mnet, and Snet penalties (EMSnet), we develop a unified algorithm to estimate the parameters and conduct variable selection simultaneously. Simulation studies show that the proposed methods have advantages with highly correlated predictors, compared with some of the competing methods. Applying the proposed methods to the aforementioned data, we discover that early postoperative urine biomarkers including NGAL, IL18, and KIM-1 independently predict LOS, after adjusting for risk and biomarker variables. PMID:24742430
Deformation mechanisms in negative Poisson's ratio materials - Structural aspects
NASA Technical Reports Server (NTRS)
Lakes, R.
1991-01-01
Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.
A Local Poisson Graphical Model for inferring networks from sequencing data.
Allen, Genevera I; Liu, Zhandong
2013-09-01
Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research. PMID:23955777
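The building block of the neighborhood selection the abstract describes is an l1-penalized Poisson (log-linear) regression of one node's counts on the others. A hedged sketch using proximal gradient descent with soft-thresholding; the data, step size, and penalty level are illustrative, and the paper's actual fitting procedure may differ in detail:

```python
import numpy as np

# Simulate counts for one node given its "neighbors" X.
rng = np.random.default_rng(1)
n, p = 800, 5
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, -0.5, 0.0, 0.0, 0.0])   # sparse neighborhood
y = rng.poisson(np.exp(0.5 + X @ beta_true))

# l1-penalized Poisson regression via proximal gradient descent.
lam, lr = 0.02, 0.05
b0, beta = 0.0, np.zeros(p)
for _ in range(3000):
    mu = np.exp(b0 + X @ beta)
    g = X.T @ (mu - y) / n            # gradient of the Poisson negative log-likelihood
    b0 -= lr * np.mean(mu - y)        # intercept: unpenalized gradient step
    step = beta - lr * g
    beta = np.sign(step) * np.maximum(np.abs(step) - lr * lam, 0.0)  # soft-threshold
```

Nonzero entries of `beta` indicate estimated edges for this node; running one such regression per gene, in parallel, yields the full network estimate.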
Poisson-Boltzmann-Nernst-Planck model
NASA Astrophysics Data System (ADS)
Zheng, Qiong; Wei, Guo-Wei
2011-05-01
The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external
Generalized HPC method for the Poisson equation
NASA Astrophysics Data System (ADS)
Bardazzi, A.; Lugni, C.; Antuono, M.; Graziani, G.; Faltinsen, O. M.
2015-10-01
An efficient and innovative numerical algorithm based on the use of Harmonic Polynomials on each Cell of the computational domain (HPC method) was recently proposed by Shao and Faltinsen (2014) [1] to solve boundary value problems governed by the Laplace equation. Here, we extend the HPC method to the solution of non-homogeneous elliptic boundary value problems. The homogeneous solution, i.e. of the Laplace equation, is represented through a polynomial function with harmonic polynomials, while the particular solution of the Poisson equation is provided by a bi-quadratic function. This scheme is called the generalized HPC method. The present algorithm, accurate up to fourth order, proved to be efficient, i.e. easy to implement and with a low computational effort, for the solution of two-dimensional elliptic boundary value problems. Furthermore, it provides an analytical representation of the solution within each computational stencil, which allows its coupling with existing numerical algorithms within an efficient domain-decomposition strategy or within an adaptive mesh refinement algorithm.
Reconstructing Early School Trauma through Age Regression.
ERIC Educational Resources Information Center
Rousell, Michael A.; Gillis, David
1994-01-01
Normal fluctuations in consciousness and spontaneous trance states may produce inadvertent hypnotic influence in the classroom. Two case studies illustrate how students may be thus influenced by explicit or implicit suggestions, resulting in subsequent self-defeating behaviors. These cases were successfully treated by reconstructing earlier…
Periodicity characterization of orbital prediction error and Poisson series fitting
NASA Astrophysics Data System (ADS)
Bai, Xian-Zong; Chen, Lei; Tang, Guo-Jin
2012-09-01
Publicly available Two-Line Element Sets (TLE) contain no associated error or accuracy information. The historical-data-based method is a feasible choice for those objects for which only TLE data are available. Most current TLE error-analysis methods use polynomial fitting, which cannot represent the periodic characteristics. This paper presents a methodology for periodicity characterization and Poisson-series fitting of orbital prediction error based on historical orbital data. As the error-fitting function, the Poisson series can describe the variation of error with respect to propagation duration and the on-orbit position of objects. The Poisson coefficient matrices of each error component are fitted using the least-squares method. The effects of polynomial terms, trigonometric terms, and mixed terms of the Poisson series are discussed. Substituting the time difference and mean anomaly into the Poisson series, one can obtain the error information at a specific time. Four satellites (Cosmos-2251, GPS-62, SLOSHSAT, TelStar-10) from four orbital types (LEO, MEO, HEO, and GEO, respectively) were selected as examples to demonstrate and validate the method. The results indicated that periodic characteristics exist in all three components for the four objects, especially HEO and MEO. The periodicity characterization and Poisson-series fitting can improve the accuracy of the orbit covariance information. The Poisson series is a general form for describing orbital prediction error; the commonly used polynomial fitting is a special case of Poisson-series fitting. The Poisson coefficient matrices can be obtained before close-approach analysis. This method does not require any knowledge about how the state vectors are generated, so it can handle not only TLE data but also other orbit models and elements.
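Since the Poisson-series coefficients are linear in the basis terms, the fit reduces to linear least squares. A minimal sketch with a low-order basis; the basis terms and synthetic coefficients below are illustrative, not those of the paper:

```python
import numpy as np

# Synthetic "prediction error" samples over propagation time t and mean anomaly M.
rng = np.random.default_rng(7)
t = rng.uniform(0.0, 3.0, 400)            # days of propagation
M = rng.uniform(0.0, 2.0 * np.pi, 400)    # mean anomaly (rad)

# Low-order Poisson-series basis: polynomial, trigonometric, and mixed terms.
A = np.column_stack([np.ones_like(t), t, t**2,
                     np.cos(M), np.sin(M), t * np.cos(M), t * np.sin(M)])
c_true = np.array([0.1, 0.4, 0.05, 0.3, -0.2, 0.15, 0.08])  # invented coefficients
err = A @ c_true                          # noiseless synthetic error signal

# Least-squares fit of the Poisson coefficients.
c_fit, *_ = np.linalg.lstsq(A, err, rcond=None)
```

Dropping the trigonometric and mixed columns recovers plain polynomial fitting, making concrete the abstract's point that polynomial fitting is a special case of the Poisson series.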
Boundary Lax pairs from non-ultra-local Poisson algebras
Avan, Jean; Doikou, Anastasia
2009-11-15
We consider non-ultra-local linear Poisson algebras on a continuous line. Suitable combinations of representations of these algebras yield representations of novel generalized linear Poisson algebras or 'boundary' extensions. They are parametrized by a boundary scalar matrix and depend, in addition, on the choice of an antiautomorphism. The new algebras are the classical-linear counterparts of the known quadratic quantum boundary algebras. For any choice of parameters, the non-ultra-local contribution of the original Poisson algebra disappears. We also systematically construct the associated classical Lax pair. The classical boundary principal chiral model is examined as a physical example.
Poisson Ratio of Epitaxial Germanium Films Grown on Silicon
NASA Astrophysics Data System (ADS)
Bharathan, Jayesh; Narayan, Jagdish; Rozgonyi, George; Bulman, Gary E.
2013-01-01
An accurate knowledge of the elastic constants of thin films is important in understanding the effect of strain on material properties. We have used residual thermal strain to measure the Poisson ratio of Ge films grown on Si ⟨001⟩ substrates, using the sin²ψ method and high-resolution x-ray diffraction. The Poisson ratio of the Ge films was measured to be 0.25, compared with the bulk value of 0.27. Our study indicates that use of the Poisson ratio instead of bulk compliance values yields a more accurate description of the state of in-plane strain present in the film.
On classification of discrete, scalar-valued Poisson brackets
NASA Astrophysics Data System (ADS)
Parodi, E.
2012-10-01
We address the problem of classifying discrete differential-geometric Poisson brackets (dDGPBs) of any fixed order on a target space of dimension 1. We prove that these Poisson brackets (PBs) are in one-to-one correspondence with the intersection points of certain projective hypersurfaces. In addition, they can be reduced to a cubic PB of the standard Volterra lattice by discrete Miura-type transformations. Finally, by improving a lattice consolidation procedure, we obtain new families of non-degenerate, vector-valued and first-order dDGPBs that can be considered in the framework of admissible Lie-Poisson group theory.
Continental crust composition constrained by measurements of crustal Poisson's ratio
NASA Astrophysics Data System (ADS)
Zandt, George; Ammon, Charles J.
1995-03-01
DECIPHERING the geological evolution of the Earth's continental crust requires knowledge of its bulk composition and global variability. The main uncertainties are associated with the composition of the lower crust. Seismic measurements probe the elastic properties of the crust at depth, from which composition can be inferred. Of particular note is Poisson's ratio, σ; this elastic parameter can be determined uniquely from the ratio of P- to S-wave seismic velocity, and provides a better diagnostic of crustal composition than either P- or S-wave velocity alone [1]. Previous attempts to measure σ have been limited by difficulties in obtaining coincident P- and S-wave data sampling the entire crust [2]. Here we report 76 new estimates of crustal σ spanning all of the continents except Antarctica. We find that, on average, σ increases with the age of the crust. Our results strongly support the presence of a mafic lower crust beneath cratons, and suggest either a uniformitarian craton formation process involving delamination of the lower crust during continental collisions, followed by magmatic underplating, or a model in which crust formation processes have changed since the Precambrian era.
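The unique determination of Poisson's ratio from the velocity ratio follows from standard isotropic elasticity, σ = (Vp² − 2Vs²) / (2(Vp² − Vs²)), which the sketch below evaluates:

```python
import math

# Poisson's ratio from the P- to S-wave velocity ratio (isotropic elasticity):
#   sigma = ((Vp/Vs)^2 - 2) / (2 ((Vp/Vs)^2 - 1))
def poisson_ratio(vp_over_vs):
    r2 = vp_over_vs ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# A "Poisson solid" (Vp/Vs = sqrt(3)) has sigma = 0.25; higher Vp/Vs,
# as in more mafic rock, raises sigma.
sigma = poisson_ratio(math.sqrt(3.0))
```

The monotone increase of σ with Vp/Vs is what lets the crustal age trend reported above be read as a compositional trend.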
Prediction in Multiple Regression.
ERIC Educational Resources Information Center
Osborne, Jason W.
2000-01-01
Presents the concept of prediction via multiple regression (MR) and discusses the assumptions underlying multiple regression analyses. Also discusses shrinkage, cross-validation, and double cross-validation of prediction equations and describes how to calculate confidence intervals around individual predictions. (SLD)
Improved Regression Calibration
ERIC Educational Resources Information Center
Skrondal, Anders; Kuha, Jouni
2012-01-01
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…
Gerber, Samuel; Rübel, Oliver; Bremer, Peer-Timo; Pascucci, Valerio; Whitaker, Ross T.
2012-01-01
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse-Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this paper introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to over-fitting. The Morse-Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse-Smale regression. Supplementary materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse-Smale complex approximation and additional tables for the climate-simulation study. PMID:23687424
Gerber, Samuel; Rubel, Oliver; Bremer, Peer -Timo; Pascucci, Valerio; Whitaker, Ross T.
2012-01-19
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.
Moghimbeigi, Abbas
2015-05-01
Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Adding genetic variables to the negative-binomial part of the equation may also affect the extra-zero data. In this study, to overcome these challenges, I apply a two-part ZINB model. The EM algorithm, with the Newton-Raphson method in the M-step, is used for estimating the parameters. An application of the two-part ZINB model for QTL mapping is considered to detect associations between the formation of gallstones and the genotype of markers. PMID:25728790
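To make the EM idea behind zero-inflated count models concrete, here is a hedged sketch for the simplest case, an intercept-only zero-inflated Poisson (the paper's two-part ZINB with covariates and Newton-Raphson M-steps is more general). `pi` is the structural-zero probability and `lam` the Poisson mean; both names and all data are ours:

```python
import numpy as np

# Simulate zero-inflated Poisson data: with probability pi_true an
# observation is a structural zero, otherwise it is Poisson(lam_true).
rng = np.random.default_rng(3)
n, pi_true, lam_true = 5000, 0.3, 2.0
structural = rng.random(n) < pi_true
y = np.where(structural, 0, rng.poisson(lam_true, size=n))

# EM for the intercept-only ZIP model.
pi, lam = 0.5, 1.0                       # crude starting values
for _ in range(200):
    # E-step: posterior probability that an observed zero is structural.
    z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
    # M-step: closed-form updates weighted by the responsibilities.
    pi = z.mean()
    lam = y.sum() / (n - z.sum())
```

With covariates, the M-step no longer has a closed form, which is where the Newton-Raphson iterations mentioned in the abstract come in.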
Negative Poisson's ratios for extreme states of matter
Baughman; Dantas; Stafstrom; Zakhidov; Mitchell; Dubin
2000-06-16
Negative Poisson's ratios are predicted for body-centered-cubic phases that likely exist in white dwarf cores and neutron star outer crusts, as well as those found for vacuumlike ion crystals, plasma dust crystals, and colloidal crystals (including certain virus crystals). The existence of this counterintuitive property, which means that a material laterally expands when stretched, is experimentally demonstrated for very low density crystals of trapped ions. At very high densities, the large predicted negative and positive Poisson's ratios might be important for understanding the asteroseismology of neutron stars and white dwarfs and the effect of stellar stresses on nuclear reaction rates. Giant Poisson's ratios are both predicted and observed for highly strained coulombic photonic crystals, suggesting possible applications of large, tunable Poisson's ratios for photonic crystal devices. PMID:10856209
Tuning the Poisson's Ratio of Biomaterials for Investigating Cellular Response
Meggs, Kyle; Qu, Xin; Chen, Shaochen
2013-01-01
Cells sense and respond to mechanical forces, regardless of whether the source is a normal tissue matrix, an adjacent cell or a synthetic substrate. In recent years, cell response to surface rigidity has been extensively studied by modulating the elastic modulus of poly(ethylene glycol) (PEG)-based hydrogels. In the context of biomaterials, Poisson's ratio, another fundamental material property parameter, has not been explored, primarily because of challenges involved in tuning the Poisson's ratio of biological scaffolds. Two-photon polymerization is used to fabricate suspended web structures that exhibit positive or negative Poisson's ratio (NPR), based on analytical models. NPR webs demonstrate biaxial expansion/compression behavior, as one or multiple cells apply local forces and move the structures. Unusual cell division on NPR structures is also demonstrated. This methodology can be used to tune the Poisson's ratio of several photocurable biomaterials and could have potential implications in the field of mechanobiology. PMID:24076754
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
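Nonhomogeneous Poisson processes of the kind underlying such shot-noise models can be simulated by Lewis-Shedler thinning: draw candidate events from a homogeneous process at the maximum rate, then accept each candidate with probability proportional to the local intensity. A small stdlib-Python sketch (an illustration, not the paper's simulator; the sinusoidal intensity is an invented stand-in for a burst envelope):

```python
import math
import random

def thinned_poisson(rate, rate_max, t_end, rng):
    """Sample arrival times of a nonhomogeneous Poisson process with
    intensity rate(t) <= rate_max on [0, t_end] by Lewis-Shedler thinning:
    draw candidates from a homogeneous process at rate_max and keep each
    with probability rate(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)          # next candidate arrival
        if t > t_end:
            return times
        if rng.random() < rate(t) / rate_max:   # accept w.p. rate(t)/rate_max
            times.append(t)

rng = random.Random(42)
# A sinusoidally modulated intensity as a toy stand-in for a slowly
# varying rate process; 5*(1+sin t) never exceeds the bound 10.
arrivals = thinned_poisson(lambda t: 5.0 * (1 + math.sin(t)), 10.0, 20.0, rng)
```

Stacking a random rate process on top of the intensity function, and filtering each accepted arrival through an impulse response, would give the doubly and triply stochastic variants discussed in the abstract.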
Schmid, Matthias; Wickler, Florian; Maloney, Kelly O.; Mitchell, Richard; Fenske, Nora; Mayr, Andreas
2013-01-01
Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established, yet unstable, approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures. PMID:23626706
Lamb wave propagation in negative Poisson's ratio composites
NASA Astrophysics Data System (ADS)
Remillat, Chrystel; Wilcox, Paul; Scarpa, Fabrizio
2008-03-01
Lamb wave propagation is evaluated for cross-ply laminate composites exhibiting through-the-thickness negative Poisson's ratio. The laminates are mechanically modeled using the Classical Laminate Theory, while the propagation of Lamb waves is investigated using a combination of semi-analytical models and Finite Element time-stepping techniques. The auxetic laminates exhibit well spaced bending, shear and symmetric fundamental modes, while featuring normal stresses for the A0 mode three times lower than composite laminates with positive Poisson's ratio.
Bicrossed products induced by Poisson vector fields and their integrability
NASA Astrophysics Data System (ADS)
Djiba, Samson Apourewagne; Wade, Aïssa
2016-01-01
First we show that, associated to any Poisson vector field E on a Poisson manifold (M,π), there is a canonical Lie algebroid structure on the first jet bundle J1M which depends only on the cohomology class of E. We then introduce the notion of a cosymplectic groupoid and we discuss the integrability of the first jet bundle into a cosymplectic groupoid. Finally, we give applications to Atiyah classes and L∞-algebras.
Classification of linearly compact simple Nambu-Poisson algebras
NASA Astrophysics Data System (ADS)
Cantarini, Nicoletta; Kac, Victor G.
2016-05-01
We introduce the notion of a universal odd generalized Poisson superalgebra associated with an associative algebra A, by generalizing a construction made in the work of De Sole and Kac [Jpn. J. Math. 8, 1-145 (2013)]. By making use of this notion we give a complete classification of simple linearly compact (generalized) n-Nambu-Poisson algebras over an algebraically closed field of characteristic zero.
Hajian-Tilaki, K O; Heidari, B
2007-01-01
Obesity is an undesirable outcome of changing lifestyles and behaviours. It is also a reversible predisposing factor for the development of several debilitating diseases. This study aimed to determine the prevalence rates of obesity, overweight and central obesity and their associated factors in the north of Iran. We conducted a population-based cross-sectional study with a sample of 1800 women and 1800 men, with respective mean ages of 37.5 +/- 13.0 and 38.5 +/- 14.2 years, drawn from the urban population aged 20-70 years living in the north of Iran. The demographic and lifestyle data, in particular age, gender, marital status, marriage age, family history of obesity, educational level, occupation, occupational and leisure-time physical activity, duration of exercise per week, parity and the number of children, were collected with a designed questionnaire. Diagnoses of obesity and central obesity were confirmed by the WHO standard recommended method, by determining body mass index (BMI) and waist circumference (WC). A logistic regression model was used to estimate the adjusted odds ratio (OR) and its 95% confidence interval. Over half of the study subjects were at educational levels of high school or higher; 79.4% of the population was married and 35.3% had a family history of parental obesity. The majority of subjects, in particular women, had no or low levels of physical activity. The overall prevalence rates of obesity and overweight were 18.8% and 34.8% respectively. The overall prevalence rate of central obesity was 28.3%. The rate of obesity in women was higher than in men (P < 0.0001). In both genders, particularly in women, the rate of obesity rose with increasing age. There was an inverse relation between the risk of obesity and marriage age, a high level of education (OR = 0.19, P < 0.0001), severe occupational activity (OR = 0.44, P < 0.0001), the level of exercise (in subjects with 3-4 h exercise per week, OR = 0.58, P < 0.001) and leisure-time activity. Marriage
George: Gaussian Process regression
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel
2015-11-01
George is a fast and flexible library, implemented in C++ with Python bindings, for Gaussian Process regression. It is useful for accounting for correlated noise in astronomical datasets, including those for transiting exoplanet discovery and characterization and for stellar population modeling.
Multivariate Regression with Calibration*
Liu, Han; Wang, Lie; Zhao, Tuo
2014-01-01
We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861
Comparing regression methods for the two-stage clonal expansion model of carcinogenesis.
Kaiser, J C; Heidenreich, W F
2004-11-15
In the statistical analysis of cohort data with risk estimation models, both Poisson and individual likelihood regressions are widely used methods of parameter estimation. In this paper, their performance has been tested with the biologically motivated two-stage clonal expansion (TSCE) model of carcinogenesis. To exclude inevitable uncertainties of existing data, cohorts with simple individual exposure history have been created by Monte Carlo simulation. To generate some similar properties of atomic bomb survivors and radon-exposed mine workers, both acute and protracted exposure patterns have been generated. Then the capacity of the two regression methods has been compared to retrieve a priori known model parameters from the simulated cohort data. For simple models with smooth hazard functions, the parameter estimates from both methods come close to their true values. However, for models with strongly discontinuous functions which are generated by the cell mutation process of transformation, the Poisson regression method fails to produce reliable estimates. This behaviour is explained by the construction of class averages during data stratification. Thereby, some indispensable information on the individual exposure history was destroyed. It could not be repaired by countermeasures such as the refinement of Poisson classes or a more adequate choice of Poisson groups. Although this choice might still exist we were unable to discover it. In contrast to this, the individual likelihood regression technique was found to work reliably for all considered versions of the TSCE model. PMID:15490436
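Individual-likelihood Poisson regression of the kind compared above reduces, in the simplest one-covariate case, to maximising the Poisson log likelihood by Newton-Raphson on the per-subject data, with no stratification into classes. A stdlib-Python sketch (illustrative only, far simpler than the TSCE hazard models of the paper; the two-parameter Newton system is solved by hand):

```python
import math

def poisson_newton(x, y, iters=50):
    """Fit log E[y_i] = a + b * x_i by maximising the individual Poisson
    log likelihood with Newton-Raphson (2-parameter case, 2x2 solve)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(a + b * xi) for xi in x]
        # Score vector of the log likelihood.
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Negative Hessian (Fisher information for the Poisson GLM).
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return a, b
```

A grouped (Poisson-regression) analysis would instead replace the individual (x_i, y_i) by class totals, which is where the averaging described in the abstract discards individual exposure histories.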
Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra
NASA Astrophysics Data System (ADS)
Cho, Eun-Hee; Oh, Sei-Qwon
2016-07-01
We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition for A1 to be Poisson simple is obtained, and it is established that the Poisson endomorphisms of A1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.
Practical Session: Logistic Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
An exercise is proposed to illustrate logistic regression. One investigates the different risk factors in the occurrence of coronary heart disease. It was proposed in Chapter 5 of the book by D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science+Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.
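A logistic regression of the kind used in the exercise can be fitted by gradient ascent on the log likelihood. A stdlib-Python sketch with a single risk factor (the toy data below are invented for illustration, not the evans.txt data):

```python
import math

def fit_logistic(xs, ys, lr=0.5, steps=4000):
    """Fit P(y = 1 | x) = 1 / (1 + exp(-(a + b*x))) by gradient ascent
    on the Bernoulli log likelihood (one covariate, one intercept)."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (y - p)        # d loglik / d a
            gb += (y - p) * x    # d loglik / d b
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# Invented data: exposure level x against disease indicator y.
xs = [0, 0, 1, 1, 2, 2]
ys = [0, 1, 0, 1, 1, 1]
a_hat, b_hat = fit_logistic(xs, ys)
```

A positive fitted slope b then corresponds to an odds ratio exp(b) > 1 for the risk factor, which is the quantity interpreted in the exercise.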
Turesson, Ingemar; Velez, Ramon; Kristinsson, Sigurdur Y.; Landgren, Ola
2010-01-01
OBJECTIVE: To define age-adjusted incidence trends in multiple myeloma (MM) in a well-characterized population during a long period, given that some, but not all, studies have reported increasing MM incidence over time and that clinical experience from some centers suggests an increased incidence mainly in younger age groups. PATIENTS AND METHODS: We identified all patients (N=773) with MM diagnosed in Malmö, Sweden, from January 1, 1950, through December 31, 2005. Using census data for the population of Malmö, we calculated age- and sex-specific incidence rates. Incidence rates were also calculated for 10-year birth cohorts. Analyses for trends were performed using the Poisson regression. RESULTS: From 1950 through 2005, the average annual age-adjusted (European standard population) incidence rate remained stable (Poisson regression, P=.07 for men and P=.67 for women). Also, comparisons between 10-year birth cohorts (from 1870-1879 to 1970-1979) failed to detect any increase. Between 1950-1959 and 2000-2005, the median age at diagnosis of MM increased from 70 to 74 years, and the proportion of newly diagnosed patients aged 80 years or older increased from 16% to 31%. CONCLUSION: Our finding of stable MM incidence rates for all age groups during the past 5 decades suggests that recent clinical observations of an increase of MM in the young may reflect an increased referral stream of younger patients with MM, which in turn might be a consequence of improved access to better MM therapies. Importantly, because of the aging population, the proportion of patients with MM aged 80 years or older doubled between 1950-1959 and 2000-2005. PMID:20194150
Electrostatic forces in the Poisson-Boltzmann systems
Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray
2013-01-01
Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models. PMID:24028101
Detection of Gaussian signals in Poisson-modulated interference.
Streit, R L
2000-10-01
Passive broadband detection of target signals by an array of hydrophones in the presence of multiple discrete interferers is analyzed under Gaussian statistics and low signal-to-noise ratio conditions. A nonhomogeneous Poisson-modulated interference process is used to model the ensemble of possible arrival directions of the discrete interferers. Closed-form expressions are derived for the recognition differential of the passive-sonar equation in the presence of Poisson-modulated interference. The interference-compensated recognition differential differs from the classical recognition differential by an additive positive term that depends on the interference-to-noise ratio, the directionality of the Poisson-modulated interference, and the array beam pattern. PMID:11051502
A spectral Poisson solver for kinetic plasma simulation
NASA Astrophysics Data System (ADS)
Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf
2011-10-01
Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized, geometrically simplified version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.
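The spectral idea, expanding both source and potential in an orthogonal basis and solving mode by mode, can be shown in a 1D analogue: with homogeneous Dirichlet conditions the sine modes diagonalise the Laplacian, so each potential coefficient is the source coefficient divided by k². A stdlib-Python sketch (a 1D toy, not the spherical-harmonic solver of the paper):

```python
import math

def spectral_poisson_1d(f, n_modes=32, n_quad=2000):
    """Solve -u''(x) = f(x) on [0, pi] with u(0) = u(pi) = 0 by a
    truncated sine series: if f = sum_k b_k sin(kx), then
    u = sum_k (b_k / k**2) sin(kx)."""
    h = math.pi / n_quad
    xs = [i * h for i in range(n_quad + 1)]
    coeffs = []
    for k in range(1, n_modes + 1):
        # b_k = (2/pi) * integral_0^pi f(x) sin(kx) dx (trapezoidal rule).
        vals = [f(x) * math.sin(k * x) for x in xs]
        bk = (2 / math.pi) * h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        coeffs.append(bk / k ** 2)
    return lambda x: sum(c * math.sin((k + 1) * x) for k, c in enumerate(coeffs))

u = spectral_poisson_1d(lambda x: math.sin(3 * x))
```

For f(x) = sin 3x the exact solution is sin(3x)/9, which the truncated series reproduces to quadrature accuracy; truncating the expansion plays the same role here as truncating the spherical-harmonic expansion does for the MRP solver.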
The Poisson-Boltzmann model for tRNA
Gruziel, Magdalena; Grochowski, Pawel; Trylska, Joanna
2008-01-01
Using tRNA molecule as an example, we evaluate the applicability of the Poisson-Boltzmann model to highly charged systems such as nucleic acids. Particularly, we describe the effect of explicit crystallographic divalent ions and water molecules, ionic strength of the solvent, and the linear approximation to the Poisson-Boltzmann equation on the electrostatic potential and electrostatic free energy. We calculate and compare typical similarity indices and measures, such as Hodgkin index and root mean square deviation. Finally, we introduce a modification to the nonlinear Poisson-Boltzmann equation, which accounts in a simple way for the finite size of mobile ions, by applying a cutoff in the concentration formula for ionic distribution at regions of high electrostatic potentials. We test the influence of this ionic concentration cutoff on the electrostatic properties of tRNA. PMID:18432617
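The concentration cutoff described above can be sketched by clamping the potential that enters the Boltzmann factor, so that mobile-ion concentrations stop growing in regions of very high potential. A stdlib-Python illustration (the values of c0, z, the cutoff and kT/e below are invented placeholders, and the paper's cutoff may be implemented differently):

```python
import math

def ion_concentration(phi, c0=0.1, z=1, phi_cut=0.2, kT_over_e=0.0259):
    """Boltzmann concentration of a mobile ion of valence z in potential
    phi (volts), with a cutoff: the potential entering the exponent is
    clamped at |phi| = phi_cut, so the concentration saturates in highly
    charged regions, mimicking the finite size of the ions."""
    phi_eff = max(min(phi, phi_cut), -phi_cut)   # clamp the potential
    return c0 * math.exp(-z * phi_eff / kT_over_e)
```

Without the clamp, the exponential would predict unphysically high counterion concentrations near the highly charged tRNA backbone; with it, the concentration plateaus once |phi| exceeds the cutoff.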
Blocked Shape Memory Effect in Negative Poisson's Ratio Polymer Metamaterials.
Boba, Katarzyna; Bianchi, Matteo; McCombe, Greg; Gatt, Ruben; Griffin, Anselm C; Richardson, Robert M; Scarpa, Fabrizio; Hamerton, Ian; Grima, Joseph N
2016-08-10
We describe a new class of negative Poisson's ratio (NPR) open cell PU-PE foams produced by blocking the shape memory effect in the polymer. Contrary to classical NPR open cell thermoset and thermoplastic foams that return to their auxetic phase after reheating (and therefore limit their use in technological applications), this new class of cellular solids has a permanent negative Poisson's ratio behavior, generated through multiple shape memory (mSM) treatments that fix the topology of the cell foam. The mSM-NPR foams have Poisson's ratio values similar to those of the auxetic foams prior to their return to the conventional phase, but compressive stress-strain curves similar to those of conventional foams. The results show that by manipulating the shape memory effect in polymer microstructures it is possible to obtain new classes of materials with unusual deformation mechanisms. PMID:27377708
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…
Modern Regression Discontinuity Analysis
ERIC Educational Resources Information Center
Bloom, Howard S.
2012-01-01
This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…
Webcast entitled Statistical Tools for Making Sense of Data, by the National Nutrient Criteria Support Center, N-STEPS (Nutrients-Scientific Technical Exchange Partnership. The section "Correlation and Regression" provides an overview of these two techniques in the context of nut...
Multiple linear regression analysis
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
Mechanisms of neuroblastoma regression
Brodeur, Garrett M.; Bagatell, Rochelle
2014-01-01
Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179
Bayesian ARTMAP for regression.
Sasu, L M; Andonie, R
2013-10-01
Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single-epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA was used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. PMID:23665468
Liu, Wangyu; Wang, Ningling; Jiang, Xiaoyong; Peng, Yujian
2016-07-01
The branching system plays an important role in maintaining the survival of palm trees. Due to the nature of monocots, no additional vascular bundles can be added in the palm tree tissue as it ages. Therefore, the changing of the cross-sectional area in the palm branch creates a graded distribution in the mechanical properties of the tissue. In the present work, this graded distribution in the tissue mechanical properties from sheath to petiole were studied with a multi-scale modeling approach. Then, the entire palm branch was reconstructed and analyzed using finite element methods. The variation of the elastic modulus can lower the level of mechanical stress in the sheath and also allow the branch to have smaller values of pressure on the other branches. Under impact loading, the enhanced frictional dissipation at the surfaces of adjacent branches benefits from the large Poisson's ratio of the sheath tissue. These findings can help to link the wind resistance ability of palm trees to their graded materials distribution in the branching system. PMID:26807774
Estimation of adjusted rate differences using additive negative binomial regression.
Donoghoe, Mark W; Marschner, Ian C
2016-08-15
Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27073156
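The overdispersion that motivates the negative binomial model above can be demonstrated by simulating it as a Poisson-gamma mixture, which yields Var[y] = mu + alpha*mu², strictly greater than the Poisson's Var[y] = mu. A stdlib-Python sketch (an illustration of the mean-variance structure only, not the ECME-type fitting method of the paper):

```python
import math
import random

def nb_counts(mu, alpha, n, rng):
    """Simulate negative binomial counts as a Poisson-gamma mixture:
    y | g ~ Poisson(mu * g) with g ~ Gamma(shape=1/alpha, scale=alpha),
    so E[y] = mu and Var[y] = mu + alpha * mu**2 (overdispersion)."""
    shape = 1.0 / alpha
    out = []
    for _ in range(n):
        g = rng.gammavariate(shape, alpha)   # E[g] = 1, Var[g] = alpha
        lam = mu * g
        # Poisson sampling by inversion (fine for moderate lam).
        k, p, u = 0, math.exp(-lam), rng.random()
        c = p
        while u > c:
            k += 1
            p *= lam / k
            c += p
        out.append(k)
    return out
```

Fitting an identity-link Poisson model to such data would give consistent rate differences but understated standard errors, which is the problem the additive negative binomial model addresses.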
Validation of the Poisson Stochastic Radiative Transfer Model
NASA Technical Reports Server (NTRS)
Zhuravleva, Tatiana; Marshak, Alexander
2004-01-01
A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure, the cloud aspect ratio, is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.
Distributional properties of the three-dimensional Poisson Delaunay cell
Muche, L.
1996-07-01
This paper gives distributional properties of geometrical characteristics of the Delaunay tessellation generated by a stationary Poisson point process in ℝ³. The considerations are based on a well-known formula given by Miles which describes the size and shape of the "typical" three-dimensional Poisson Delaunay cell. The results are the probability density functions for its volume, the area, and the perimeter of one of its faces, the angle spanned in a face by two of its edges, and the length of an edge. These probability density functions are given in integral form. Formulas for higher moments of these characteristics are given explicitly.
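A stationary Poisson point process of the kind generating the Delaunay tessellation is simple to simulate in a bounded window: draw a Poisson count for the window, then place that many i.i.d. uniform points. A stdlib-Python sketch for a cube (illustrative; intensity and window size are arbitrary):

```python
import math
import random

def poisson_points(intensity, side, rng):
    """Sample a homogeneous Poisson point process of the given intensity
    in the cube [0, side]^3: the number of points is Poisson distributed
    with mean intensity * side**3, and given that count the points are
    i.i.d. uniform in the cube."""
    mean = intensity * side ** 3
    # Draw the Poisson count by inversion of the CDF.
    k, p, u = 0, math.exp(-mean), rng.random()
    c = p
    while u > c:
        k += 1
        p *= mean / k
        c += p
    return [(rng.uniform(0, side), rng.uniform(0, side), rng.uniform(0, side))
            for _ in range(k)]
```

Computing the Delaunay tessellation of such a sample (e.g. with a computational-geometry library) gives empirical cell characteristics whose distributions the paper derives analytically.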
A Study of Poisson's Ratio in the Yield Region
NASA Technical Reports Server (NTRS)
Gerard, George; Wildhorn, Sorrel
1952-01-01
In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.
Ridge Regression: A Regression Procedure for Analyzing Correlated Independent Variables.
ERIC Educational Resources Information Center
Rakow, Ernest A.
Ridge regression is presented as an analytic technique to be used when predictor variables in a multiple linear regression situation are highly correlated, a situation which may result in unstable regression coefficients and difficulties in interpretation. Ridge regression avoids the problem of selection of variables that may occur in stepwise…
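The instability under correlated predictors, and the ridge remedy of adding a penalty λ to the diagonal of X'X, can be seen in a two-predictor sketch (stdlib Python with a hand-rolled 2×2 solve; the data below are invented and nearly collinear):

```python
def ridge_2d(x1, x2, y, lam):
    """Ridge estimate for y ~ b1*x1 + b2*x2 (centred data, no intercept):
    solve (X'X + lam*I) b = X'y for two predictors."""
    s11 = sum(a * a for a in x1) + lam
    s22 = sum(a * a for a in x2) + lam
    s12 = sum(a * b for a, b in zip(x1, x2))
    t1 = sum(a * b for a, b in zip(x1, y))
    t2 = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12   # lam > 0 keeps this away from zero
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Nearly collinear predictors: x2 is x1 plus small perturbations.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.01, 1.99, 3.02, 3.98]
y = [a + b for a, b in zip(x1, x2)]
ols = ridge_2d(x1, x2, y, 0.0)    # lam = 0 recovers ordinary least squares
ridge = ridge_2d(x1, x2, y, 1.0)  # lam > 0 shrinks and stabilises
```

With lam = 0 the near-singular X'X makes the coefficients hypersensitive to perturbations of y; the penalty trades a little bias for a much smaller coefficient norm, which is the ridge compromise described above.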
Incidence of Type 1 Diabetes in Sweden Among Individuals Aged 0–34 Years, 1983–2007
Dahlquist, Gisela G.; Nyström, Lennarth; Patterson, Christopher C.
2011-01-01
OBJECTIVE To clarify whether the increase in childhood type 1 diabetes is mirrored by a decrease in older age-groups, resulting in younger age at diagnosis. RESEARCH DESIGN AND METHODS We used data from two prospective research registers, the Swedish Childhood Diabetes Register, which included case subjects aged 0–14.9 years at diagnosis, and the Diabetes in Sweden Study, which included case subjects aged 15–34.9 years at diagnosis, covering birth cohorts between 1948 and 2007. The total database included 20,249 individuals with diabetes diagnosed between 1983 and 2007. Incidence rates over time were analyzed using Poisson regression models. RESULTS The overall yearly incidence rose to a peak of 42.3 per 100,000 person-years in male subjects aged 10–14 years and to a peak of 37.1 per 100,000 person-years in female subjects aged 5–9 years and decreased thereafter. There was a significant increase by calendar year in both sexes in the three age-groups <15 years; however, there were significant decreases in the older age-groups (25- to 29-years and 30- to 34-years age-groups). Poisson regression analyses showed that a cohort effect seemed to dominate over a time-period effect. CONCLUSIONS Twenty-five years of prospective nationwide incidence registration demonstrates a clear shift to younger age at onset rather than a uniform increase in incidence rates across all age-groups. The dominance of cohort effects over period effects suggests that exposures affecting young children may be responsible for the increasing incidence in the younger age-groups. PMID:21680725
Ridge Regression Signal Processing
NASA Technical Reports Server (NTRS)
Kuhl, Mark R.
1990-01-01
The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
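The core ridge idea the abstract builds on, shrinking estimates to stabilize the normal equations when the measurement geometry is nearly collinear, can be sketched in a few lines. This is our own minimal two-parameter illustration, not the recursive ridge estimator or any GPS code from the report:

```python
def ridge_2param(xs, ys, lam):
    """Closed-form ridge estimate for y ~ b0 + b1*x.

    Solves (X^T X + lam*I) beta = X^T y for the 2x2 case by
    Cramer's rule.  lam = 0 recovers ordinary least squares;
    lam > 0 shrinks the estimates, trading a little bias for
    a smaller mean squared error under poor conditioning.
    """
    n = len(xs)
    # Entries of X^T X (design columns [1, x]) plus the ridge penalty.
    a11 = n + lam
    a12 = sum(xs)
    a22 = sum(x * x for x in xs) + lam
    b1 = sum(ys)
    b2 = sum(x * y for x, y in zip(xs, ys))
    det = a11 * a22 - a12 * a12
    beta0 = (b1 * a22 - a12 * b2) / det
    beta1 = (a11 * b2 - a12 * b1) / det
    return beta0, beta1
```

With `lam = 0` the exact line is recovered; increasing `lam` pulls the slope toward zero, which is the biased-but-stable behavior the abstract compares against ordinary least squares.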
Fast Censored Linear Regression
HUANG, YIJIAN
2013-01-01
The weighted log-rank estimating function has become a standard estimation method for the censored linear regression model, or the accelerated failure time model. Although well established statistically, the estimator defined as a consistent root has rather poor computational properties because the estimating function is neither continuous nor, in general, monotone. We propose a computationally efficient estimator through an asymptotics-guided Newton algorithm, in which censored quantile regression methods are tailored to yield an initial consistent estimate and a consistent derivative estimate of the limiting estimating function. We also develop fast interval estimation with a new proposal for sandwich variance estimation. The proposed estimator is asymptotically equivalent to the consistent root estimator and barely distinguishable in samples of practical size. However, computation time is typically reduced by two to three orders of magnitude for point estimation alone. Illustrations with clinical applications are provided. PMID:24347802
NASA Astrophysics Data System (ADS)
Zhang, Ying; Bi, Peng; Hiller, Janet
2008-01-01
This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggest that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
[Regression and revitalization in hypnosis. Doubts and certainties, therapeutic utility].
Granone, F
1981-05-12
The difference between age regression and revivification is pointed out, and the neurophysiological and psychological bases of hypermnesia of the past are discussed. Moreover, the mental, neurological, somatic and visceral symptomatology of revivification, the usual techniques to obtain it, and its therapeutic usefulness are described. Possible artifacts of age regression and methods to avoid them are then presented. PMID:7231772
Some applications of the fractional Poisson probability distribution
Laskin, Nick
2009-11-15
Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers.
On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories
Klimčík, Ctirad
2015-12-15
We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests was conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN and EB models in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. PMID:24269863
Negative Poisson's Ratio in Single-Layer Graphene Ribbons.
Jiang, Jin-Wu; Park, Harold S
2016-04-13
The Poisson's ratio characterizes the resultant strain in the lateral direction for a material under longitudinal deformation. Though negative Poisson's ratios (NPR) are theoretically possible within continuum elasticity, they are most frequently observed in engineered materials and structures, as they are not intrinsic to many materials. In this work, we report NPR in single-layer graphene ribbons, which results from the compressive edge stress induced warping of the edges. The effect is robust, as the NPR is observed for graphene ribbons with widths smaller than about 10 nm, and for tensile strains smaller than about 0.5% with NPR values reaching as large as -1.51. The NPR is explained analytically using an inclined plate model, which is able to predict the Poisson's ratio for graphene sheets of arbitrary size. The inclined plate model demonstrates that the NPR is governed by the interplay between the width (a bulk property), and the warping amplitude of the edge (an edge property), which eventually yields a phase diagram determining the sign of the Poisson's ratio as a function of the graphene geometry. PMID:26986994
Wide-area traffic: The failure of Poisson modeling
Paxson, V.; Floyd, S.
1994-08-01
Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
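The Poisson property the authors test can be made concrete: a homogeneous Poisson process has i.i.d. exponential interarrivals, so counts in disjoint equal intervals are Poisson-distributed and their sample mean and variance should agree. A minimal simulation sketch (the rate, horizon, and function names are illustrative, not from the paper):

```python
import random

def poisson_arrivals(rate, horizon, seed=42):
    """Homogeneous Poisson process on [0, horizon): successive
    interarrival gaps are i.i.d. Exponential(rate)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= horizon:
            return times
        times.append(t)

# Counts in disjoint unit intervals are Poisson(rate), so the
# dispersion index variance/mean should be close to 1.  Bursty
# traffic such as TELNET packets violates exactly this property.
arrivals = poisson_arrivals(rate=5.0, horizon=2000.0)
counts = [0] * 2000
for t in arrivals:
    counts[int(t)] += 1
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
```

Running the same dispersion check on real packet traces is, in essence, how the deviation from Poisson is quantified: bursty arrivals produce a variance far above the mean.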
On covariant Poisson brackets in classical field theory
Forger, Michael; Salles, Mário O.
2015-10-15
How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows one to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.
Vectorized multigrid Poisson solver for the CDC CYBER 205
NASA Technical Reports Server (NTRS)
Barkai, D.; Brandt, M. A.
1984-01-01
The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
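The building block such a solver vectorizes is a relaxation sweep on the discretized Poisson equation. Below is a serial toy sketch of Jacobi relaxation with Dirichlet boundaries; it is our own illustration, not the CYBER 205 code, and full multigrid would additionally recurse on coarser grids to remove low-frequency error cheaply:

```python
def jacobi_poisson(f, u, h, sweeps):
    """Jacobi relaxation for -laplacian(u) = f on a uniform n x n
    grid with spacing h; boundary entries of u are held fixed
    (Dirichlet).  Multigrid applies a few such sweeps as a
    smoother on each grid level."""
    n = len(u)
    for _ in range(sweeps):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # 5-point stencil average plus the source term.
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1]
                                    + h * h * f[i][j])
        u = new
    return u
```

The inner double loop is exactly the part that maps onto long vector operations on a machine like the CYBER 205, since every interior point can be updated independently within a sweep.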
Subsonic Flow for the Multidimensional Euler-Poisson System
NASA Astrophysics Data System (ADS)
Bae, Myoungjean; Duan, Ben; Xie, Chunjing
2016-04-01
We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori C^{1,α} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain C^{1,α} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.
3D soft metamaterials with negative Poisson's ratio.
Babaee, Sahab; Shim, Jongmin; Weaver, James C; Chen, Elizabeth R; Patel, Nikita; Bertoldi, Katia
2013-09-25
Buckling is exploited to design a new class of three-dimensional metamaterials with negative Poisson's ratio. A library of auxetic building blocks is identified and procedures are defined to guide their selection and assembly. The auxetic properties of these materials are demonstrated both through experiments and finite element simulations and exhibit excellent qualitative and quantitative agreement. PMID:23878067
Negative poisson's ratio in single-layer black phosphorus.
Jiang, Jin-Wu; Park, Harold S
2014-01-01
The Poisson's ratio is a fundamental mechanical property that relates the resulting lateral strain to applied axial strain. Although this value can theoretically be negative, it is positive for nearly all materials, though negative values have been observed in so-called auxetic structures. However, nearly all auxetic materials are bulk materials whose microstructure has been specifically engineered to generate a negative Poisson's ratio. Here we report using first-principles calculations the existence of a negative Poisson's ratio in a single-layer, two-dimensional material, black phosphorus. In contrast to engineered bulk auxetics, this behaviour is intrinsic for single-layer black phosphorus, and originates from its puckered structure, where the pucker can be regarded as a re-entrant structure that is comprised of two coupled orthogonal hinges. As a result of this atomic structure, a negative Poisson's ratio is observed in the out-of-plane direction under uniaxial deformation in the direction parallel to the pucker. PMID:25131569
Void-containing materials with tailored Poisson's ratio
NASA Astrophysics Data System (ADS)
Goussev, Olga A.; Richner, Peter; Rozman, Michael G.; Gusev, Andrei A.
2000-10-01
Assuming square, hexagonal, and random packed arrays of nonoverlapping identical parallel cylindrical voids dispersed in an aluminum matrix, we have calculated numerically the concentration dependence of the transverse Poisson's ratios. It was shown that the transverse Poisson's ratio of the hexagonal and random packed arrays approached 1 upon increasing the concentration of voids while the ratio of the square packed array along the principal continuation directions approached 0. Experimental measurements were carried out on rectangular aluminum bricks with identical cylindrical holes drilled in square and hexagonal packed arrays. Experimental results were in good agreement with numerical predictions. We then demonstrated, based on the numerical and experimental results, that by varying the spatial arrangement of the holes and their volume fraction, one can design and manufacture voided materials with a tailored Poisson's ratio between 0 and 1. In practice, those with a high Poisson's ratio, i.e., close to 1, can be used to amplify the lateral responses of the structures while those with a low one, i.e., close to 0, can largely attenuate the lateral responses and can therefore be used in situations where stringent lateral stability is needed.
An Advanced Manipulator For Poisson Series With Numerical Coefficients
NASA Astrophysics Data System (ADS)
Biscani, Francesco; Casotto, S.
2006-06-01
The availability of an efficient and featureful manipulator for Poisson series with numerical coefficients is a standard need for celestial mechanicians and has arisen during our work on the analytical development of the Tide-Generating-Potential (TGP). In the harmonic expansion of the TGP the Poisson series appearing in the theories of motion of the celestial bodies are subjected to a wide set of mathematical operations, ranging from simple additions and multiplications to more sophisticated operations on Legendre polynomials and spherical harmonics with Poisson series as arguments. To perform these operations we have developed an algebraic manipulator, called Piranha, structured as an object-oriented multi-platform C++ library. Piranha handles series with real and complex coefficients, and operates with an arbitrary degree of precision. It supports advanced features such as trigonometric operations and the generation of special functions from Poisson series. Piranha is provided with a proof-of-concept, multi-platform GUI, which serves as a testbed and benchmark for the library. We describe Piranha's architecture and characteristics, what it accomplishes currently and how it will be extended in the future (e.g., to handle series with symbolic coefficients in a consistent fashion with its current design).
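To illustrate the kind of operation such a manipulator automates, here is a dict-based toy (ours, not Piranha's far more sophisticated representation) that multiplies two trigonometric series term by term using the product-to-sum identities, the workhorse of Poisson-series algebra:

```python
from collections import defaultdict

def _norm(k, kind, c):
    """Canonical form of a term: make the first nonzero frequency
    component positive (cos is even, sin is odd under k -> -k),
    and drop sin(0), which vanishes identically."""
    if any(x != 0 for x in k) and next(x for x in k if x != 0) < 0:
        k = tuple(-x for x in k)
        if kind == 's':
            c = -c
    if kind == 's' and all(x == 0 for x in k):
        return None
    return (k, kind), c

def multiply(A, B):
    """Product of two trigonometric series.  A series maps
    (frequency-vector, 'c'|'s') -> coefficient, representing
    sum c*cos(k.theta) + sum c*sin(k.theta)."""
    # Product-to-sum rules: (factor, resulting kind, k_a +/- k_b).
    rules = {('c', 'c'): [(0.5, 'c', '-'), (0.5, 'c', '+')],
             ('s', 's'): [(0.5, 'c', '-'), (-0.5, 'c', '+')],
             ('c', 's'): [(0.5, 's', '+'), (-0.5, 's', '-')],
             ('s', 'c'): [(0.5, 's', '+'), (0.5, 's', '-')]}
    out = defaultdict(float)
    for (ka, ta), ca in A.items():
        for (kb, tb), cb in B.items():
            for fac, kind, sign in rules[(ta, tb)]:
                k = tuple(x + y if sign == '+' else x - y
                          for x, y in zip(ka, kb))
                nrm = _norm(k, kind, fac * ca * cb)
                if nrm:
                    key, c = nrm
                    out[key] += c
    return {k: c for k, c in out.items() if abs(c) > 1e-15}
```

For example, multiplying the one-term series cos(θ) by itself yields the two-term series ½ + ½cos(2θ), which is the identity cos²θ = (1 + cos 2θ)/2.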
Heteroscedastic transformation cure regression models.
Chen, Chyong-Mei; Chen, Chen-Hsin
2016-06-30
Cure models have been applied to analyze clinical trials with cures and age-at-onset studies with nonsusceptibility. Lu and Ying (On semiparametric transformation cure model. Biometrika 2004; 91:331-343. DOI: 10.1093/biomet/91.2.331) developed a general class of semiparametric transformation cure models, which assumes that the failure times of uncured subjects, after an unknown monotone transformation, follow a regression model with homoscedastic residuals. However, it cannot deal with frequently encountered heteroscedasticity, which may result from dispersed ranges of failure time span among uncured subjects' strata. To tackle the phenomenon, this article presents semiparametric heteroscedastic transformation cure models. The cure status and the failure time of an uncured subject are fitted by a logistic regression model and a heteroscedastic transformation model, respectively. Unlike the approach of Lu and Ying, we derive score equations from the full likelihood for estimating the regression parameters in the proposed model. The similar martingale difference function to their proposal is used to estimate the infinite-dimensional transformation function. Our proposed estimating approach is intuitively applicable and can be conveniently extended to other complicated models when the maximization of the likelihood may be too tedious to be implemented. We conduct simulation studies to validate large-sample properties of the proposed estimators and to compare with the approach of Lu and Ying via the relative efficiency. The estimating method and the two relevant goodness-of-fit graphical procedures are illustrated by using breast cancer data and melanoma data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26887342
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression. PMID:18450049
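The estimation step described in this chapter summary reduces to a handful of sums. A minimal sketch (our own, not from the chapter) that fits the least-squares line and computes the Pearson correlation, whose square is the ANOVA ratio of explained to total sum of squares:

```python
import math

def simple_linear_regression(xs, ys):
    """Least-squares fit y ~ a + b*x plus the Pearson correlation r.
    r**2 equals (explained SS)/(total SS), which is the connection
    between regression and ANOVA noted in the chapter."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)          # sum of squares of x
    syy = sum((y - my) ** 2 for y in ys)          # sum of squares of y
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                                 # slope
    a = my - b * mx                               # intercept
    r = sxy / math.sqrt(sxx * syy)                # correlation
    return a, b, r
```

On perfectly linear data the fit reproduces the line exactly and r = 1; real data yields |r| < 1, with r² reporting the fraction of variance the line explains.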
Incremental hierarchical discriminant regression.
Weng, Juyang; Hwang, Wey-Shiuan
2007-03-01
This paper presents incremental hierarchical discriminant regression (IHDR) which incrementally builds a decision tree or regression tree for very high-dimensional regression or decision spaces by an online, real-time learning system. Biologically motivated, it is an approximate computational model for automatic development of associative cortex, with both bottom-up sensory inputs and top-down motor projections. At each internal node of the IHDR tree, information in the output space is used to automatically derive the local subspace spanned by the most discriminating features. Embedded in the tree is a hierarchical probability distribution model used to prune very unlikely cases during the search. The number of parameters in the coarse-to-fine approximation is dynamic and data-driven, enabling the IHDR tree to automatically fit data with unknown distribution shapes (thus, it is difficult to select the number of parameters up front). The IHDR tree dynamically assigns long-term memory to avoid the loss-of-memory problem typical with a global-fitting learning algorithm for neural networks. A major challenge for an incrementally built tree is that the number of samples varies arbitrarily during the construction process. An incrementally updated probability model, called the sample-size-dependent negative-log-likelihood (SDNLL) metric, is used to deal with large sample-size cases, small sample-size cases, and unbalanced sample-size cases, measured among different internal nodes of the IHDR tree. We report experimental results for four types of data: synthetic data to visualize the behavior of the algorithms, large face image data, continuous video stream from robot navigation, and publicly available data sets that use human defined features. PMID:17385628
Steganalysis using logistic regression
NASA Astrophysics Data System (ADS)
Lubenko, Ivans; Ker, Andrew D.
2011-02-01
We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, in three image sets.
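A bare-bones version of the LR classifier advocated here, trained by batch gradient descent on made-up one-dimensional features rather than the 686-dimensional SPAM set (a toy of ours; kernelisation and the steganalysis feature extraction are out of scope):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(data, labels, lr=0.5, epochs=3000):
    """Batch gradient descent on the logistic log-loss.  Unlike a
    margin-only SVM, the fitted model directly outputs the class
    probability sigmoid(w.x + b)."""
    d, n = len(data[0]), len(data)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for x, y in zip(data, labels):
            # err is the gradient of the log-loss w.r.t. the logit.
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for i in range(d):
                gw[i] += err * x[i]
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b
```

The probability output is what distinguishes LR in the comparison above: a detector can report how confident it is that an image carries a payload, not just a hard cover/stego decision.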
Poisson structures for lifts and periodic reductions of integrable lattice equations
NASA Astrophysics Data System (ADS)
Kouloukas, Theodoros E.; Tran, Dinh T.
2015-02-01
We introduce and study suitable Poisson structures for four-dimensional maps derived as lifts and specific periodic reductions of integrable lattice equations. These maps are Poisson with respect to these structures and the corresponding integrals are in involution.
A linear regression solution to the spatial autocorrelation problem
NASA Astrophysics Data System (ADS)
Griffith, Daniel A.
The Moran Coefficient spatial autocorrelation index can be decomposed into orthogonal map pattern components. This decomposition relates it directly to standard linear regression, in which corresponding eigenvectors can be used as predictors. This paper reports comparative results between these linear regressions and their auto-Gaussian counterparts for the following georeferenced data sets: Columbus (Ohio) crime, Ottawa-Hull median family income, Toronto population density, southwest Ohio unemployment, Syracuse pediatric lead poisoning, Glasgow standard mortality rates, and a small remotely sensed image of the High Peak district. This methodology is extended to auto-logistic and auto-Poisson situations, with selected data analyses including percentage of urban population across Puerto Rico, and the frequency of SIDS cases across North Carolina. These data analytic results suggest that this approach to georeferenced data analysis offers considerable promise.
NASA Technical Reports Server (NTRS)
Kuhl, Mark R.
1990-01-01
Current navigation requirements depend on a geometric dilution of precision (GDOP) criterion. As long as the GDOP stays below a specific value, navigation requirements are met. The GDOP will exceed the specified value when the measurement geometry becomes too collinear. A new signal processing technique, called Ridge Regression Processing, can reduce the effects of nearly collinear measurement geometry; thereby reducing the inflation of the measurement errors. It is shown that the Ridge signal processor gives a consistently better mean squared error (MSE) in position than the Ordinary Least Mean Squares (OLS) estimator. The applicability of this technique is currently being investigated to improve the following areas: receiver autonomous integrity monitoring (RAIM), coverage requirements, availability requirements, and precision approaches.
Reference manual for the POISSON/SUPERFISH Group of Codes
Not Available
1987-01-01
The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equation for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (Δx, Δy). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
Correlation between supercooled liquid relaxation and glass Poisson's ratio.
Sun, Qijing; Hu, Lina; Zhou, Chao; Zheng, Haijiao; Yue, Yuanzheng
2015-10-28
We report on a correlation between the supercooled liquid (SL) relaxation and glass Poisson's ratio (v) by comparing the activation energy ratio (r) of the α and the slow β relaxations and the v values for both metallic and nonmetallic glasses. Poisson's ratio v generally increases with an increase in the ratio r and this relation can be described by the empirical function v = 0.5 - A*exp(-B*r), where A and B are constants. This correlation might imply that glass plasticity is associated with the competition between the α and the slow β relaxations in SLs. The underlying physics of this correlation lies in the heredity of the structural heterogeneity from liquid to glass. This work gives insights into both the microscopic mechanism of glass deformation through the SL dynamics and the complex structural evolution during liquid-glass transition. PMID:26520524
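The empirical relation quoted in the abstract is straightforward to evaluate; the constants below are illustrative placeholders, not the fitted values from the paper:

```python
import math

def poisson_ratio(r, A=0.2, B=0.5):
    """Empirical fit v(r) = 0.5 - A*exp(-B*r): Poisson's ratio v
    increases monotonically with the activation-energy ratio r of
    the alpha and slow-beta relaxations, saturating at the
    incompressible limit v = 0.5.  A and B are constants that the
    paper fits per glass family; these defaults are hypothetical."""
    return 0.5 - A * math.exp(-B * r)
```

The functional form encodes the paper's claim: glasses whose α relaxation dominates more strongly over the slow β relaxation (larger r) sit closer to the incompressible limit v = 0.5.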
Image deconvolution under Poisson noise using SURE-LET approach
NASA Astrophysics Data System (ADS)
Xue, Feng; Liu, Jiaqi; Meng, Gang; Yan, Jing; Zhao, Min
2015-10-01
We propose an image deconvolution algorithm for data contaminated by Poisson noise. By minimizing Stein's unbiased risk estimate (SURE), the SURE-LET method was first proposed to deal with Gaussian noise corruption. Our key contribution is to demonstrate that the SURE-LET approach is also applicable to Poisson-noisy images, and we propose an efficient algorithm accordingly. The formulation of SURE requires knowledge of the Gaussian noise variance. We experimentally found a simple and direct link between the noise variance estimated by the median absolute difference (MAD) method and the optimal one that leads to the best deconvolution performance in terms of mean squared error (MSE). Extensive experiments show that this optimal noise variance works satisfactorily for a wide range of natural images.
Quantized Nambu-Poisson manifolds and n-Lie algebras
DeBellis, Joshua; Saemann, Christian; Szabo, Richard J.
2010-12-15
We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of ℝ^n by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.
MODELING PAVEMENT DETERIORATION PROCESSES BY POISSON HIDDEN MARKOV MODELS
NASA Astrophysics Data System (ADS)
Nam, Le Thanh; Kaito, Kiyoyuki; Kobayashi, Kiyoshi; Okizuka, Ryosuke
In pavement management, it is important to estimate lifecycle cost, which is composed of the expenses for repairing local damages, including potholes, and repairing and rehabilitating the surface and base layers of pavements, including overlays. In this study, a model is produced under the assumption that the deterioration process of pavement is a complex one that includes local damages, which occur frequently, and the deterioration of the surface and base layers of pavement, which progresses slowly. The variation in pavement soundness is expressed by the Markov deterioration model and the Poisson hidden Markov deterioration model, in which the frequency of local damage depends on the distribution of pavement soundness, is formulated. In addition, the authors suggest a model estimation method using the Markov Chain Monte Carlo (MCMC) method, and attempt to demonstrate the applicability of the proposed Poisson hidden Markov deterioration model by studying concrete application cases.
Correlation between supercooled liquid relaxation and glass Poisson's ratio
NASA Astrophysics Data System (ADS)
Sun, Qijing; Hu, Lina; Zhou, Chao; Zheng, Haijiao; Yue, Yuanzheng
2015-10-01
We report on a correlation between the supercooled liquid (SL) relaxation and glass Poisson's ratio (v) by comparing the activation energy ratio (r) of the α and the slow β relaxations and the v values for both metallic and nonmetallic glasses. Poisson's ratio v generally increases with an increase in the ratio r and this relation can be described by the empirical function v = 0.5 - A*exp(-B*r), where A and B are constants. This correlation might imply that glass plasticity is associated with the competition between the α and the slow β relaxations in SLs. The underlying physics of this correlation lies in the heredity of the structural heterogeneity from liquid to glass. This work gives insights into both the microscopic mechanism of glass deformation through the SL dynamics and the complex structural evolution during liquid-glass transition.
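The reported empirical relation can be checked numerically. In the sketch below, A and B are placeholder constants chosen only for illustration, not the fitted values from the paper; the point is just that v = 0.5 - A*exp(-B*r) increases monotonically with r and stays below the incompressible limit of 0.5.

```python
import math

def poisson_ratio(r, A=0.5, B=0.2):
    # empirical form v = 0.5 - A*exp(-B*r); A, B are hypothetical here
    return 0.5 - A * math.exp(-B * r)

ratios = [poisson_ratio(r) for r in (1.0, 2.0, 5.0, 10.0)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))  # monotone increase in r
assert all(v < 0.5 for v in ratios)                    # bounded by 0.5
```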
Invariants and labels for Lie-Poisson Systems
Thiffeault, J.L.; Morrison, P.J.
1998-04-01
Reduction is a process that uses symmetry to lower the order of a Hamiltonian system. The new variables in the reduced picture are often not canonical: there are no clear variables representing positions and momenta, and the Poisson bracket obtained is not of the canonical type. Specifically, we give two examples that give rise to brackets of the noncanonical Lie-Poisson form: the rigid body and the two-dimensional ideal fluid. From these simple cases, we then use the semidirect product extension of algebras to describe more complex physical systems. The Casimir invariants in these systems are examined, and some are shown to be linked to the recovery of information about the configuration of the system. We discuss a case in which the extension is not a semidirect product, namely compressible reduced MHD, and find for this case that the Casimir invariants lend partial information about the configuration of the system.
Finite-size effects and percolation properties of Poisson geometries.
Larmier, C; Dumonteil, E; Malvagi, F; Mazzolo, A; Zoia, A
2016-07-01
Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d=3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size. PMID:27575099
Improved Poisson solver for cfa/magnetron simulation
Dombrowski, G.E.
1996-12-31
E_dc, the static field of a device having vane-shaped anodes, has been determined by application of Hockney's method, which in turn uses Buneman's cyclic reduction. This result can be used for both cfa and magnetrons, but does not solve the general space-charge fields. As pointed out by Hockney, the matrix of coupling capacitive factors between the vane-defining mesh points can also be used to solve the Poisson equation for the entire cathode-anode domain. Space-charge fields of electrons between anode electrodes can now be determined. This technique also computes the Ramo function for the entire region. This method has been applied to the magnetron. Extension to the cfa with many different space-charge bunches does not appear to be practicable. Calculations for the type 4J50 magnetron by the various degrees of accuracy in solving the Poisson equation are compared with experimental measurements.
Quantized Nambu-Poisson manifolds and n-Lie algebras
NASA Astrophysics Data System (ADS)
DeBellis, Joshua; Sämann, Christian; Szabo, Richard J.
2010-12-01
We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of R^n by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.
Intrinsic Negative Poisson's Ratio for Single-Layer Graphene.
Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming; Park, Harold S
2016-08-10
Negative Poisson's ratio (NPR) materials have drawn significant interest because the enhanced toughness, shear resistance, and vibration absorption that typically are seen in auxetic materials may enable a range of novel applications. In this work, we report that single-layer graphene exhibits an intrinsic NPR, which is robust and independent of its size and temperature. The NPR arises due to the interplay between two intrinsic deformation pathways (one with positive Poisson's ratio, the other with NPR), which correspond to the bond stretching and angle bending interactions in graphene. We propose an energy-based deformation pathway criteria, which predicts that the pathway with NPR has lower energy and thus becomes the dominant deformation mode when graphene is stretched by a strain above 6%, resulting in the NPR phenomenon. PMID:27408994
Self-Attracting Poisson Clouds in an Expanding Universe
NASA Astrophysics Data System (ADS)
Bertoin, Jean
We consider the following elementary model for clustering by ballistic aggregation in an expanding universe. At the initial time, there is a doubly infinite sequence of particles lying in a one-dimensional universe that is expanding at constant rate. We suppose that each particle p attracts points at a certain rate a(p)/2 depending only on p, and when two particles, say p and q, collide by the effect of attraction, they merge as a single particle p*q. The main purpose of this work is to point out the following remarkable property of Poisson clouds in these dynamics. Under certain technical conditions, if at the initial time the system is distributed according to a spatially stationary Poisson cloud with intensity μ0, then at any time t > 0, the system will again have a Poissonian distribution, now with intensity μt, where the family (μt)t≥0 solves a generalization of Smoluchowski's coagulation equation.
A comparison between simulation and Poisson-Boltzmann fields
NASA Astrophysics Data System (ADS)
Pettitt, B. Montgomery; Valdeavella, C. V.
1999-11-01
The electrostatic potentials from molecular dynamics (MD) trajectories and Poisson-Boltzmann calculations on a tetrapeptide are compared to understand the validity of the resulting free energy surface. The Tuftsin peptide with sequence, Thr-Lys-Pro-Arg, in water is used for the comparison. The results obtained from the analysis of the MD trajectories for the total electrostatic potential at points on a grid using the Ewald technique are compared with the solution to the Poisson-Boltzmann (PB) equation averaged over the same set of configurations. The latter was solved using an optimal set of dielectric constant parameters. Structural averaging of the field over the MD simulation was examined in the context of the PB results. The detailed spatial variation of the electrostatic potential on the molecular surface is not qualitatively reproducible from MD to PB. Implications of using such field calculations and the implied free energies are discussed.
New method for blowup of the Euler-Poisson system
NASA Astrophysics Data System (ADS)
Kwong, Man Kam; Yuen, Manwai
2016-08-01
In this paper, we provide a new method for establishing the blowup of C^2 solutions for the pressureless Euler-Poisson system with attractive forces for R^N (N ≥ 2) with ρ(0, x_0) > 0 and Ω_0^ij(x_0) = (1/2)[∂_i u_j(0, x_0) - ∂_j u_i(0, x_0)] = 0 at some point x_0 ∈ R^N. By applying the generalized Hubble transformation div u(t, x_0(t)) = N a'(t)/a(t) to a reduced Riccati differential inequality derived from the system, we simplify the inequality into the Emden equation a''(t) = -λ/a(t)^(N-1), a(0) = 1, a'(0) = div u(0, x_0)/N. Known results on its blowup set allow us to easily obtain the blowup conditions of the Euler-Poisson system.
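The finite-time blowup behind this reduction can be illustrated numerically. The sketch below is my own (λ, N, and the initial slope are illustrative, not conditions from the paper): integrating the Emden equation a''(t) = -λ/a(t)^(N-1) with a(0) = 1 shows the scale factor a(t) reaching zero in finite time, the collapse that signals blowup of the Euler-Poisson solution.

```python
def emden_hit_zero(lam=1.0, N=3, adot0=0.0, dt=1e-4, t_max=10.0):
    """Semi-implicit Euler integration of a'' = -lam / a**(N-1), a(0) = 1.

    Returns the first time at which a(t) drops to (nearly) zero, or None
    if no collapse occurs before t_max.
    """
    a, adot, t = 1.0, adot0, 0.0
    while t < t_max:
        if a <= 1e-3:            # a(t) -> 0: finite-time collapse reached
            return t
        adot += dt * (-lam / a ** (N - 1))
        a += dt * adot
        t += dt
    return None

# with lam = 1, N = 3, a'(0) = 0 this is gravitational free fall;
# the analytic collapse time is pi / (2*sqrt(2)) ~ 1.11
t_blow = emden_hit_zero()
assert t_blow is not None and 1.0 < t_blow < 1.25
```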
Filtering with Marked Point Process Observations via Poisson Chaos Expansion
Sun Wei; Zeng Yong; Zhang Shu
2013-06-15
We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former to off-line computation.
Tensorial Basis Spline Collocation Method for Poisson's Equation
NASA Astrophysics Data System (ADS)
Plagne, Laurent; Berthou, Jean-Yves
2000-01-01
This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h^4) and O(h^6) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: As an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256^3 non-uniform 3D Cartesian mesh by using 128 T3E-750 processors. This represents 215 Mflops per processor.
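Convergence orders like those emphasized above are commonly verified with the standard two-mesh estimate: if the error behaves as e ≈ C·h^p, then p can be recovered from errors on two mesh spacings. The error values below are synthetic, chosen to mimic fourth-order behaviour; they are not measurements from the paper.

```python
import math

def observed_order(e1, h1, e2, h2):
    # from e = C*h**p on two meshes: p = log(e1/e2) / log(h1/h2)
    return math.log(e1 / e2) / math.log(h1 / h2)

# synthetic errors: halving h divides a fourth-order error by ~16
p = observed_order(1.6e-3, 0.1, 1.0e-4, 0.05)
assert abs(p - 4.0) < 1e-6
```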
Insulin resistance: regression and clustering.
Yoon, Sangho; Assimes, Themistocles L; Quertermous, Thomas; Hsiao, Chin-Fu; Chuang, Lee-Ming; Hwu, Chii-Min; Rajaratnam, Bala; Olshen, Richard A
2014-01-01
In this paper we try to define insulin resistance (IR) precisely for a group of Chinese women. Our definition deliberately does not depend upon body mass index (BMI) or age, although in other studies, with particular random effects models quite different from models used here, BMI accounts for a large part of the variability in IR. We accomplish our goal through application of Gauss mixture vector quantization (GMVQ), a technique for clustering that was developed for application to lossy data compression. Defining data come from measurements that play major roles in medical practice. A precise statement of what the data are is in Section 1. Their family structures are described in detail. They concern levels of lipids and the results of an oral glucose tolerance test (OGTT). We apply GMVQ to residuals obtained from regressions of outcomes of an OGTT and lipids on functions of age and BMI that are inferred from the data. A bootstrap procedure developed for our family data supplemented by insights from other approaches leads us to believe that two clusters are appropriate for defining IR precisely. One cluster consists of women who are IR, and the other of women who seem not to be. Genes and other features are used to predict cluster membership. We argue that prediction with "main effects" is not satisfactory, but prediction that includes interactions may be. PMID:24887437
Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code
Bowman, Kimiko o; Shenton, LR
2006-01-01
The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using Maple code. Questions of the existence of solutions and Karl Pearson's study are mentioned, along with the problem of a valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.
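For concreteness, a maximum-likelihood fit of the kind of model studied here, a two-component Poisson mixture, can be sketched with the EM algorithm. This is an illustrative stdlib implementation, not the authors' Maple code, and the mixture parameters below are arbitrary demonstration values.

```python
import math, random

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def sample_poisson(lam, rng):
    # Knuth's multiplication method for Poisson sampling
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def fit_poisson_mixture(data, lam1, lam2, w=0.5, iters=200):
    """EM for the mixture w*Poisson(lam1) + (1-w)*Poisson(lam2)."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = [w * poisson_pmf(k, lam1) /
             (w * poisson_pmf(k, lam1) + (1 - w) * poisson_pmf(k, lam2))
             for k in data]
        # M-step: update weight and component means
        w = sum(r) / len(data)
        lam1 = sum(ri * k for ri, k in zip(r, data)) / sum(r)
        lam2 = (sum((1 - ri) * k for ri, k in zip(r, data))
                / sum(1 - ri for ri in r))
    return w, lam1, lam2

rng = random.Random(42)
data = [sample_poisson(2.0 if rng.random() < 0.4 else 12.0, rng)
        for _ in range(800)]
w, lam1, lam2 = fit_poisson_mixture(data, 1.0, 15.0)
# the fit should land near the generating values (0.4, 2.0, 12.0)
```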
Events in time: Basic analysis of Poisson data
Engelhardt, M.E.
1994-09-01
The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model for cases where the rate of occurrence varies randomly. Examples and SAS programs are given.
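The basic quantities covered by such a report can be sketched in a few lines (a stdlib-only illustration, not the report's SAS programs): the point estimate of the occurrence rate, and an exact confidence interval obtained by inverting Poisson tail probabilities with bisection.

```python
import math

def poisson_cdf(k, mu):
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def bisect(f, lo, hi, tol=1e-8):
    # root of a decreasing f with f(lo) > 0 > f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def poisson_rate_ci(n_events, time, alpha=0.10):
    """Point estimate and exact (Garwood-style) CI for a Poisson rate.

    The search bound of 100 for the mean is adequate for small counts.
    """
    rate = n_events / time
    # lower limit: P(X >= n; mu) = alpha/2, i.e. 1 - cdf(n-1) = alpha/2
    lo = 0.0 if n_events == 0 else bisect(
        lambda mu: alpha / 2 - (1 - poisson_cdf(n_events - 1, mu)), 0.0, 100.0)
    # upper limit: P(X <= n; mu) = alpha/2
    hi = bisect(lambda mu: poisson_cdf(n_events, mu) - alpha / 2, 0.0, 100.0)
    return rate, lo / time, hi / time

# 2 events in 1 unit of time: 90% interval matches the classical
# chi-square values (0.3554, 6.296)
rate, lo, hi = poisson_rate_ci(2, 1.0)
```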
Studying Resist Stochastics with the Multivariate Poisson Propagation Model
Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; Bhattarai, Suchit; Neureuther, Andrew
2014-01-01
Progress in the ultimate performance of extreme ultraviolet resist has arguably decelerated in recent years suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resist both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.
On third Poisson structure of KdV equation
Gorsky, A.; Marshakov, A.; Orlov, A.
1995-12-01
The third Poisson structure of the KdV equation in terms of canonical 'free fields' and the reduced WZNW model is discussed. We prove that it is "diagonalized" in the Lagrange variables which were used before in the formulation of 2d gravity. We propose a quantum path integral for the KdV equation based on this representation.
Poisson reduction for nonholonomic mechanical systems with symmetry
NASA Astrophysics Data System (ADS)
Wang Sang Koon; Marsden, Jerrold E.
1998-10-01
This paper continues the work of Koon and Marsden [10] that began the comparison of the Hamiltonian and Lagrangian formulations of nonholonomic systems. Because of the necessary replacement of conservation laws with the momentum equation, it is natural to let the value of momentum be a variable and for this reason it is natural to take a Poisson viewpoint. Some of this theory has been started in van der Schaft and Maschke [24]. We build on their work, further develop the theory of nonholonomic Poisson reduction, and tie this theory to other work in the area. We use this reduction procedure to organize nonholonomic dynamics into a reconstruction equation, a nonholonomic momentum equation and the reduced Lagrange-d'Alembert equations in Hamiltonian form. We also show that these equations are equivalent to those given by the Lagrangian reduction methods of Bloch, Krishnaprasad, Marsden and Murray [4]. Because of the results of Koon and Marsden [10], this is also equivalent to the results of Bates and Śniatycki [2], obtained by nonholonomic symplectic reduction. Two interesting complications make this effort especially interesting. First of all, as we have mentioned, symmetry need not lead to conservation laws but rather to a momentum equation. Second, the natural Poisson bracket fails to satisfy the Jacobi identity. In fact, the so-called Jacobiizer (the cyclic sum that vanishes when the Jacobi identity holds), or equivalently, the Schouten bracket, is an interesting expression involving the curvature of the underlying distribution describing the nonholonomic constraints. The Poisson reduction results in this paper are important for the future development of the stability theory for nonholonomic mechanical systems with symmetry, as begun by Zenkov, Bloch and Marsden [25]. In particular, they should be useful for the development of the powerful block diagonalization properties of the energy-momentum method developed by Simo, Lewis and Marsden [23].
Wang, Yiyi; Kockelman, Kara M
2013-11-01
This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. PMID:24036167
Numerical methods for the Poisson-Fermi equation in electrolytes
NASA Astrophysics Data System (ADS)
Liu, Jinn-Liang
2013-08-01
The Poisson-Fermi equation proposed by Bazant, Storey, and Kornyshev [Phys. Rev. Lett. 106 (2011) 046102] for ionic liquids is applied to and numerically studied for electrolytes and biological ion channels in three-dimensional space. This is a fourth-order nonlinear PDE that deals with both steric and correlation effects of all ions and solvent molecules involved in a model system. The Fermi distribution follows from classical lattice models of configurational entropy of finite size ions and solvent molecules and hence prevents the long and outstanding problem of unphysical divergence predicted by the Gouy-Chapman model at large potentials due to the Boltzmann distribution of point charges. The equation reduces to Poisson-Boltzmann if the correlation length vanishes. A simplified matched interface and boundary method exhibiting optimal convergence is first developed for this equation by using a gramicidin A channel model that illustrates challenging issues associated with the geometric singularities of molecular surfaces of channel proteins in realistic 3D simulations. Various numerical methods then follow to tackle a range of numerical problems concerning the fourth-order term, nonlinearity, stability, efficiency, and effectiveness. The most significant feature of the Poisson-Fermi equation, namely, its inclusion of steric and correlation effects, is demonstrated by showing good agreement with Monte Carlo simulation data for a charged wall model and an L type calcium channel model.
Blind beam-hardening correction from Poisson measurements
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers
Wang, Jun; Luo, Ray
2009-01-01
CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the number of grid points. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at a very similar rate. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271
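As a minimal illustration of the kind of Krylov iteration being benchmarked, the sketch below runs a plain, unpreconditioned conjugate gradient on the 1D finite-difference Poisson problem -u'' = f with homogeneous Dirichlet conditions. This is my own toy model problem, not one of the five solvers tested (MICCG adds an incomplete-Cholesky preconditioner on the 3D Poisson-Boltzmann system).

```python
import math

def apply_laplacian(u, h):
    # matrix-vector product for the 1D operator -u'' with u = 0 outside
    n = len(u)
    return [(2*u[i] - (u[i-1] if i > 0 else 0.0)
                    - (u[i+1] if i < n-1 else 0.0)) / h**2 for i in range(n)]

def cg_poisson_1d(f, h, tol=1e-10, max_iter=1000):
    n = len(f)
    u = [0.0] * n
    r = f[:]                     # residual b - A u with u = 0
    p = r[:]
    rs = sum(x*x for x in r)
    for _ in range(max_iter):
        Ap = apply_laplacian(p, h)
        alpha = rs / sum(pi*ai for pi, ai in zip(p, Ap))
        u = [ui + alpha*pi for ui, pi in zip(u, p)]
        r = [ri - alpha*ai for ri, ai in zip(r, Ap)]
        rs_new = sum(x*x for x in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [ri + (rs_new/rs)*pi for ri, pi in zip(r, p)]
        rs = rs_new
    return u

# manufactured solution u(x) = sin(pi*x), so f = pi^2 * sin(pi*x)
n = 63; h = 1.0 / (n + 1)
xs = [(i + 1) * h for i in range(n)]
f = [math.pi**2 * math.sin(math.pi * x) for x in xs]
u = cg_poisson_1d(f, h)
err = max(abs(ui - math.sin(math.pi * x)) for ui, x in zip(u, xs))
# remaining error is the second-order discretization error of the stencil
assert err < 1e-3
```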
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, low memory consumption, and breadth of application, enabling real-time image processing.
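The gradient-domain step that such solvers accelerate can be sketched on the CPU with plain Gauss-Seidel iteration: paste a source patch into a target image by solving the discrete Poisson equation with the source's interior Laplacian as guidance and the target's values as boundary. This is a toy sequential version for intuition, not the proposed MDGS GPU solver.

```python
def poisson_blend(target, source, top, left, iters=2000):
    """Seamless cloning on lists of lists of floats (grayscale).

    Inside the pasted region, iterate toward the solution of
    laplacian(out) = laplacian(source), with out fixed to the target
    outside the region (Gauss-Seidel sweeps, updating in place).
    """
    h, w = len(source), len(source[0])
    out = [row[:] for row in target]
    for _ in range(iters):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                # source Laplacian acts as the guidance field's divergence
                lap = (4*source[i][j] - source[i-1][j] - source[i+1][j]
                       - source[i][j-1] - source[i][j+1])
                out[top+i][left+j] = (out[top+i-1][left+j]
                                      + out[top+i+1][left+j]
                                      + out[top+i][left+j-1]
                                      + out[top+i][left+j+1] + lap) / 4.0
    return out

# toy 8x8 target of constant intensity 5, 4x4 source with quadratic profile
target = [[5.0] * 8 for _ in range(8)]
source = [[float(i * i) for _ in range(4)] for i in range(4)]
out = poisson_blend(target, source, top=2, left=2)
# the blended interior settles where the target boundary (5) meets the
# source's curvature; for this symmetric case it converges to 4
assert abs(out[3][3] - 4.0) < 1e-6
```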
A generalized Poisson solver for first-principles device simulations
NASA Astrophysics Data System (ADS)
Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost
2016-01-01
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
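The Laplace-preconditioned stationary iteration described above can be sketched in 1D (my own minimal version with an illustrative dielectric profile; the actual solver is plane-wave based and three-dimensional): at each step, the residual of the generalized Poisson operator is corrected by an exact solve with the standard Laplacian, done here with the Thomas tridiagonal algorithm.

```python
def thomas_laplacian_solve(b, h):
    """Solve -u'' = b with zero Dirichlet ends (constant-coefficient Laplacian)."""
    n = len(b)
    beta, off = 2.0 / h**2, -1.0 / h**2
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = off / beta, b[0] / beta
    for i in range(1, n):
        m = beta - off * c[i-1]
        c[i] = off / m
        d[i] = (b[i] - off * d[i-1]) / m
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i+1]
    return u

def apply_generalized(u, eps, h):
    # A u = -d/dx(eps du/dx) in flux form; eps holds n+1 face values
    n = len(u)
    out = []
    for i in range(n):
        ul = u[i-1] if i > 0 else 0.0
        ur = u[i+1] if i < n-1 else 0.0
        out.append((-eps[i+1] * (ur - u[i]) + eps[i] * (u[i] - ul)) / h**2)
    return out

def solve_generalized_poisson(rho, eps, h, iters=400):
    # stationary iteration: u <- u + L^{-1} (rho - A u), L = plain Laplacian;
    # converges when eps stays within a modest range around 1
    u = [0.0] * len(rho)
    for _ in range(iters):
        r = [bi - ai for bi, ai in zip(rho, apply_generalized(u, eps, h))]
        du = thomas_laplacian_solve(r, h)
        u = [ui + dui for ui, dui in zip(u, du)]
    return u

# illustrative smooth dielectric eps(x) in [1, 1.4] and unit charge density
n = 31; h = 1.0 / (n + 1)
eps = [1.0 + 0.4 * (i + 0.5) * h for i in range(n + 1)]
rho = [1.0] * n
u = solve_generalized_poisson(rho, eps, h)
res = [bi - ai for bi, ai in zip(rho, apply_generalized(u, eps, h))]
assert max(abs(r) for r in res) < 1e-8
```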
Poisson-like spiking in circuits with probabilistic synapses.
Moreno-Bote, Rubén
2014-07-01
Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
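The Fano factor statistic used to quantify Poisson-like variability is easy to illustrate directly. The Bernoulli-release count below is a toy stand-in for probabilistic synaptic transmission, not the paper's network simulation: a pure binomial count has Fano factor 1 - p, below the Poisson value of 1, and the paper's point is that recurrent network dynamics amplify such synaptic noise up to Poisson-like levels.

```python
import random

def fano_factor(counts):
    # variance-to-mean ratio of spike (or release) counts across trials
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    return v / m

random.seed(1)
p_release = 0.2     # hypothetical release probability per presynaptic spike
n_pre = 100         # presynaptic spikes per trial
counts = [sum(random.random() < p_release for _ in range(n_pre))
          for _ in range(10000)]
ff = fano_factor(counts)
# binomial release counts: Fano factor = 1 - p, i.e. sub-Poisson
assert abs(ff - (1 - p_release)) < 0.05
```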
Improved central confidence intervals for the ratio of Poisson means
NASA Astrophysics Data System (ADS)
Cousins, R. D.
The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
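The standard construction being improved on can be reproduced with stdlib tools: conditional on the total count n1 + n2, n1 is binomial with success probability p = ρ/(1 + ρ), so a Clopper-Pearson interval for p maps to an interval for the ratio of means ρ = λ1/λ2. The sketch below is of the standard method only, not the paper's new intervals; it recovers the quoted (0.108, 9.245) for counts of 2 and 2 at 90% CL.

```python
import math

def binom_tail_ge(x, n, p):
    # P(X >= x) for X ~ Binomial(n, p); increasing in p
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def solve_increasing(f, lo=1e-12, hi=1 - 1e-12, tol=1e-10):
    # bisection for a root of a monotone increasing f on (0, 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def standard_ratio_ci(n1, n2, cl=0.90):
    """Standard conditional CI for lam1/lam2 (assumes n1 >= 1 and n2 >= 1)."""
    a = 1 - cl
    n = n1 + n2
    # Clopper-Pearson limits for p = lam1/(lam1 + lam2)
    p_lo = solve_increasing(lambda p: binom_tail_ge(n1, n, p) - a / 2)
    p_hi = solve_increasing(lambda p: a / 2 - (1 - binom_tail_ge(n1 + 1, n, p)))
    # map back through rho = p / (1 - p)
    return p_lo / (1 - p_lo), p_hi / (1 - p_hi)

lo, hi = standard_ratio_ci(2, 2)   # ~ (0.108, 9.245), as quoted above
```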
Novel negative Poisson's ratio behavior induced by an elastic instability
NASA Astrophysics Data System (ADS)
Bertoldi, Katia; Reis, Pedro; Willshaw, Stephen; Mullin, Tom
2010-03-01
When materials are compressed along a particular axis they are most commonly observed to expand in directions orthogonal to the applied load. The property that characterizes this behavior is Poisson's ratio, which is defined as the ratio between the negative transverse and longitudinal strains. Materials with a negative Poisson's ratio will contract in the transverse direction when compressed, and demonstration of practical examples is relatively recent. A significant challenge in the fabrication of auxetic materials is that it usually involves embedding structures with intricate geometries within a host matrix. As such, the manufacturing process has been a bottleneck in the practical development towards applications. Here we exploit elastic instabilities to create novel effects within materials with periodic microstructure, and we show that they may lead to negative Poisson's ratio behavior for the 2D periodic structures, i.e. it only occurs under compression. The uncomplicated manufacturing process of the samples, together with the robustness of the observed phenomena, suggests that this may form the basis of a practical method for constructing planar auxetic materials over a wide range of length-scales.
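The sign convention in the definition above is easy to check numerically. The numbers below are hypothetical, chosen only to show that when a compressed sample also contracts transversely, the ratio comes out negative:

```python
# Poisson's ratio from measured strains: nu = -eps_transverse / eps_longitudinal.
def poissons_ratio(eps_transverse, eps_longitudinal):
    return -eps_transverse / eps_longitudinal

# Hypothetical auxetic sample: 1% axial compression and, unusually,
# 0.4% transverse contraction as well -> both strains negative -> nu < 0.
nu = poissons_ratio(-0.004, -0.010)
```

An ordinary material under the same axial compression would expand transversely (eps_transverse > 0), giving the usual positive ratio.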
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy. PMID:20840902
Recursive Algorithm For Linear Regression
NASA Technical Reports Server (NTRS)
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
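The idea of updating a regression fit recursively rather than re-solving from scratch can be sketched with the standard sample-recursive least-squares update (the NTRS abstract concerns a recursion over model *order*, which is related but not shown here; all names below are illustrative):

```python
# Recursive least squares for y = w0 + w1*x: each new sample updates the
# coefficient estimate and an inverse-covariance matrix P in O(1) work,
# instead of re-solving the normal equations.
def rls(samples, p0=1e6):
    w = [0.0, 0.0]
    P = [[p0, 0.0], [0.0, p0]]          # large p0 ~ uninformative start
    for x, y in samples:
        phi = [1.0, x]                  # regressor vector [1, x]
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
                P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = 1.0 + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        k = [Pphi[0]/denom, Pphi[1]/denom]           # gain vector
        err = y - (w[0]*phi[0] + w[1]*phi[1])        # prediction error
        w = [w[0] + k[0]*err, w[1] + k[1]*err]
        P = [[P[i][j] - k[i]*Pphi[j] for j in range(2)] for i in range(2)]
    return w

data = [(x, 1.0 + 2.0 * x) for x in (0.0, 1.0, 2.0, 3.0)]
w = rls(data)   # recovers intercept ~1 and slope ~2
```

After processing noiseless data from y = 1 + 2x, the recursion agrees with the batch least-squares solution up to the tiny regularization implied by the finite starting `p0`.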
NASA Astrophysics Data System (ADS)
Chen, Luzhuo; Liu, Changhong; Wang, Jiaping; Zhang, Wei; Hu, Chunhua; Fan, Shoushan
2009-06-01
Auxetic materials with large negative Poisson's ratios are fabricated by highly oriented carbon nanotube structures. The Poisson's ratio can be obtained down to -0.50. Furthermore, negative Poisson's ratios can be maintained in the carbon nanotube/polymer composites when the nanotubes are embedded, while the composites show much better mechanical properties including larger strain-to-failure (˜22%) compared to the pristine nanotube thin film (˜3%). A theoretical model is developed to predict the Poisson's ratios. It indicates that the large negative Poisson's ratios are caused by the realignment of curved nanotubes during stretching and the theoretical predictions agree well with the experimental results.
Multinomial logistic regression ensembles.
Lee, Kyewon; Ahn, Hongshik; Moon, Hojin; Kodell, Ralph L; Chen, James J
2013-05-01
This article proposes a method for multiclass classification problems using ensembles of multinomial logistic regression models. A multinomial logit model is used as a base classifier in ensembles from random partitions of predictors. The multinomial logit model can be applied to each mutually exclusive subset of the feature space without variable selection. By combining multiple models the proposed method can handle a huge database without a constraint needed for analyzing high-dimensional data, and the random partition can improve the prediction accuracy by reducing the correlation among base classifiers. The proposed method is implemented using R, and the performance including overall prediction accuracy, sensitivity, and specificity for each category is evaluated on two real data sets and simulation data sets. To investigate the quality of prediction in terms of sensitivity and specificity, the area under the receiver operating characteristic (ROC) curve (AUC) is also examined. The performance of the proposed model is compared to a single multinomial logit model and it shows a substantial improvement in overall prediction accuracy. The proposed method is also compared with other classification methods such as the random forest, support vector machines, and random multinomial logit model. PMID:23611203
Bayesian Spatial Quantile Regression
Reich, Brian J.; Fuentes, Montserrat; Dunson, David B.
2013-01-01
Tropospheric ozone is one of the six criteria pollutants regulated by the United States Environmental Protection Agency under the Clean Air Act and has been linked with several adverse health effects, including mortality. Due to the strong dependence on weather conditions, ozone may be sensitive to climate change and there is great interest in studying the potential effect of climate change on ozone, and how this change may affect public health. In this paper we develop a Bayesian spatial model to predict ozone under different meteorological conditions, and use this model to study spatial and temporal trends and to forecast ozone concentrations under different climate scenarios. We develop a spatial quantile regression model that does not assume normality and allows the covariates to affect the entire conditional distribution, rather than just the mean. The conditional distribution is allowed to vary from site-to-site and is smoothed with a spatial prior. For extremely large datasets our model is computationally infeasible, and we develop an approximate method. We apply the approximate version of our model to summer ozone from 1997–2005 in the Eastern U.S., and use deterministic climate models to project ozone under future climate conditions. Our analysis suggests that holding all other factors fixed, an increase in daily average temperature will lead to the largest increase in ozone in the Industrial Midwest and Northeast. PMID:23459794
Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun
2016-07-01
In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. PMID:26861909
Counting people with low-level features and Bayesian regression.
Chan, Antoni B; Vasconcelos, Nuno
2012-04-01
An approach to the problem of estimating the size of inhomogeneous crowds, which are composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking is proposed. Instead, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic-texture motion model. A set of holistic low-level features is extracted from each segmented region, and a function that maps features into estimates of the number of people per segment is learned with Bayesian regression. Two Bayesian regression models are examined. The first is a combination of Gaussian process regression with a compound kernel, which accounts for both the global and local trends of the count mapping but is limited by the real-valued outputs that do not match the discrete counts. We address this limitation with a second model, which is based on a Bayesian treatment of Poisson regression that introduces a prior distribution on the linear weights of the model. Since exact inference is analytically intractable, a closed-form approximation is derived that is computationally efficient and kernelizable, enabling the representation of nonlinear functions. An approximate marginal likelihood is also derived for kernel hyperparameter learning. The two regression-based crowd counting methods are evaluated on a large pedestrian data set, containing very distinct camera views, pedestrian traffic, and outliers, such as bikes or skateboarders. Experimental results show that regression-based counts are accurate regardless of the crowd size, outperforming the count estimates produced by state-of-the-art pedestrian detectors. Results on 2 h of video demonstrate the efficiency and robustness of the regression-based crowd size estimation over long periods of time. PMID:22020684
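The Bayesian Poisson regression component can be illustrated with a stripped-down MAP fit: a Gaussian prior on the linear weights becomes a ridge penalty on the Poisson log-likelihood. Everything below (the toy data, the step size, the plain gradient-ascent optimizer) is illustrative; the paper instead derives a closed-form approximate posterior and a kernelized version.

```python
# MAP fit of Poisson regression, log E[y] = w0 + w1*x, with a Gaussian
# prior on the weights (equivalently a ridge penalty with precision lam).
from math import exp

def fit_poisson_map(xs, ys, lam=1e-6, lr=0.01, iters=20000):
    w0, w1 = 0.0, 0.0
    for _ in range(iters):
        g0 = -lam * w0                 # gradient of the log-prior
        g1 = -lam * w1
        for x, y in zip(xs, ys):
            mu = exp(w0 + w1 * x)      # model mean
            g0 += y - mu               # score for the intercept
            g1 += (y - mu) * x         # score for the slope
        w0 += lr * g0                  # gradient ascent step
        w1 += lr * g1
    return w0, w1

xs = [0.0, 1.0, 2.0, 3.0]
ys = [exp(0.2 + 0.5 * x) for x in xs]   # noiseless rates from w = (0.2, 0.5)
w0, w1 = fit_poisson_map(xs, ys)
```

With noiseless rates the score equations are satisfied at the generating weights, so the fit recovers (0.2, 0.5) up to the negligible shift from the weak prior.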
Linear regression in astronomy. I
NASA Technical Reports Server (NTRS)
Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh
1990-01-01
Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
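Three of the five lines compared in this paper are easy to compute directly: the OLS(Y|X) slope, the OLS(X|Y) slope expressed in the x-y plane, and the OLS bisector built from them via the formula of Isobe et al. The small data set below is invented for illustration:

```python
# OLS(Y|X), OLS(X|Y) (as a slope in y = a + b*x form), and the OLS
# bisector recommended when neither variable is privileged.
def ols_slopes(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    b1 = sxy / sxx                     # OLS regression of Y on X
    b2 = syy / sxy                     # OLS regression of X on Y
    # bisector of the two OLS lines (Isobe et al. 1990):
    b3 = (b1 * b2 - 1.0 + ((1 + b1**2) * (1 + b2**2)) ** 0.5) / (b1 + b2)
    return b1, b2, b3

xs = [0, 1, 2, 3, 4]
ys = [0.1, 0.9, 2.2, 2.9, 4.1]         # scattered points near y = x
b1, b2, b3 = ols_slopes(xs, ys)
```

For positively correlated data b1 ≤ b3 ≤ b2, since OLS(Y|X) is flattened and OLS(X|Y) steepened by the scatter; the bisector sits symmetrically between them.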
Using regression models to determine the poroelastic properties of cartilage.
Chung, Chen-Yuan; Mansour, Joseph M
2013-07-26
The feasibility of determining biphasic material properties using regression models was investigated. A transversely isotropic poroelastic finite element model of stress relaxation was developed and validated against known results. This model was then used to simulate load intensity for a wide range of material properties. Linear regression equations for load intensity as a function of the five independent material properties were then developed for nine time points (131, 205, 304, 390, 500, 619, 700, 800, and 1000 s) during relaxation. These equations illustrate the effect of individual material property on the stress in the time history. The equations at the first four time points, as well as one at a later time (five equations) could be solved for the five unknown material properties given computed values of the load intensity. Results showed that four of the five material properties could be estimated from the regression equations to within 9% of the values used in simulation if time points up to 1000 s are included in the set of equations. However, reasonable estimates of the out of plane Poisson's ratio could not be found. Although all regression equations depended on permeability, suggesting that true equilibrium was not realized at 1000 s of simulation, it was possible to estimate material properties to within 10% of the expected values using equations that included data up to 800 s. This suggests that credible estimates of most material properties can be obtained from tests that are not run to equilibrium, which is typically several thousand seconds. PMID:23796400
An Implementation of Bayesian Adaptive Regression Splines (BARS) in C with S and R Wrappers.
Wallstrom, Garrick; Liebner, Jeffrey; Kass, Robert E
2008-06-01
BARS (DiMatteo, Genovese, and Kass 2001) uses the powerful reversible-jump MCMC engine to perform spline-based generalized nonparametric regression. It has been shown to work well in terms of having small mean-squared error in many examples (smaller than known competitors), as well as producing visually-appealing fits that are smooth (filtering out high-frequency noise) while adapting to sudden changes (retaining high-frequency signal). However, BARS is computationally intensive. The original implementation in S was too slow to be practical in certain situations, and was found to handle some data sets incorrectly. We have implemented BARS in C for the normal and Poisson cases, the latter being important in neurophysiological and other point-process applications. The C implementation includes all needed subroutines for fitting Poisson regression, manipulating B-splines (using code created by Bates and Venables), and finding starting values for Poisson regression (using code for density estimation created by Kooperberg). The code utilizes only freely-available external libraries (LAPACK and BLAS) and is otherwise self-contained. We have also provided wrappers so that BARS can be used easily within S or R. PMID:19777145
Predicting Severity of Child Abuse Injury with Ordinal Probit Regression.
ERIC Educational Resources Information Center
Zuravin, Susan J.; And Others
1994-01-01
Examined reports of one physically abused child from each of 789 families. Results of ordinal probit regression analysis identified a model with four predictors (perpetrator identity, reporter identity, severity of allegations, and season report was made) and two interaction terms (child's age by mother's age and child's age by child's gender)…
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
Polarizable Atomic Multipole Solutes in a Poisson-Boltzmann Continuum
Schnieders, Michael J.; Baker, Nathan A.; Ren, Pengyu; Ponder, Jay W.
2008-01-01
Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar molecule in water. A theoretical approach introduced by Onsager used vacuum properties of small molecules, including polarizability, dipole moment and size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation (PBE). Only recently have the classical force fields used for studying biomolecules begun to include explicit polarization in their functional forms. Here we describe the theory underlying a newly developed Polarizable Multipole Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field. As an application of the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment of each protein increased on average by a factor of 1.27 in explicit water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational analysis, and pKa prediction. Introduction of 150 mM salt lowered the electrostatic solvation energy by 2–13 kcal/mol, depending on the formal charge of the protein, but had only a
On population size estimators in the Poisson mixture model.
Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua
2013-09-01
Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. PMID:23865502
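The Chao lower bound mentioned above has a one-line closed form: N_hat = S + f1²/(2 f2), where S is the number of distinct individuals observed, f1 the number seen exactly once, and f2 the number seen exactly twice. A minimal sketch, with invented counts:

```python
# Chao lower-bound estimator for population size from a single list in
# which each observed individual appears one or more times.
def chao_estimate(counts):
    """counts: number of times each *observed* individual appeared (all >= 1)."""
    s = len(counts)                              # distinct individuals seen
    f1 = sum(1 for c in counts if c == 1)        # singletons
    f2 = sum(1 for c in counts if c == 2)        # doubletons
    return s + f1 * f1 / (2.0 * f2)

# S = 7 observed individuals, f1 = 4 singletons, f2 = 2 doubletons:
n_hat = chao_estimate([1, 1, 1, 1, 2, 2, 3])     # 7 + 16/4 = 11.0
```

As the abstract notes, this targets a *lower bound* on the true size: many singletons relative to doubletons signal many individuals never captured at all.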
A Poisson process approximation for generalized K-S confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
Poisson's Ratio and the Densification of Glass under High Pressure
Rouxel, T.; Ji, H.; Hammouda, T.; Moreac, A.
2008-06-06
Because of a relatively low atomic packing density (C_g), glasses experience significant densification under high hydrostatic pressure. Poisson's ratio (ν) is correlated with C_g and typically varies from 0.15 for glasses with low C_g, such as amorphous silica, to 0.38 for close-packed atomic networks such as in bulk metallic glasses. Pressure experiments were conducted up to 25 GPa at 293 K on silica, soda-lime-silica, chalcogenide, and bulk metallic glasses. We show from these high-pressure data that there is a direct correlation between ν and the maximum post-decompression density change.
Numerical Poisson-Boltzmann Model for Continuum Membrane Systems.
Botello-Smith, Wesley M; Liu, Xingping; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2013-01-01
Membrane protein systems are important computational research topics due to their roles in rational drug design. In this study, we developed a continuum membrane model utilizing a level set formulation under the numerical Poisson-Boltzmann framework within the AMBER molecular mechanics suite for applications such as protein-ligand binding affinity and docking pose predictions. Two numerical solvers were adapted for periodic systems to alleviate possible edge effects. Validation on systems ranging from organic molecules to membrane proteins up to 200 residues, demonstrated good numerical properties. This lays foundations for sophisticated models with variable dielectric treatments and second-order accurate modeling of solvation interactions. PMID:23439886
Fission meter and neutron detection using poisson distribution comparison
Rowland, Mark S; Snyderman, Neal J
2014-11-18
A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
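The "excess grouped neutrons" signature can be illustrated with the simplest possible statistic: a Poisson distribution has variance equal to its mean, so a variance-to-mean ratio well above 1 flags correlated (fission-chain) counts. This is an illustrative reduction of the patent's distribution comparison, not its actual algorithm, and the count data below are invented:

```python
# Variance-to-mean ratio of gated neutron counts: ~1 for a Poisson source,
# >> 1 when fission chains produce bursts of time-correlated neutrons.
def variance_to_mean(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

steady = [0, 2, 4, 6, 3, 1, 5, 3]       # Poisson-like counts: ratio near 1
bursty = [0, 0, 12, 0, 1, 13, 0, 0]     # grouped counts: ratio far above 1
```

A real system compares the full observed count distribution against the Poisson prediction rather than this single moment ratio, but the direction of the effect is the same.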
Theory of multicolor lattice gas - A cellular automaton Poisson solver
NASA Technical Reports Server (NTRS)
Chen, H.; Matthaeus, W. H.; Klein, L. W.
1990-01-01
In the present class of cellular-automaton models, a quiescent hydrodynamic lattice gas carries multiple-valued passive labels termed 'colors'; lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is shown to be equivalent to solving a Poisson equation.
The Poisson equation at second order in relativistic cosmology
Hidalgo, J.C.; Christopherson, Adam J.; Malik, Karim A. E-mail: Adam.Christopherson@nottingham.ac.uk
2013-08-01
We calculate the relativistic constraint equation which relates the curvature perturbation to the matter density contrast at second order in cosmological perturbation theory. This relativistic "second-order Poisson equation" is presented in a gauge where the hydrodynamical inhomogeneities coincide with their Newtonian counterparts exactly for a perfect fluid with constant equation of state. We use this constraint to introduce primordial non-Gaussianity in the density contrast in the framework of General Relativity. We then derive expressions that can be used as the initial conditions of N-body codes for structure formation which probe the observable signature of primordial non-Gaussianity in the statistics of the evolved matter density field.
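For orientation, the linear-order relation that this work generalizes is the cosmological Poisson equation linking the metric potential to the density contrast; in one standard notation (a the scale factor, ρ̄ the background density, and with sign and gauge conventions varying between references):

```latex
\nabla^{2}\Phi \;=\; 4\pi G\, a^{2}\,\bar{\rho}\,\delta ,
\qquad
\delta \;\equiv\; \frac{\delta\rho}{\bar{\rho}} .
```

The paper's result adds the second-order relativistic corrections to this constraint in the gauge described in the abstract.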
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
Quantile regression for climate data
NASA Astrophysics Data System (ADS)
Marasinghe, Dilhani Shalika
Quantile regression is a developing statistical tool used to explain the relationship between response and predictor variables. This thesis describes two applications of quantile regression in climatology. Our main goal is to estimate derivatives of a conditional mean and/or conditional quantile function. We introduce a method to handle autocorrelation in the framework of quantile regression and apply it to temperature data. We also examine some properties of tornado data, which are non-normally distributed. Although quantile regression provides a more comprehensive view, when the residuals satisfy the normality and constant-variance assumptions we would prefer least-squares regression for the temperature analysis. When the normality and constant-variance assumptions fail, quantile regression is the better candidate for estimating the derivative.
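The machinery behind quantile regression is the check ("pinball") loss ρ_τ(u) = u(τ - 1[u<0]): the value minimizing its average over a sample is the empirical τ-quantile, and quantile regression generalizes this to conditional quantiles. A minimal sketch with an invented sample, searching over the data points themselves as candidates:

```python
# Check (pinball) loss for quantile level tau, and a brute-force
# demonstration that the empirical tau-quantile minimizes its average.
def pinball(u, tau):
    return u * (tau - (1.0 if u < 0 else 0.0))

def avg_loss(q, ys, tau):
    return sum(pinball(y - q, tau) for y in ys) / len(ys)

ys = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
tau = 0.75
losses = {q: avg_loss(q, ys, tau) for q in ys}
best = min(losses, key=losses.get)   # the empirical 0.75-quantile
```

Replacing the constant candidate q with a linear function of predictors, and minimizing the same loss over its coefficients, gives the quantile regression estimator.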
Some Poisson structures and Lax equations associated with the Toeplitz lattice and the Schur lattice
NASA Astrophysics Data System (ADS)
Lemarie, Caroline
2016-01-01
The Toeplitz lattice is a Hamiltonian system whose Poisson structure is known. In this paper, we unveil the origins of this Poisson structure and derive from it the associated Lax equations for this lattice. We first construct a Poisson subvariety H_n of GL_n(C), which we view as a real or complex Poisson-Lie group whose Poisson structure comes from a quadratic R-bracket on gl_n(C) for a fixed R-matrix. The existence of Hamiltonians, associated to the Toeplitz lattice for the Poisson structure on H_n, combined with the properties of the quadratic R-bracket, allows us to give explicit formulas for the Lax equation. Then we derive from it the integrability in the sense of Liouville of the Toeplitz lattice. When we view the lattice as being defined over R, we can construct a Poisson subvariety H_n^τ of U_n which is itself a Poisson-Dirac subvariety of GL_n^R(C). We then construct a Hamiltonian for the Poisson structure induced on H_n^τ, corresponding to another system which derives from the Toeplitz lattice: the modified Schur lattice. Thanks to the properties of Poisson-Dirac subvarieties, we give an explicit Lax equation for the new system and derive from it a Lax equation for the Schur lattice. We also deduce the integrability in the sense of Liouville of the modified Schur lattice.
Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…
Retro-regression--another important multivariate regression improvement.
Randić, M
2001-01-01
We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA. PMID:11410035
The association of host age and gender with inflammation around neurocysticercosis cysts.
Kelvin, E A; Carpio, A; Bagiella, E; Leslie, D; Leon, P; Andrews, H; Hauser, W A
2009-09-01
The results of previous investigations indicate that age and gender may influence the strength of the human host's immune response to infection of the central nervous system with the larvae of Taenia solium. Most of the relevant research on such neurocysticercosis (NCC) has, however, been conducted on hospital-based samples in developing countries, where differential access to healthcare may bias the study results. Using data from 171 NCC patients participating in a treatment trial, the associations of patient age and gender with the presence of inflammation around NCC cysts (i.e. cysts in the transitional phase) have recently been explored, after controlling for measures of economic and geographical access to healthcare. Data on cysts were collected from computed-tomography or magnetic-resonance images taken at four time-points, from baseline to 12-months post-treatment. The odds of having transitional cysts were evaluated by logistic regression whereas Poisson regression was used to explore the numbers of transitional cysts, with generalised estimating equations (GEE) used to account for the multiple observations over time. After controlling for healthcare access, the odds of having transitional cysts were found to be 1.5-fold higher for the female patients than for the male, although this association was not statistically significant (P = 0.136). In the Poisson model, however, the number of transitional cysts was found to be 1.8-fold higher in the female patients than in the male, and this gender effect was not only statistically significant (P = 0.002) but also constant over time. The association of host age with transitional cysts was more complicated, with significant interaction between age and time. It therefore appears that there are significant gender and age differences in the local immune response to NCC, even after adjusting for differences in healthcare access. PMID:19695154
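The gender rate ratio above comes from a Poisson model that the authors fitted with GEE to handle repeated observations. As a hedged sketch of the core step only, the following fits a plain Poisson regression (log link, Fisher scoring) to simulated cyst counts with a built-in female:male rate ratio of 1.8, ignoring the within-patient correlation that GEE accounts for; none of this is the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
female = rng.integers(0, 2, n)             # 1 = female host (synthetic)
# simulated transitional-cyst counts with a true rate ratio of 1.8
counts = rng.poisson(np.exp(0.3 + np.log(1.8) * female))

X = np.column_stack([np.ones(n), female])
beta = np.zeros(2)
for _ in range(25):                        # IRLS / Fisher scoring for the Poisson GLM
    mu = np.exp(X @ beta)
    z = X @ beta + (counts - mu) / mu      # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

rate_ratio = np.exp(beta[1])               # estimated female:male rate ratio
print(rate_ratio)
```

For this saturated two-group design the IRLS solution coincides with the ratio of group mean counts, which is a convenient correctness check.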
POISSON project. III. Investigating the evolution of the mass accretion rate
NASA Astrophysics Data System (ADS)
Antoniucci, S.; García López, R.; Nisini, B.; Caratti o Garatti, A.; Giannini, T.; Lorenzetti, D.
2014-12-01
Context. As part of the Protostellar Optical-Infrared Spectral Survey On NTT (POISSON) project, we present the results of the analysis of low-resolution near-IR spectroscopic data (0.9-2.4 μm) of two samples of young stellar objects in the Lupus (52 objects) and Serpens (17 objects) star-forming clouds, with masses in the range of 0.1 to 2.0 M⊙ and ages spanning from 10^5 to a few 10^7 yr. Aims: After determining the accretion parameters of the targets by analysing their H I near-IR emission features, we added the results from the Lupus and Serpens clouds to those from previous regions (investigated in POISSON with the same methodology) to obtain a final catalogue (143 objects) of mass accretion rate values (Ṁacc) derived in a homogeneous and consistent fashion. Our final goal is to analyse how Ṁacc correlates with the stellar mass (M∗) and how it evolves in time in the whole POISSON sample. Methods: We derived the accretion luminosity (Lacc) and Ṁacc for Lupus and Serpens objects from the Brγ (Paβ in a few cases) line by using relevant empirical relationships available in the literature that connect the H I line luminosity and Lacc. To minimise the biases that arise from adopting literature data that are based on different evolutionary models and also for self-consistency, we re-derived mass and age for each source of the POISSON samples using the same set of evolutionary tracks. Results: We observe a correlation Ṁacc ∝ M∗^2.2 between mass accretion rate and stellar mass, similarly to what has previously been observed in several star-forming regions. We find that the time variation of Ṁacc is roughly consistent with the expected evolution of the accretion rate in viscous disks, with an asymptotic decay that behaves as t^-1.6. However, Ṁacc values are characterised by a large scatter at similar ages and are on average higher than the predictions of viscous models. Conclusions: Although part of the scattering may be related to systematics due to the
Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model
NASA Astrophysics Data System (ADS)
Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.
2014-03-01
Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.
A Tubular Biomaterial Construct Exhibiting a Negative Poisson's Ratio.
Lee, Jin Woo; Soman, Pranav; Park, Jeong Hun; Chen, Shaochen; Cho, Dong-Woo
2016-01-01
Developing functional small-diameter vascular grafts is an important objective in tissue engineering research. In this study, we address the problem of compliance mismatch by designing and developing a 3D tubular construct that has a negative Poisson's ratio νxy (NPR). NPR constructs have the unique ability to expand transversely when pulled axially, thereby resulting in a highly-compliant tubular construct. In this work, we used projection stereolithography to 3D-print a planar NPR sheet composed of photosensitive poly(ethylene) glycol diacrylate biomaterial. We used a step-lithography exposure and a stitch process to scale up the projection printing process, and used the cut-missing rib unit design to develop a centimeter-scale NPR sheet, which was rolled up to form a tubular construct. The constructs had Poisson's ratios of -0.6 ≤ νxy ≤ -0.1. The NPR construct also supports higher cellular adhesion than does the construct that has positive νxy. Our NPR design offers a significant advance in the development of highly-compliant vascular grafts. PMID:27232181
Assessment of elliptic solvers for the pressure Poisson equation
NASA Astrophysics Data System (ADS)
Strodtbeck, J. P.; Polly, J. B.; McDonough, J. M.
2008-11-01
It is well known that as much as 80% of the total arithmetic needed for a solution of the incompressible Navier-Stokes equations can be expended for solving the pressure Poisson equation, and this has long been one of the prime motivations for study of elliptic solvers. In recent years various Krylov-subspace methods have begun to receive wide use because of their rapid convergence rates and automatic generation of iteration parameters. However, it is actually total floating-point arithmetic operations that must be of concern when selecting a solver for CFD, and not simply the required number of iterations. In the present study we recast speed of convergence for typical CFD pressure Poisson problems in terms of CPU time spent on floating-point arithmetic and demonstrate that in many cases simple successive-overrelaxation (SOR) methods are as effective as some of the popular Krylov-subspace techniques such as BiCGStab(l), provided optimal SOR iteration parameters are employed; furthermore, SOR procedures require significantly less memory. We then describe some techniques for automatically predicting optimal SOR parameters.
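A minimal sketch of the SOR approach discussed above, assuming a standard 5-point discretization of the Poisson equation on the unit square and the textbook optimal relaxation parameter ω = 2/(1 + sin(πh)); this is an illustration with a manufactured solution, not the authors' CFD code:

```python
import numpy as np

def solve_poisson_sor(f, h, omega, tol=1e-8, max_iter=20000):
    """Solve -(u_xx + u_yy) = f on the unit square with u = 0 on the
    boundary, using successive over-relaxation on a uniform grid."""
    u = np.zeros_like(f)
    n = f.shape[0]
    for it in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1]
                             + u[i, j + 1] + h * h * f[i, j])
                diff = omega * (gs - u[i, j])
                u[i, j] += diff
                max_diff = max(max_diff, abs(diff))
        if max_diff < tol:
            return u, it + 1
    return u, max_iter

n = 33                                   # grid points per side
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
# manufactured solution u = sin(pi x) sin(pi y), so f = 2 pi^2 u
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = 2.0 * np.pi**2 * exact
omega_opt = 2.0 / (1.0 + np.sin(np.pi * h))   # optimal SOR parameter
u, iters = solve_poisson_sor(f, h, omega_opt)
err = np.abs(u - exact).max()
print(iters, err)
```

With the optimal ω the iteration count grows only like O(n), which is the property that keeps SOR competitive with Krylov methods when cost is measured in floating-point operations.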
The multisensor PHD filter: II. Erroneous solution via Poisson magic
NASA Astrophysics Data System (ADS)
Mahler, Ronald
2009-05-01
The theoretical foundation for the probability hypothesis density (PHD) filter is the FISST multitarget differential and integral calculus. The "core" PHD filter presumes a single sensor. Theoretically rigorous formulas for the multisensor PHD filter can be derived using the FISST calculus, but are computationally intractable. A less theoretically desirable solution-the iterated-corrector approximation-must be used instead. Recently, it has been argued that an "elementary" methodology, the "Poisson-intensity approach," renders FISST obsolete. It has further been claimed that the iterated-corrector approximation is suspect, and in its place an allegedly superior "general multisensor intensity filter" has been proposed. In this and a companion paper I demonstrate that it is these claims which are erroneous. The companion paper introduces formulas for the actual "general multisensor intensity filter." In this paper I demonstrate that (1) the "general multisensor intensity filter" fails in important special cases; (2) it will perform badly in even the easiest multitarget tracking problems; and (3) these rather serious missteps suggest that the "Poisson-intensity approach" is inherently faulty.
Poisson's equation solution of Coulomb integrals in atoms and molecules
NASA Astrophysics Data System (ADS)
Weatherford, Charles A.; Red, Eddie; Joseph, Dwayne; Hoggan, Philip
The integral bottleneck in evaluating molecular energies arises from the two-electron contributions. These are difficult and time-consuming to evaluate, especially over exponential type orbitals, used here to ensure the correct behaviour of atomic orbitals. In this work, it is shown that the two-centre Coulomb integrals involved can be expressed as one-electron kinetic-energy-like integrals. This is accomplished using the fact that the Coulomb operator is a Green's function of the Laplacian. The ensuing integrals may be further simplified by defining Coulomb forms for the one-electron potential satisfying Poisson's equation therein. A sum of overlap integrals with the atomic orbital energy eigenvalue as a factor is then obtained to give the Coulomb energy. The remaining questions of translating orbitals involved in three and four centre integrals and the evaluation of exchange energy are also briefly discussed. The summation coefficients in Coulomb forms are evaluated using the LU decomposition. This algorithm is highly parallel. The Poisson method may be used to calculate Coulomb energy integrals efficiently. For a single processor, gains of CPU time for a given chemical accuracy exceed a factor of 40. This method lends itself to evaluation on a parallel computer.
Yang, Tse-Chuan; Shoff, Carla; Matthews, Stephen A.
2014-01-01
Based on ecological studies, second demographic transition (SDT) theorists concluded that some areas in the US were in the vanguard of the SDT compared to others, implying spatial nonstationarity may be inherent in the SDT process. Linking the SDT to the infant mortality literature, we set out to answer two related questions: Are the main components of the SDT, specifically marriage postponement, cohabitation, and divorce, associated with infant mortality? If yes, do these associations vary across the US? We applied global Poisson and geographically weighted Poisson regression (GWPR) models, a place-specific analytic approach, to county-level data in the contiguous US. After accounting for the racial/ethnic and socioeconomic compositions of counties and prenatal care utilization, we found (1) marriage postponement was negatively related to infant mortality in the southwestern states, but positively associated with infant mortality in parts of Indiana, Kentucky, and Tennessee, (2) cohabitation rates were positively related to infant mortality, and this relationship was stronger in California, coastal Virginia, and the Carolinas than other areas, and (3) a positive association between divorce rates and infant mortality in southwestern and northeastern areas of the US. These spatial patterns suggested that the associations between the SDT and infant mortality were stronger in the areas in the vanguard of the SDT than in others. The comparison between global Poisson and GWPR results indicated that a place-specific spatial analysis not only fit the data better, but also provided insights into understanding the non-stationarity of the associations between the SDT and infant mortality. PMID:25383259
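A GWPR fit repeats a kernel-weighted Poisson regression at every location, so local coefficients can vary over space. The following is a one-location sketch on synthetic "county" data in which a hypothetical covariate's effect strengthens from west to east; the data, bandwidth, and effect sizes are all invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
lon = rng.uniform(0.0, 1.0, n)         # pseudo-longitude of each county
z = rng.normal(size=n)                 # standardized covariate (e.g. a cohabitation rate)
beta1_true = 0.8 * lon                 # effect strengthens eastward (spatial nonstationarity)
counts = rng.poisson(np.exp(0.2 + beta1_true * z))

def local_poisson(focal_lon, bandwidth=0.15):
    """Kernel-weighted Poisson regression (log link) at one focal location:
    the building block a GWPR fit repeats at every calibration point."""
    w = np.exp(-0.5 * ((lon - focal_lon) / bandwidth) ** 2)   # Gaussian kernel
    X = np.column_stack([np.ones(n), z])
    beta = np.zeros(2)
    for _ in range(30):                # weighted IRLS iterations
        mu = np.exp(X @ beta)
        Wk = w * mu
        resp = X @ beta + (counts - mu) / mu
        beta = np.linalg.solve(X.T @ (Wk[:, None] * X), X.T @ (Wk * resp))
    return beta

b_west = local_poisson(0.1)
b_east = local_poisson(0.9)
print(b_west[1], b_east[1])            # local slopes differ across space
```

A global Poisson model would return a single averaged slope; comparing the two local slopes is the essence of detecting the non-stationarity the abstract describes.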
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…
Ecological Regression and Voting Rights.
ERIC Educational Resources Information Center
Freedman, David A.; And Others
1991-01-01
The use of ecological regression in voting rights cases is discussed in the context of a lawsuit against Los Angeles County (California) in 1990. Ecological regression assumes that systematic voting differences between precincts are explained by ethnic differences. An alternative neighborhood model is shown to lead to different conclusions. (SLD)
Logistic Regression: Concept and Application
ERIC Educational Resources Information Center
Cokluk, Omay
2010-01-01
The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
Fungible weights in logistic regression.
Jones, Jeff A; Waller, Niels G
2016-06-01
In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record) PMID:26651981
[Regression grading in gastrointestinal tumors].
Tischoff, I; Tannapfel, A
2012-02-01
Preoperative neoadjuvant chemoradiation therapy is a well-established and essential part of the interdisciplinary treatment of gastrointestinal tumors. Neoadjuvant treatment leads to regressive changes in tumors. To evaluate the histological tumor response, different scoring systems describing regressive changes are used, known as tumor regression grading. Tumor regression grading is usually based on the presence of residual vital tumor cells in proportion to the total tumor size. Currently, no nationally or internationally accepted grading systems exist. In general, common guidelines should be used in the pathohistological diagnostics of tumors after neoadjuvant therapy. In particular, the standard tumor grading will be replaced by tumor regression grading. Furthermore, tumors after neoadjuvant treatment are marked with the prefix "y" in the TNM classification. PMID:22293790
Assessing risk factors for periodontitis using regression
NASA Astrophysics Data System (ADS)
Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa
2013-10-01
Multivariate statistical analysis is indispensable to assess the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance of the following independent variables (IVs) as risk factors for periodontitis: Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. The multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL); the regression coefficients are obtained along with the p-values from the corresponding significance tests. The classification of a case (individual) adopted in the logistic model was the extent of the destruction of periodontal tissues defined by an Attachment Loss greater than or equal to 4 mm in 25% (AL≥4mm/≥25%) of sites surveyed. The association measures include the Odds Ratios together with the corresponding 95% confidence intervals.
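As a hedged sketch of how odds ratios emerge from such a logistic fit, the following uses synthetic periodontitis-style data (the variable names, prevalences, and effect sizes are invented) and a plain Newton-Raphson solver rather than any particular statistics package:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600
smoker = rng.integers(0, 2, n)                 # hypothetical smoking status
plaque = rng.normal(1.5, 0.5, n)               # hypothetical plaque index
# invented "true" model for case status (AL >= 4 mm in >= 25% of sites)
logit_true = -2.0 + 0.9 * smoker + 0.8 * plaque
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_true)))

X = np.column_stack([np.ones(n), smoker, plaque])
beta = np.zeros(3)
for _ in range(25):                            # Newton-Raphson for the logistic MLE
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

odds_ratios = np.exp(beta[1:])                 # ORs for smoking status and plaque index
print(odds_ratios)
```

Exponentiating a fitted coefficient gives the multiplicative change in the odds of being a case per unit change in that IV, which is the association measure the abstract reports.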
Impact of BAC limit reduction on different population segments: a Poisson fixed effect analysis.
Kaplan, Sigal; Prato, Carlo Giacomo
2007-11-01
Over the past few decades, several countries enacted the reduction of the legal blood alcohol concentration (BAC) limit, often alongside the administrative license revocation or suspension, to battle drinking-and-driving behavior. Several researchers investigated the effectiveness of these policies by applying different analysis procedures, while assuming population homogeneity in responding to these laws. The present analysis focuses on the evaluation of the impact of BAC limit reduction on different population segments. Poisson regression models, adapted to account for possible observation dependence over time and state specific effects, are estimated to measure the reduction of the number of alcohol-related accidents and fatalities for single-vehicle accidents in 22 U.S. jurisdictions over a period of 15 years starting in 1990. Model estimates demonstrate that, for alcohol-related single-vehicle crashes, (i) BAC laws are more effective in terms of reduction of number of casualties rather than number of accidents, (ii) women and elderly population exhibit higher law compliance with respect to men and to young adult and adult population, respectively, and (iii) the presence of passengers in the vehicle enhances the sense of responsibility of the driver. PMID:17920837
Non-linear properties of metallic cellular materials with a negative Poisson's ratio
NASA Technical Reports Server (NTRS)
Choi, J. B.; Lakes, R. S.
1992-01-01
Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. PMID:25385093
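Before choosing between Poisson, negative binomial, or a flexible family like the hyper-Poisson, dispersion is commonly screened with the Pearson chi-square statistic divided by its degrees of freedom. A minimal sketch on simulated counts (not the Toronto or Korea crash data; the intercept-only fit and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
mu = 3.0
# equidispersed counts vs. overdispersed negative binomial counts with the same mean
poisson_y = rng.poisson(mu, n)
nb_y = rng.negative_binomial(2, 2.0 / (2.0 + mu), n)   # mean 3, variance 7.5

def pearson_dispersion(y, mu_hat):
    """Pearson chi-square / residual d.f. for an intercept-only Poisson fit:
    ~1 for equidispersion, >1 for over-, <1 for underdispersion."""
    return np.sum((y - mu_hat) ** 2 / mu_hat) / (len(y) - 1)

d_pois = pearson_dispersion(poisson_y, poisson_y.mean())
d_nb = pearson_dispersion(nb_y, nb_y.mean())
print(d_pois, d_nb)
```

A statistic near 1 is consistent with the Poisson assumption; values well above (or below) 1 are the signal that motivates models like the negative binomial, Conway-Maxwell-Poisson, or hyper-Poisson.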
Kerr, Zachary Y.; Marshall, Stephen W.; Simon, Janet E.; Hayden, Ross; Snook, Erin M.; Dodge, Thomas; Gallo, Joseph A.; Valovich McLeod, Tamara C.; Mensch, James; Murphy, Joseph M.; Nittoli, Vincent C.; Dompier, Thomas P.; Ragan, Brian; Yeargin, Susan W.; Parsons, John T.
2015-01-01
Background: American youth football leagues are typically structured using either age-only (AO) or age-and-weight (AW) playing standard conditions. These playing standard conditions group players by age in the former condition and by a combination of age and weight in the latter condition. However, no study has systematically compared injury risk between these 2 playing standards. Purpose: To compare injury rates between youth tackle football players in the AO and AW playing standard conditions. Study Design: Cohort study; Level of evidence, 2. Methods: Athletic trainers evaluated and recorded injuries at each practice and game during the 2012 and 2013 football seasons. Players (age, 5-14 years) were drawn from 13 recreational leagues across 6 states. The sample included 4092 athlete-seasons (AW, 2065; AO, 2027) from 210 teams (AW, 106; AO, 104). Injury rate ratios (RRs) with 95% CIs were used to compare the playing standard conditions. Multivariate Poisson regression was used to estimate RRs adjusted for residual effects of age and clustering by team and league. There were 4 endpoints of interest: (1) any injury, (2) non–time loss (NTL) injuries only, (3) time loss (TL) injuries only, and (4) concussions only. Results: Over 2 seasons, the cohort accumulated 1475 injuries and 142,536 athlete-exposures (AEs). The most common injuries were contusions (34.4%), ligament sprains (16.3%), concussions (9.6%), and muscle strains (7.8%). The overall injury rate for both playing standard conditions combined was 10.3 per 1000 AEs (95% CI, 9.8-10.9). The TL injury, NTL injury, and concussion rates in both playing standard conditions combined were 3.1, 7.2, and 1.0 per 1000 AEs, respectively. In multivariate Poisson regression models controlling for age, team, and league, no differences were found between playing standard conditions in the overall injury rate (RR_overall, 1.1; 95% CI, 0.4-2.6). Rates for the other 3 endpoints were also similar (RR_NTL, 1.1 [95% CI, 0
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. PMID:26188633
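The modifications described above reduce to ordinary design-matrix construction: lagged case counts as predictors, a cumulative sum of past cases as the immune-population proxy, and a log-lagged term as the autocorrelation control. A minimal sketch on toy counts (not the Bangladesh cholera or Tokyo influenza data):

```python
import numpy as np

def lag_matrix(x, max_lag):
    """Columns [x_{t-1}, ..., x_{t-max_lag}]; rows with incomplete history dropped."""
    T = len(x)
    cols = [x[max_lag - k: T - k] for k in range(1, max_lag + 1)]
    return np.column_stack(cols)

cases = np.array([5, 3, 8, 2, 9, 4, 7, 1, 6, 0])
L = lag_matrix(cases, 3)                 # predictors for t = 3 .. 9
past_sum = np.cumsum(cases)[:-1]         # sum of all past cases: immune-population proxy
log_lag = np.log(cases[:-1] + 1.0)       # log(y_{t-1} + 1): autocorrelation control
print(L.shape, L[0])
```

These columns would then enter a (quasi-)Poisson or negative binomial regression alongside the weather terms, with distributed lag non-linear models extending the simple lag matrix when the lag-response shape is complex.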
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the R command lm and obtain the coefficient estimates, the residual standard error, R², and the residuals. In the second example, devoted to data on the vapor tension of mercury, we fit a simple linear regression, predict values, and look ahead to multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
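A Python analogue of the first lm exercise, on toy data rather than Galton's actual heights, reproducing the quantities the session extracts (coefficient estimates, residuals, R²) from the closed-form least-squares solution:

```python
import numpy as np

# toy predictor x and response y (illustrative, not Galton's data)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 8.8, 11.0])

# closed-form simple-regression estimates (what lm computes for one predictor)
sxx = np.sum((x - x.mean()) ** 2)
sxy = np.sum((x - x.mean()) * (y - y.mean()))
slope = sxy / sxx
intercept = y.mean() - slope * x.mean()
residuals = y - (intercept + slope * x)
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
print(slope, intercept, r2)
```

Prediction at a new x is then just `intercept + slope * x_new`, mirroring R's `predict` step in the exercise.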
Splines for Diffeomorphic Image Regression
Singh, Nikhil; Niethammer, Marc
2016-01-01
This paper develops a method for splines on diffeomorphisms for image regression. In contrast to previously proposed methods to capture image changes over time, such as geodesic regression, the method can capture more complex spatio-temporal deformations. In particular, it is a first step towards capturing periodic motions, for example of the heart or the lung. Starting from a variational formulation of splines, the proposed approach allows for the use of temporal control points to control spline behavior. This necessitates the development of a shooting formulation for splines. Experimental results are shown for synthetic and real data. The performance of the method is compared to geodesic regression. PMID:25485370
Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry
NASA Technical Reports Server (NTRS)
Hong, Yie-Ming
1973-01-01
Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.
Deformations of non-semisimple Poisson pencils of hydrodynamic type
NASA Astrophysics Data System (ADS)
Della Vedova, Alberto; Lorenzoni, Paolo; Savoldi, Andrea
2016-09-01
We study the deformations of two-component non-semisimple Poisson pencils of hydrodynamic type associated with Balinskiǐ–Novikov algebras. We show that in most cases the second order deformations are parametrized by two functions of a single variable. We find that one function is invariant with respect to the subgroup of Miura transformations, preserving the dispersionless limit, and another function is related to a one-parameter family of truncated structures. In two exceptional cases the second order deformations are parametrized by four functions. Among these, two are invariants and two are related to a two-parameter family of truncated structures. We also study the lift of the deformations of n-component semisimple structures. This example suggests that deformations of non-semisimple pencils corresponding to the lifted invariant parameters are unobstructed.
An alternating minimization method for blind deconvolution from Poisson data
NASA Astrophysics Data System (ADS)
Prato, Marco; La Camera, Andrea; Bonettini, Silvia
2014-10-01
Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system have to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters.
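For a known PSF, minimizing the Kullback-Leibler divergence under Poisson noise leads to the classical Richardson-Lucy multiplicative updates, which form one half of an alternating blind scheme like the one described above. The following 1D sketch on a synthetic two-star "cluster" is an illustration of that data-fidelity step only, not the authors' algorithm or data:

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=200):
    """Multiplicative updates that decrease the Kullback-Leibler divergence
    between the data and the reblurred estimate, for a *known* PSF
    (one half of an alternating blind-deconvolution scheme)."""
    psf = psf / psf.sum()
    est = np.full_like(data, data.mean())
    for _ in range(n_iter):
        reblur = np.convolve(est, psf, mode="same")
        ratio = data / np.maximum(reblur, 1e-12)
        est = est * np.convolve(ratio, psf[::-1], mode="same")
    return est

# synthetic "stellar cluster": two point sources blurred by a Gaussian PSF
truth = np.zeros(64)
truth[20], truth[40] = 100.0, 60.0
k = np.arange(9.0)
psf = np.exp(-0.5 * ((k - 4.0) / 1.5) ** 2)    # odd length, centred, symmetric
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf)
print(int(restored.argmax()))
```

In a blind scheme this update alternates with an analogous multiplicative update on the PSF estimate, with the feasible-set constraints of the paper enforced at each half-step.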
Note on the Poisson structure of the damped oscillator
Hone, A. N. W.; Senthilvelan, M.
2009-10-15
The damped harmonic oscillator is one of the most studied systems with respect to the problem of quantizing dissipative systems. Recently Chandrasekar et al. [J. Math. Phys. 48, 032701 (2007)] applied the Prelle-Singer method to construct conserved quantities and an explicit time-independent Lagrangian and Hamiltonian structure for the damped oscillator. Here we describe the associated Poisson bracket which generates the continuous flow, pointing out that there is a subtle problem of definition on the whole phase space. The action-angle variables for the system are also presented, and we further explain how to extend these considerations to the discrete setting. Some implications for the quantum case are briefly mentioned.
Analytical stress intensity solution for the Stable Poisson Loaded specimen
NASA Astrophysics Data System (ADS)
Ghosn, Louis J.; Calomino, Anthony M.; Brewer, David N.
1993-04-01
An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.
Numerical calibration of the stable poisson loaded specimen
NASA Astrophysics Data System (ADS)
Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.
1992-10-01
An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.
Application of the sine-Poisson equation in solar magnetostatics
NASA Technical Reports Server (NTRS)
Webb, G. M.; Zank, G. P.
1990-01-01
Solutions of the sine-Poisson equations are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with distance vertical to the base with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or no X-points at all.
Poisson's ratios of auxetic and other technological materials.
Ballato, Arthur
2010-01-01
Poisson's ratio, the ratio of lateral contraction to longitudinal extension in a thin, linearly elastic rod, has a long and interesting history. For isotropic bodies, it can theoretically range from +1/2 to -1; the experimental gamut for anisotropic materials is even larger. The ratio is positive for all combinations of directions in most crystals. But as far back as the 1800s, Voigt and others found that negative values were encountered for some materials, a property now called auxeticity. Here we examine this property from the point of view of crystal stability and compute extrema of the ratio for various interesting and technologically important materials. Potential applications of the auxetic property are mentioned. PMID:20040420
The relative risk in a cohort study with Poisson cases.
Mulder, P G
1988-01-01
This paper deals with making statistical inference about the relative risk (or risk ratio) in a cohort (or prospective) study with dichotomous exposure when the number of cases is a Poisson-distributed variable. The exact procedure for testing the null hypothesis for the relative risk and the exact computation of its confidence interval for a single 2 X 2 table are presented. Maximum likelihood methods and the homogeneity test are presented for the common risk ratio when the data are stratified in several 2 X 2 tables. These methods are based upon a sufficient statistic and therefore are considered proper statistical alternatives to the more descriptive epidemiological measures such as (in)directly standardized mortality (morbidity) ratios. All computations can be done on a programmable pocket calculator. With the HP-41 CV more than 70 strata can be distinguished. PMID:3180748
A bivariate survival model with compound Poisson frailty
Wienke, A.; Ripatti, S.; Palmgren, J.; Yashin, A.
2015-01-01
A correlated frailty model is suggested for analysis of bivariate time-to-event data. The model is an extension of the correlated power variance function (PVF) frailty model (correlated three-parameter frailty model). It is based on a bivariate extension of the compound Poisson frailty model in univariate survival analysis. It allows for a non-susceptible fraction (of zero frailty) in the population, overcoming the common assumption in survival analysis that all individuals are susceptible to the event under study. The model contains the correlated gamma frailty model and the correlated inverse Gaussian frailty model as special cases. A maximum likelihood estimation procedure for the parameters is presented and its properties are studied in a small simulation study. This model is applied to breast cancer incidence data of Swedish twins. The proportion of women susceptible to breast cancer is estimated to be 15 per cent. PMID:19856276
Beyond Poisson-Boltzmann: Numerical Sampling of Charge Density Fluctuations.
Poitevin, Frédéric; Delarue, Marc; Orland, Henri
2016-07-01
We present a method aimed at sampling charge density fluctuations in Coulomb systems. The derivation follows from a functional integral representation of the partition function in terms of charge density fluctuations. Starting from the mean-field solution given by the Poisson-Boltzmann equation, an original approach is proposed to numerically sample fluctuations around it, through the propagation of a Langevin-like stochastic partial differential equation (SPDE). The diffusion tensor of the SPDE can be chosen so as to avoid the numerical complexity linked to long-range Coulomb interactions, effectively rendering the theory completely local. A finite-volume implementation of the SPDE is described, and the approach is illustrated with preliminary results on the study of a system made of two like-charge ions immersed in a bath of counterions. PMID:27075231
Nonstationary elementary-field light randomly triggered by Poisson impulses.
Fernández-Pousa, Carlos R
2013-05-01
A stochastic theory of nonstationary light describing the random emission of elementary pulses is presented. The emission is governed by a nonhomogeneous Poisson point process determined by a time-varying emission rate. The model describes, in the appropriate limits, stationary, cyclostationary, locally stationary, and pulsed radiation, and reduces to a Gaussian theory in the limit of dense emission rate. The first- and second-order coherence theories are solved after the computation of second- and fourth-order correlation functions by use of the characteristic function. The ergodicity of second-order correlations under various types of detectors is explored and a number of observables, including optical spectrum, amplitude, and intensity correlations, are analyzed. PMID:23695325
Numerical Solution of the Gyrokinetic Poisson Equation in TEMPEST
NASA Astrophysics Data System (ADS)
Dorr, Milo; Cohen, Bruce; Cohen, Ronald; Dimits, Andris; Hittinger, Jeffrey; Kerbel, Gary; Nevins, William; Rognlien, Thomas; Umansky, Maxim; Xiong, Andrew; Xu, Xueqiao
2006-10-01
The gyrokinetic Poisson (GKP) model in the TEMPEST continuum gyrokinetic edge plasma code yields the electrostatic potential due to the charge density of electrons and an arbitrary number of ion species, including the effects of gyroaveraging in the long-wavelength limit k⊥ρ ≪ 1. The TEMPEST equations are integrated as a differential-algebraic system involving a nonlinear system solve via Newton-Krylov iteration. The GKP preconditioner block is inverted using a multigrid-preconditioned conjugate gradient (CG) algorithm. Electrons are treated as kinetic or adiabatic. The Boltzmann relation in the adiabatic option employs flux-surface averaging to maintain neutrality within field lines and is solved self-consistently with the GKP equation. A decomposition procedure circumvents the near singularity of the GKP Jacobian block that otherwise degrades CG convergence.
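The GKP operator and its multigrid preconditioner are well beyond a sketch, but the inner CG iteration itself is simple. As a minimal, hedged illustration, here is unpreconditioned conjugate gradient applied to a 1D finite-difference Poisson problem (the grid, boundary conditions, and tolerances are assumptions for the example):

```python
import numpy as np

def cg_poisson(rho, tol=1e-10, max_iter=500):
    """Conjugate-gradient solve of the 1D discrete Poisson problem
    -phi'' = rho on (0, 1) with phi = 0 at both ends, discretized by
    second-order central differences on n interior points."""
    n = rho.size
    apply_A = lambda v: np.concatenate((
        [2 * v[0] - v[1]],
        2 * v[1:-1] - v[:-2] - v[2:],
        [2 * v[-1] - v[-2]])) * (n + 1) ** 2
    phi = np.zeros(n)
    r = rho - apply_A(phi)               # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        phi += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p        # new conjugate search direction
        rs = rs_new
    return phi
```

For rho = 1 the exact solution is phi(x) = x(1-x)/2, which the central difference reproduces exactly at the nodes since phi is quadratic.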
Identifying Seismicity Levels via Poisson Hidden Markov Models
NASA Astrophysics Data System (ADS)
Orfanogiannaki, K.; Karlis, D.; Papadopoulos, G. A.
2010-08-01
Poisson Hidden Markov models (PHMMs) are introduced to model temporal seismicity changes. In a PHMM the unobserved sequence of states is a finite-state Markov chain and the distribution of the observation at any time is Poisson with rate depending only on the current state of the chain. Thus, PHMMs allow a region to have a varying seismicity rate. We applied the PHMM to model earthquake frequencies in the seismogenic area of Killini, Ionian Sea, Greece, between 1990 and 2006. Simulations of data from the assumed model showed that it describes the true data quite well. The earthquake catalogue is dominated by main shocks occurring in 1993, 1997 and 2002. The time plot of PHMM seismicity states not only reproduces the three seismicity clusters but also quantifies the seismicity level and underlines the degree of serial dependence of the events at any point in time. Foreshock activity becomes quite evident before the three sequences, with a gradual transition to states of cascade seismicity. Traditional analysis, based on the determination of highly significant changes of seismicity rates, failed to recognize foreshocks before the 1997 main shock due to the low number of events preceding that main shock. The PHMM thus performs better than traditional analysis, since the transition from one state to another depends not only on the total number of events involved but also on the current state of the system. Therefore, the PHMM recognizes significant changes of seismicity soon after they start, which is of particular importance for real-time recognition of foreshock activities and other seismicity changes.
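The likelihood at the core of a PHMM can be sketched compactly (state decoding and EM fitting, which the study also needs, are omitted): the forward algorithm with per-step scaling to avoid underflow. The transition matrix, rates, and initial distribution below are hypothetical inputs:

```python
import numpy as np
from scipy.stats import poisson

def phmm_loglik(counts, trans, rates, init):
    """Log-likelihood of a count series under a Poisson hidden Markov
    model, computed by the scaled forward algorithm.
    trans[i, j] = P(next state j | current state i); rates[i] is the
    Poisson rate in state i; init is the initial state distribution."""
    alpha = init * poisson.pmf(counts[0], rates)   # forward variables
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                           # rescale each step
    for y in counts[1:]:
        alpha = (alpha @ trans) * poisson.pmf(y, rates)
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik
```

With a single state the recursion collapses to an i.i.d. Poisson log-likelihood, which gives a convenient sanity check.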
Multiple Regression and Its Discontents
ERIC Educational Resources Information Center
Snell, Joel C.; Marsh, Mitchell
2012-01-01
Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.
Time-Warped Geodesic Regression
Hong, Yi; Singh, Nikhil; Kwitt, Roland; Niethammer, Marc
2016-01-01
We consider geodesic regression with parametric time-warps. This allows one, for example, to capture saturation effects as typically observed during brain development or degeneration. While highly flexible models to analyze time-varying image and shape data based on generalizations of splines and polynomials have been proposed recently, they come at the cost of substantially more complex inference. Our focus in this paper is therefore to keep the model and its inference as simple as possible while still capturing expected biological variation. We demonstrate that by augmenting geodesic regression with parametric time-warp functions, we can achieve flexibility comparable to more complex models while retaining model simplicity. In addition, the time-warp parameters provide useful information on underlying anatomical changes, as demonstrated for the analysis of corpora callosa and rat calvariae. We exemplify our strategy for shape regression on the Grassmann manifold, but note that the method is generally applicable for time-warped geodesic regression. PMID:25485368
Marginalized zero-inflated negative binomial regression with application to dental caries.
Preisser, John S; Das, Kalyan; Long, D Leann; Divaris, Kimon
2016-05-10
The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared with marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034
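The MZINB likelihood itself is involved, but the zero-inflation idea underlying it is easy to show. As a simplified, hedged illustration, here is a maximum-likelihood fit of the simpler zero-inflated Poisson: a point mass at zero (weight pi, the non-susceptible fraction) mixed with a Poisson for the susceptible subpopulation. The parametrization and optimizer choice are assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def fit_zip(counts):
    """Maximum-likelihood fit of a zero-inflated Poisson model.
    Returns (pi_hat, lam_hat): the estimated zero-inflation weight and
    Poisson rate.  Parameters are optimized on unconstrained scales
    (logit for pi, log for lam) to keep them in their valid ranges."""
    counts = np.asarray(counts)
    def nll(theta):
        pi = 1.0 / (1.0 + np.exp(-theta[0]))   # logit -> (0, 1)
        lam = np.exp(theta[1])                 # log -> positive
        p_zero = pi + (1 - pi) * np.exp(-lam)  # structural + sampling zeros
        p_pos = (1 - pi) * poisson.pmf(counts, lam)
        return -np.sum(np.where(counts == 0, np.log(p_zero),
                                np.log(np.maximum(p_pos, 1e-300))))
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return 1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1])
```

The marginalized models in the paper reparametrize such mixtures so that covariate effects act on the overall mean rather than the latent susceptible class.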
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
Regression methods for spatial data
NASA Technical Reports Server (NTRS)
Yakowitz, S. J.; Szidarovszky, F.
1982-01-01
The kriging approach, a parametric regression method used by hydrologists and mining engineers, among others also provides an error estimate the integral of the regression function. The kriging method is explored and some of its statistical characteristics are described. The Watson method and theory are extended so that the kriging features are displayed. Theoretical and computational comparisons of the kriging and Watson approaches are offered.
Wrong Signs in Regression Coefficients
NASA Technical Reports Server (NTRS)
McGee, Holly
1999-01-01
When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
Modeling Repeated Count Data: Some Extensions of the Rasch Poisson Counts Model.
ERIC Educational Resources Information Center
Duijn, Marijtje A. J. van; Jansen, Margo G. H.
1995-01-01
The Rasch Poisson Counts Model, a unidimensional latent trait model for tests that postulates that intensity parameters are products of test difficulty and subject ability parameters, is expanded into the Dirichlet-Gamma-Poisson model that takes into account variation between subjects and interaction between subjects and tests. (SLD)
Comment on: ‘A Poisson resampling method for simulating reduced counts in nuclear medicine images’
NASA Astrophysics Data System (ADS)
de Nijs, Robin
2015-07-01
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated, based on a Poisson and a Gaussian distribution respectively. Mean, standard deviation, skewness, and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100; only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images: it correctly simulates the statistical properties, even when the images are rounded off.
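The statistical heart of Poisson resampling is binomial thinning: if a pixel value is Poisson(lam) and each recorded count is independently kept with probability p, the thinned value is exactly Poisson(p*lam), so no rounding of the output is ever needed. A minimal sketch (not the authors' Matlab implementation; function and argument names are made up):

```python
import numpy as np

def half_count(image, fraction=0.5, seed=None):
    """Resample a full-count image to a reduced-count image by binomial
    thinning: each recorded count is kept with probability `fraction`.
    The result is integer-valued and preserves Poisson statistics
    (mean and variance both scale by `fraction`)."""
    rng = np.random.default_rng(seed)
    return rng.binomial(np.asarray(image, dtype=np.int64), fraction)
```

The test below checks the defining property on a simulated flood-source image: the thinned mean is half the original and the variance-to-mean ratio stays near 1, as for a Poisson variable.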
Interpretation of Standardized Regression Coefficients in Multiple Regression.
ERIC Educational Resources Information Center
Thayer, Jerome D.
The extent to which standardized regression coefficients (beta values) can be used to determine the importance of a variable in an equation was explored. The beta value and the part correlation coefficient--also called the semi-partial correlation coefficient and reported in squared form as the incremental "r squared"--were compared for variables…
Demosaicing Based on Directional Difference Regression and Efficient Regression Priors.
Wu, Jiqing; Timofte, Radu; Van Gool, Luc
2016-08-01
Color demosaicing is a key image processing step aiming to reconstruct the missing pixels from a recorded raw image. On the one hand, numerous interpolation methods focusing on spatial-spectral correlations have proved very efficient, but they yield poorer image quality and strong visible artifacts. On the other hand, optimization strategies, such as learned simultaneous sparse coding and sparsity- and adaptive-principal-component-analysis-based algorithms, were shown to greatly improve image quality compared with that delivered by interpolation methods, but unfortunately are computationally heavy. In this paper, we propose efficient regression priors as a novel, fast post-processing algorithm that learns the regression priors offline from training data. We also propose an independent efficient demosaicing algorithm based on directional difference regression, and introduce its enhanced version based on fused regression. We achieve an image quality comparable to that of the state-of-the-art methods for three benchmarks, while being order(s) of magnitude faster. PMID:27254866
Interquantile Shrinkage in Regression Models
Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.
2012-01-01
Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546
Survival Data and Regression Models
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time (AFT) models, which are parametric models in which a function of a parameter depends linearly on the covariables. The second one is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is maximum likelihood methodology; although we recall some essential results of ML theory, we refer the reader to the chapter "Logistic Regression" for a more detailed presentation.
Linear regression analysis of survival data with missing censoring indicators.
Wang, Qihua; Dinse, Gregg E
2011-04-01
Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial. PMID:20559722
A regularization corrected score method for nonlinear regression models with covariate error.
Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna
2013-03-01
Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. PMID:23379851
Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing
NASA Astrophysics Data System (ADS)
Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.
2008-07-01
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p_BH) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p_BH are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate, while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
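The Haar-domain HT idea rests on a classical fact: conditional on the pair total n = a + b, the first count of a Poisson pair is Binomial(n, 1/2) when the underlying intensity is locally flat. A minimal, hedged sketch of one decimated level with classical Haar pairs and an exact binomial test (the Bi-Haar filter bank and the Fisher-approximation threshold of the paper are not reproduced):

```python
import numpy as np
from scipy.stats import binomtest

def haar_ht_denoise(counts, alpha=1e-3):
    """One decimated level of Haar-domain hypothesis testing for Poisson
    counts.  For each neighbouring pair (a, b): if the exact binomial
    test of a against Binomial(a + b, 1/2) is insignificant, the pair is
    smoothed to its mean (flat intensity accepted); otherwise the pair
    is kept unchanged (a significant detail, i.e. a real feature)."""
    out = np.asarray(counts, dtype=float).copy()
    for i in range(0, len(out) - 1, 2):
        a, b = int(counts[i]), int(counts[i + 1])
        total = a + b
        if total == 0 or binomtest(a, total, 0.5).pvalue > alpha:
            out[i] = out[i + 1] = total / 2.0
    return out
```

A flat stretch of counts is smoothed, while a strong spike survives because its detail coefficient is highly significant.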
Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers.
Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray
2010-01-12
We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843
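The full nonlinear Poisson-Boltzmann solvers benchmarked above are far too large to sketch, but the relaxation family they build on is not. As a hedged illustration only, here is successive over-relaxation applied to the linear 1D finite-difference Poisson equation (grid, boundary conditions, and the relaxation parameter omega are assumptions of the example):

```python
import numpy as np

def sor_poisson(rho, omega=1.8, tol=1e-8, max_sweeps=20000):
    """Successive over-relaxation for -u'' = rho on (0, 1) with
    u(0) = u(1) = 0, discretized by central differences.  Each sweep
    blends the Gauss-Seidel update with the current value using the
    relaxation parameter omega (omega = 1 recovers Gauss-Seidel)."""
    n = rho.size
    h2 = 1.0 / (n + 1) ** 2
    u = np.zeros(n + 2)                  # padded with boundary zeros
    for _ in range(max_sweeps):
        diff = 0.0
        for i in range(1, n + 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + h2 * rho[i - 1])
            diff = max(diff, abs(gs - u[i]))
            u[i] = (1 - omega) * u[i] + omega * gs
        if diff < tol:                   # sweep changed nothing: converged
            break
    return u[1:-1]
```

The adaptive relaxation-parameter and damping strategies explored in the paper modify exactly this omega blending step.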
Error propagation in PIV-based Poisson pressure calculations
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2015-11-01
After more than 20 years of development, PIV has become a standard non-invasive velocity field measurement technique and promises to make PIV-based pressure calculations possible. However, the errors inherent in PIV velocity fields propagate through integration and contaminate the calculated pressure field. We propose an analysis that shows how the uncertainties in the velocity field propagate to the pressure field through the Poisson equation. First we model the dynamics of error propagation using boundary value problems (BVPs). Next, the L2-norm and/or L∞-norm is utilized as the measure of error in the velocity and pressure fields. Finally, using analysis techniques including the maximum principle and the Poincaré inequality, the error in the pressure field can be bounded by the error level of the data by considering the well-posedness of the BVPs. Specifically, we examine if and how the error in the pressure field depends continuously on the BVP data. Factors such as flow field geometry, boundary conditions, and velocity field noise levels will be discussed analytically.
The Poisson Gamma distribution for wind speed data
NASA Astrophysics Data System (ADS)
Ćakmakyapan, Selen; Özel, Gamze
2016-04-01
Wind energy is one of the most significant clean alternative energy sources and among the most rapidly developing renewable energy sources in the world. For the evaluation of wind energy potential, probability density functions (pdfs) are usually used to model wind speed distributions. Selecting the appropriate pdf reduces the wind power estimation error and allows the characteristics of the site to be captured. In the literature, different pdfs have been used to model wind speed data for wind energy applications. In this study, we propose a new probability distribution to model wind speed data. First, we define the new probability distribution, named the Poisson-Gamma (PG) distribution, and analyze wind speed data sets, obtained from the Turkish State Meteorological Service, for five pressure levels at the station. We then model the data sets with the Exponential, Weibull, Lomax, three-parameter Burr, Gumbel, Gamma, and Rayleigh distributions, which are commonly used to model wind speed data, as well as the PG distribution. Finally, we compare the distributions to select the best-fitting model and demonstrate that the PG distribution models the data sets better.
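The model-comparison step can be sketched generically: fit each candidate by maximum likelihood and rank by an information criterion. The PG distribution itself is not available in scipy, so only the classical candidates are shown; the AIC ranking and the pinned location parameter are assumptions of this sketch:

```python
import numpy as np
from scipy import stats

def best_fit(speeds, candidates=("weibull_min", "gamma", "rayleigh", "expon")):
    """Fit several candidate wind-speed distributions by maximum
    likelihood (location pinned at 0, since speeds are nonnegative) and
    rank them by AIC = 2k - 2*loglik; smaller is better.
    Returns (name_of_best, {name: AIC})."""
    results = {}
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(speeds, floc=0)            # ML fit, loc fixed at 0
        loglik = np.sum(dist.logpdf(speeds, *params))
        k = len(params) - 1                          # loc was not estimated
        results[name] = 2 * k - 2 * loglik
    return min(results, key=results.get), results
```

On data simulated from a Rayleigh distribution, the Rayleigh candidate should easily beat the exponential in AIC.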
Poisson process approximation for sequence repeats, and sequencing by hybridization.
Arratia, R; Martin, D; Reinert, G; Waterman, M S
1996-01-01
Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all left-most long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds. PMID:8891959
A Boussinesq-scaled, pressure-Poisson water wave model
NASA Astrophysics Data System (ADS)
Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint
2015-02-01
Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence properties with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O (μ2) and weakly nonlinear O (μN) are presented, and the analytical and numerical properties of the O (μ2) and O (μ4) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise due to the Boussinesq scaling. The optimal O (μ2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O (μ4) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters which can be used to optimize shoaling or nonlinear properties. The O (μ4) model shows excellent agreement with experimental data.
Generalized master equation via aging continuous-time random walks.
Allegrini, Paolo; Aquino, Gerardo; Grigolini, Paolo; Palatella, Luigi; Rosa, Angelo
2003-11-01
We discuss the problem of the equivalence between continuous-time random walk (CTRW) and generalized master equation (GME). The walker, making instantaneous jumps from one site of the lattice to another, resides in each site for extended times. The sojourn times have a distribution density psi(t) that is assumed to be an inverse power law with the power index micro. We assume that the Onsager principle is fulfilled, and we use this assumption to establish a complete equivalence between GME and the Montroll-Weiss CTRW. We prove that this equivalence is confined to the case where psi(t) is an exponential. We argue that this is so because the Montroll-Weiss CTRW, as recently proved by Barkai [E. Barkai, Phys. Rev. Lett. 90, 104101 (2003)], is nonstationary, thereby implying aging, while the Onsager principle is valid only in the case of fully aged systems. The case of a Poisson distribution of sojourn times is the only one with no aging associated with it, and consequently with no need to establish special initial conditions to fulfill the Onsager principle. We consider the case of a dichotomous fluctuation, and we prove that the Onsager principle is fulfilled for any form of regression to equilibrium provided that the stationary condition holds true. We set the stationary condition on both the CTRW and the GME, thereby creating a condition of total equivalence, regardless of the nature of the waiting-time distribution. As a consequence of this procedure we create a GME that is a bona fide master equation, in spite of being non-Markov. We note that the memory kernel of the GME affords information on the interaction between the system of interest and its bath. The Poisson case yields a bath with infinitely fast fluctuations. We argue that departing from the Poisson form has the effect of creating a condition of infinite memory and that these results might be useful to shed light on the problem of how to unravel non-Markov quantum master equations. PMID:14682862
Partial least squares Cox regression for genome-wide data.
Nygård, Ståle; Borgan, Ornulf; Lingjaerde, Ole Christian; Størvold, Hege Leite
2008-06-01
Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120-S127, 2002) uses a reformulation of the Cox likelihood to a Poisson type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables like disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox score and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised version outperform other Cox PLS methods. PMID:18188699
Universal Poisson Statistics of mRNAs with Complex Decay Pathways.
Thattai, Mukund
2016-01-19
Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. PMID:26743048
Cactus: An Introduction to Regression
ERIC Educational Resources Information Center
Hyde, Hartley
2008-01-01
When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…
Regression modelling of Dst index
NASA Astrophysics Data System (ADS)
Parnowski, Aleksei
We developed a new approach to the problem of real-time space weather indices forecasting using readily available data from ACE and a number of ground stations. It is based on the regression modelling method [1-3], which combines the benefits of empirical and statistical approaches. Mathematically it is based upon partial regression analysis and Monte Carlo simulations to deduce the empirical relationships in the system. The typical elapsed time per forecast is a few seconds on an average PC. This technique can be easily extended to other indices like AE and Kp. The proposed system can also be useful for investigating physical phenomena related to interactions between the solar wind and the magnetosphere; it has already helped uncover two new geoeffective parameters. 1. Parnowski A.S. Regression modeling method of space weather prediction // Astrophysics Space Science. — 2009. — V. 323, 2. — P. 169-180. doi:10.1007/s10509-009-0060-4 [arXiv:0906.3271] 2. Parnovskiy A.S. Regression Modeling and its Application to the Problem of Prediction of Space Weather // Journal of Automation and Information Sciences. — 2009. — V. 41, 5. — P. 61-69. doi:10.1615/JAutomatInfScien.v41.i5.70 3. Parnowski A.S. Statistically predicting Dst without satellite data // Earth, Planets and Space. — 2009. — V. 61, 5. — P. 621-624.
Fungible Weights in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.
2008-01-01
Every set of alternate weights (i.e., nonleast squares weights) in a multiple regression analysis with three or more predictors is associated with an infinite class of weights. All members of a given class can be deemed "fungible" because they yield identical "SSE" (sum of squared errors) and R[superscript 2] values. Equations for generating…
Spontaneous regression of breast cancer.
Lewison, E F
1976-11-01
The dramatic but rare regression of a verified case of breast cancer in the absence of adequate, accepted, or conventional treatment has been observed and documented by clinicians over the course of many years. In my practice limited to diseases of the breast, over the past 25 years I have observed 12 patients with a unique and unusual clinical course valid enough to be regarded as spontaneous regression of breast cancer. These 12 patients, with clinically confirmed breast cancer, had temporary arrest or partial remission of their disease in the absence of complete or adequate treatment. In most of these cases, spontaneous regression could not be equated ultimately with permanent cure. Three of these case histories are summarized, and patient characteristics of pertinent clinical interest in the remaining case histories are presented and discussed. Despite widespread doubt and skepticism, there is ample clinical evidence to confirm the fact that spontaneous regression of breast cancer is a rare phenomenon but is real and does occur. PMID:799758
Correlation Weights in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.; Jones, Jeff A.
2010-01-01
A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…
Quantile Regression with Censored Data
ERIC Educational Resources Information Center
Lin, Guixian
2009-01-01
The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…
Ridge Regression for Interactive Models.
ERIC Educational Resources Information Center
Tate, Richard L.
1988-01-01
An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are favorable to…
Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors
Woodard, Dawn B.; Crainiceanu, Ciprian; Ruppert, David
2013-01-01
We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape varies stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials. PMID:24293988
Poisson-Lie T-duals of the bi-Yang-Baxter models
NASA Astrophysics Data System (ADS)
Klimčík, Ctirad
2016-09-01
We prove the conjecture of Sfetsos, Siampos and Thompson that suitable analytic continuations of the Poisson-Lie T-duals of the bi-Yang-Baxter sigma models coincide with the recently introduced generalized λ-models. We then generalize this result by showing that the analytic continuation of a generic σ-model of "universal WZW-type" introduced by Tseytlin in 1993 is nothing but the Poisson-Lie T-dual of a generic Poisson-Lie symmetric σ-model introduced by Klimčík and Ševera in 1995.
Universal Negative Poisson Ratio of Self-Avoiding Fixed-Connectivity Membranes
Bowick, M.; Cacciuto, A.; Thorleifsson, G.; Travesset, A.
2001-10-01
We determine the Poisson ratio of self-avoiding fixed-connectivity membranes, modeled as impenetrable plaquettes, to be {sigma}=-0.37(6) , in statistical agreement with the Poisson ratio of phantom fixed-connectivity membranes {sigma}=-0.32(4) . Together with the equality of critical exponents, this result implies a unique universality class for fixed-connectivity membranes. Our findings thus establish that physical fixed-connectivity membranes provide a wide class of auxetic (negative Poisson ratio) materials with significant potential applications in materials science.
Blow-up conditions for two dimensional modified Euler-Poisson equations
NASA Astrophysics Data System (ADS)
Lee, Yongki
2016-09-01
The multi-dimensional Euler-Poisson system describes the dynamic behavior of many important physical flows, yet as a hyperbolic system its solution can blow up for some initial configurations. This article strives to advance our understanding of critical threshold phenomena through the study of a two-dimensional modified Euler-Poisson system with a modified Riesz transform in which the singularity at the origin is removed. We identify upper thresholds for finite time blow-up of solutions for the modified Euler-Poisson equations with attractive/repulsive forcing.
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program
3DGRAPE - THREE DIMENSIONAL GRIDS ABOUT ANYTHING BY POISSON'S EQUATION
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1994-01-01
The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids. 3DGRAPE is designed to make computational grids in or about almost any shape. These grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. 3DGRAPE uses zones to solve the problem of warping one cube into the physical domain in real-world computational fluid dynamics problems. In a zonal approach, a physical domain is divided into regions, each of which maps into its own computational cube. It is believed that even the most complicated physical region can be divided into zones, and since it is possible to warp a cube into each zone, a grid generator which is oriented to zones and allows communication across zonal boundaries (where appropriate) solves the problem of topological complexity. 3DGRAPE expects to read in already-distributed x,y,z coordinates on the bodies of interest, coordinates which will remain fixed during the entire grid-generation process. The 3DGRAPE code makes no attempt to fit given body shapes and redistribute points thereon. Body-fitting is a formidable problem in itself. The user must either be working with some simple analytical body shape, upon which a simple analytical distribution can be easily effected, or must have available some sophisticated stand-alone body-fitting software. 3DGRAPE does not require the user to supply the block-to-block boundaries nor the shapes of the distribution of points. 3DGRAPE will typically supply those block-to-block boundaries simply as surfaces in the elliptic grid. Thus at block-to-block boundaries the following conditions are obtained: (1) grid lines will
Parkinson disease male-to-female ratios increase with age: French nationwide study and meta-analysis
Moisan, Frédéric; Kab, Sofiane; Mohamed, Fatima; Canonico, Marianne; Le Guern, Morgane; Quintin, Cécile; Carcaillon, Laure; Nicolau, Javier; Duport, Nicolas; Singh-Manoux, Archana; Boussac-Zarebska, Marjorie; Elbaz, Alexis
2016-01-01
Background Parkinson’s disease (PD) is 1.5 times more frequent in men than women. Whether age modifies this ratio is unclear. We examined whether male-to-female (M–F) ratios change with age through a French nationwide prevalence/incidence study (2010) and a meta-analysis of incidence studies. Methods We used French national drug claims databases to identify PD cases using a validated algorithm. We computed M–F prevalence/incidence ratios overall and by age using Poisson regression. Ratios were regressed on age to estimate their annual change. We identified all PD incidence studies with age/sex-specific data, and performed a meta-analysis of M–F ratios. Results On the basis of 149 672 prevalent (50% women) and 25 438 incident (49% women) cases, age-standardised rates were higher in men (prevalence=2.865/1000; incidence=0.490/1000 person-years) than women (prevalence=1.934/1000; incidence=0.328/1000 person-years). The overall M–F ratio was 1.48 for prevalence and 1.49 for incidence. Prevalence and incidence M–F ratios increased by 0.05 and 0.14, respectively, per 10 years of age. Incidence was similar in men and women under 50 years (M–F ratio <1.2, p>0.20), and over 1.6 (p<0.001) times higher in men than women above 80 years (p trend <0.001). A meta-analysis of 22 incidence studies (14 126 cases, 46% women) confirmed that M–F ratios increased with age (0.26 per 10 years, p trend=0.005). Conclusions Age-increasing M–F ratios suggest that PD aetiology changes with age. Sex-related risk/protective factors may play a different role across the continuum of age at onset. This finding may inform aetiological PD research. PMID:26701996
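Rate ratios from a Poisson regression with a log person-years offset, of the kind used for the M–F ratios here, can be sketched with a small self-contained Newton (IRLS) fit. The counts and person-years below are illustrative, not the study's data:

```python
import numpy as np

# Illustrative data: incident cases and person-years in two groups (men, women)
cases = np.array([120.0, 80.0])
py = np.array([250000.0, 240000.0])

# Poisson GLM: log(mu) = b0 + b1*male + log(person-years)
X = np.array([[1.0, 1.0],    # men: intercept + male indicator
              [1.0, 0.0]])   # women: intercept only
offset = np.log(py)

beta = np.zeros(2)
for _ in range(25):                         # Newton-Raphson (IRLS) iterations
    mu = np.exp(X @ beta + offset)          # fitted expected counts
    grad = X.T @ (cases - mu)               # score vector
    hess = X.T @ np.diag(mu) @ X            # Fisher information
    beta += np.linalg.solve(hess, grad)

rate_ratio = np.exp(beta[1])
print(rate_ratio)   # equals (120/250000)/(80/240000) = 1.44
```

With one indicator covariate the model is saturated, so the fitted rate ratio coincides with the direct ratio of incidence rates; the same machinery extends to age-specific ratios by adding covariates.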
Regression analysis of networked data
Zhou, Yan; Song, Peter X.-K.
2016-01-01
This paper concerns regression methodology for assessing relationships between multi-dimensional response variables and covariates that are correlated within a network. To address analytical challenges associated with the integration of network topology into the regression analysis, we propose a hybrid quadratic inference method that uses both prior and data-driven correlations among network nodes. A Godambe information-based tuning strategy is developed to allocate weights between the prior and data-driven network structures, so the estimator is efficient. The proposed method is conceptually simple and computationally fast, and has appealing large-sample properties. It is evaluated by simulation, and its application is illustrated using neuroimaging data from an association study of the effects of iron deficiency on auditory recognition memory in infants. PMID:27279658
Observational Studies: Matching or Regression?
Brazauskas, Ruta; Logan, Brent R
2016-03-01
In observational studies with an aim of assessing treatment effect or comparing groups of patients, several approaches could be used. Often, baseline characteristics of patients may be imbalanced between groups, and adjustments are needed to account for this. It can be accomplished either via appropriate regression modeling or, alternatively, by conducting a matched pairs study. The latter is often chosen because it makes groups appear to be comparable. In this article we considered these 2 options in terms of their ability to detect a treatment effect in time-to-event studies. Our investigation shows that a Cox regression model applied to the entire cohort is often a more powerful tool in detecting treatment effect as compared with a matched study. Real data from a hematopoietic cell transplantation study is used as an example. PMID:26712591
Activity of Excitatory Neuron with Delayed Feedback Stimulated with Poisson Stream is Non-Markov
NASA Astrophysics Data System (ADS)
Vidybida, Alexander K.
2015-09-01
For a class of excitatory spiking neuron models with delayed feedback fed with a Poisson stochastic process, it is proven that the stream of output interspike intervals cannot be presented as a Markov process of any order.
Hung, Tran Loc; Giang, Le Truong
2016-01-01
Using the Stein-Chen method some upper bounds in Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomial distributed random variables are established in this note. PMID:26844026
Particle trapping: A key requisite of structure formation and stability of Vlasov–Poisson plasmas
Schamel, Hans
2015-04-15
Particle trapping is shown to control the existence of undamped coherent structures in Vlasov–Poisson plasmas and thereby affects the onset of plasma instability beyond the realm of linear Landau theory.
A Hands-on Activity for Teaching the Poisson Distribution Using the Stock Market
ERIC Educational Resources Information Center
Dunlap, Mickey; Studstill, Sharyn
2014-01-01
The number of increases a particular stock makes over a fixed period follows a Poisson distribution. This article discusses using this easily-found data as an opportunity to let students become involved in the data collection and analysis process.
Accurate Young's modulus measurement based on Rayleigh wave velocity and empirical Poisson's ratio
NASA Astrophysics Data System (ADS)
Li, Mingxia; Feng, Zhihua
2016-07-01
This paper presents a method for Young's modulus measurement based on Rayleigh wave speed. The error in Poisson's ratio has a weak influence on the measurement of Young's modulus based on Rayleigh wave speed, and Poisson's ratio varies minimally within a given material; thus, we can accurately estimate Young's modulus from the surface wave speed and a rough Poisson's ratio. We numerically analysed three methods using Rayleigh, longitudinal, and transversal wave speed, respectively, and the error in Poisson's ratio shows the least influence on the result in the method involving Rayleigh wave speed. An experiment was performed and proved the feasibility of this method. The device for speed measurement can be small, and no sample pretreatment is needed. Hence, developing a portable instrument based on this method is possible. This method makes a good compromise between usability and precision.
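The weak sensitivity to Poisson's ratio can be checked numerically. The sketch below uses Viktorov's well-known approximation for the Rayleigh speed, c_R ≈ c_T (0.87 + 1.12ν)/(1 + ν), together with E = 2ρ(1 + ν)c_T²; the material values are steel-like illustrative numbers, not the paper's data:

```python
# Young's modulus from Rayleigh wave speed with an assumed Poisson's ratio.
# Viktorov's approximation: c_R ~= c_T * (0.87 + 1.12*nu) / (1 + nu).
def youngs_from_rayleigh(c_R, rho, nu):
    c_T = c_R * (1.0 + nu) / (0.87 + 1.12 * nu)   # inferred shear speed
    return 2.0 * rho * (1.0 + nu) * c_T**2        # E = 2*rho*(1+nu)*c_T^2

c_R, rho = 2950.0, 7850.0        # m/s and kg/m^3, steel-like values
E_mid = youngs_from_rayleigh(c_R, rho, 0.29)
E_off = youngs_from_rayleigh(c_R, rho, 0.24)     # Poisson's ratio off by 0.05

print(E_mid / 1e9)                   # roughly 205 GPa
print(abs(E_off - E_mid) / E_mid)    # only ~2% change for a 0.05 error in nu
```

A sizeable error in the assumed Poisson's ratio thus shifts the inferred Young's modulus by only a few percent, consistent with the paper's argument for the Rayleigh-wave route.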
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
NASA Astrophysics Data System (ADS)
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Solving Poisson's equation for the space charge force remains the major time cost in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps for the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high performance calculation of the space charge effect in accelerators.
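A minimal sketch of the classical free-space Green's function convolution that such solvers build on (not the integrated Green's function variant itself; the grid size and the crude self-term regularisation below are illustrative assumptions): the potential of a point source, computed by zero-padded FFT convolution with G(r) = 1/(4πr), reproduces the analytic Coulomb form.

```python
import numpy as np

n = 32                       # grid points per axis (illustrative)
h = 1.0 / n                  # grid spacing
ax = (np.arange(n) - n // 2) * h

# Unit point "charge" at the grid centre, stored as a density
rho = np.zeros((n, n, n))
rho[n // 2, n // 2, n // 2] = 1.0 / h**3

# Green's function on a doubled (zero-padded) grid; indices >= n wrap
# around to negative offsets so circular convolution acts as linear.
idx = np.arange(2 * n)
idx = np.where(idx < n, idx, idx - 2 * n)
X, Y, Z = np.meshgrid(idx * h, idx * h, idx * h, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
G = np.zeros_like(r)
G[r > 0] = 1.0 / (4.0 * np.pi * r[r > 0])
G[0, 0, 0] = 1.0 / (4.0 * np.pi * (0.5 * h))   # crude self-term regularisation

# Zero-padded convolution phi = (G * rho) * h^3 via FFT
rho_pad = np.zeros((2 * n, 2 * n, 2 * n))
rho_pad[:n, :n, :n] = rho
phi = np.real(np.fft.ifftn(np.fft.fftn(G) * np.fft.fftn(rho_pad)))[:n, :n, :n] * h**3

# Compare with the analytic point-charge potential a few cells away
i = n // 2 + 8
analytic = 1.0 / (4.0 * np.pi * abs(ax[i] - ax[n // 2]))
print(phi[i, n // 2, n // 2], analytic)   # the two values agree
```

The optimizations described in the abstract (reduced integrated Green's function, DCT, implicit zero-padding) attack exactly the padded FFTs and Green's function setup that dominate this baseline routine.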
Global Existence for the Vlasov-Poisson System in Bounded Domains
NASA Astrophysics Data System (ADS)
Hwang, Hyung Ju; Velázquez, Juan J. L.
2010-03-01
In this paper we prove global existence for solutions of the Vlasov-Poisson system in convex bounded domains with specular boundary conditions and with a prescribed outward electrical field at the boundary.
On deformations of one-dimensional Poisson structures of hydrodynamic type with degenerate metric
NASA Astrophysics Data System (ADS)
Savoldi, Andrea
2016-06-01
We provide a complete list of two- and three-component Poisson structures of hydrodynamic type with degenerate metric, and study their homogeneous deformations. In the non-degenerate case any such deformation is trivial, that is, can be obtained via Miura transformations. We demonstrate that in the degenerate case this class of deformations is non-trivial, and depends on a certain number of arbitrary functions. This shows that the second Poisson-Lichnerowicz cohomology group does not vanish.
NASA Astrophysics Data System (ADS)
Goldstein, R. V.; Gorodtsov, V. A.; Lisovenko, D. S.
2013-09-01
The study of materials with unusual mechanical properties has attracted a lot of attention in view of new possibilities for their application. One of these properties is a negative Poisson's ratio, which is commonly found in crystalline materials (materials with linear anisotropy). However, until now the possibility of negative Poisson's ratios in tubular crystals (materials with curvilinear anisotropy), e.g., in today's popular nanotubes, has not been studied.
Hyperbolically Patterned 3D Graphene Metamaterial with Negative Poisson's Ratio and Superelasticity.
Zhang, Qiangqiang; Xu, Xiang; Lin, Dong; Chen, Wenli; Xiong, Guoping; Yu, Yikang; Fisher, Timothy S; Li, Hui
2016-03-16
A hyperbolically patterned 3D graphene metamaterial (GM) with negative Poisson's ratio and superelasticity is highlighted. It is synthesized by a modified hydrothermal approach and subsequent oriented freeze-casting strategy. GM presents a tunable Poisson's ratio by adjusting the structural porosity, macroscopic aspect ratio (L/D), and freeze-casting conditions. Such a GM suggests promising applications as soft actuators, sensors, robust shock absorbers, and environmental remediation. PMID:26788692
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made in improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum
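One of the estimators discussed, the method of moments, can be sketched in a few lines using the NB2 parametrisation Var(Y) = μ + αμ². The sample below is large with a moderate mean, i.e. the benign case; the abstract's point is precisely that this estimator degrades when the sample mean is low:

```python
import numpy as np

# Simulate Poisson-gamma (negative binomial) counts. In numpy's
# parametrisation mean = n*(1-p)/p and Var = mu + mu^2/n, so the
# NB2 dispersion is alpha = 1/n_shape.
rng = np.random.default_rng(2)
n_shape, mu = 2.0, 5.0                # true dispersion alpha = 0.5
p = n_shape / (n_shape + mu)
y = rng.negative_binomial(n_shape, p, size=200_000)

# Method-of-moments estimator: alpha_hat = (s^2 - xbar) / xbar^2
xbar, s2 = y.mean(), y.var(ddof=1)
alpha_hat = (s2 - xbar) / xbar**2
print(alpha_hat)                      # close to 0.5 for this large sample
```

Repeating this with a small sample mean (say μ well below 1) and modest sample sizes shows the estimate becoming erratic, which is the behaviour the study quantifies.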
On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris
NASA Technical Reports Server (NTRS)
Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt
2007-01-01
A convenient and powerful method is used to determine if radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations observed from that expected with Poisson statistics, either independent or dependent of altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
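The exponential inter-arrival property invoked above is easy to check numerically. The sketch below (an illustration, not the authors' analysis code) uses the fact that a homogeneous Poisson process conditioned on its event count places the events uniformly at random, then verifies that the gaps between sorted event times behave exponentially with rate N/T.

```python
import random

# Simulate N events uniformly on [0, T]: a Poisson process conditioned
# on its count. The sorted-time gaps should be exponential with rate N/T.
rng = random.Random(42)
T, N = 1000.0, 50_000
times = sorted(rng.uniform(0.0, T) for _ in range(N))
gaps = [b - a for a, b in zip(times, times[1:])]

rate = N / T
mean_gap = sum(gaps) / len(gaps)
# Exponential tail check: P(gap > 1/rate) = exp(-1), about 0.368
tail_frac = sum(1 for g in gaps if g > 1.0 / rate) / len(gaps)
```

A deviation of `tail_frac` from exp(-1), or clustering visible in the gap histogram, would flag a departure from Poisson statistics of the kind the abstract discusses.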
Hydrodynamic limit of Wigner-Poisson kinetic theory: Revisited
Akbari-Moghanjoughi, M.
2015-02-15
In this paper, we revisit the hydrodynamic limit of the Langmuir wave dispersion relation based on the Wigner-Poisson model in connection with that obtained directly from the original Lindhard dielectric function based on the random-phase approximation. It is observed that the (fourth-order) expansion of the exact Lindhard dielectric constant correctly reduces to the hydrodynamic dispersion relation with an additional term of fourth order, besides that caused by the quantum diffraction effect. It is also revealed that the generalized Lindhard dielectric theory accounts for the recently discovered Shukla-Eliasson attractive potential (SEAP). However, the expansion of the exact Lindhard static dielectric function leads to a k^4 term of different magnitude than that obtained from the linearized quantum hydrodynamics model. It is shown that a correction factor of 1/9 should be included in the term arising from the quantum Bohm potential of the momentum balance equation in the fluid model in order for a correct plasma dielectric response treatment. Finally, it is observed that the long-range oscillatory screening potential (Friedel oscillations) of type cos(2k_F r)/r^3, which is a consequence of the divergence of the dielectric function at the point k = 2k_F in a quantum plasma, arises due to the finiteness of the Fermi wavenumber and is smeared out in the limit of very high electron number-densities, typical of white dwarfs and neutron stars. In the very low electron number-density regime, typical of semiconductors and metals, where the Friedel oscillation wavelength becomes much larger compared to the interparticle distances, the SEAP appears with a much deeper potential valley. It is remarked that the fourth-order approximate Lindhard dielectric constant approaches that of the linearized quantum hydrodynamic model in the limit of very high electron number-density. By evaluation of the imaginary part of the Lindhard dielectric function, it is shown that the
Multilevel Methods for the Poisson-Boltzmann Equation
NASA Astrophysics Data System (ADS)
Holst, Michael Jay
We consider the numerical solution of the Poisson-Boltzmann equation (PBE), a three-dimensional second order nonlinear elliptic partial differential equation arising in biophysics. This problem has several interesting features impacting numerical algorithms, including discontinuous coefficients representing material interfaces, rapid nonlinearities, and three spatial dimensions. Similar equations occur in various applications, including nuclear physics, semiconductor physics, population genetics, astrophysics, and combustion. In this thesis, we study the PBE and its discretizations, and develop multilevel-based methods for approximating the solutions of these types of equations. We first outline the physical model and derive the PBE, which describes the electrostatic potential of a large complex biomolecule lying in a solvent. We next study the theoretical properties of the linearized and nonlinear PBE using standard function space methods; since this equation has not been previously studied theoretically, we provide existence and uniqueness proofs in both the linearized and nonlinear cases. We also analyze box-method discretizations of the PBE, establishing several properties of the discrete equations which are produced. In particular, we show that the discrete nonlinear problem is well-posed. We study and develop linear multilevel methods for interface problems, based on algebraic enforcement of Galerkin or variational conditions, and on coefficient averaging procedures. Using a stencil calculus, we show that in certain simplified cases the two approaches are equivalent, with different averaging procedures corresponding to different prolongation operators. We also develop methods for nonlinear problems based on a nonlinear multilevel method, and on linear multilevel methods combined with a globally convergent damped-inexact-Newton method. We derive a necessary and sufficient descent condition for the inexact-Newton direction, enabling the development of extremely
Correlates of Root Caries Experience in Middle-Aged and Older Adults within the Northwest PRECEDENT
Chi, Donald L.; Berg, Joel H.; Kim, Amy S.; Scott, JoAnna
2014-01-01
STRUCTURED ABSTRACT Background We examined the correlates of root caries experience for middle-aged (ages 45–64 years) and older adults (ages 65+ years) to test the hypothesis that the factors related to root caries are different for middle-aged versus older adults. Methods This observational cross-sectional study focused on adult patients ages 45–97 years recruited from the Northwest PRECEDENT (N=775 adults). The outcome variable was any root caries experience (no/yes). Sociodemographic, intraoral, and behavioral factors were hypothesized as potential root caries correlates. We used Poisson regression models to generate overall and age-stratified prevalence ratios (PR) of root caries and Generalized Estimating Equations to account for practice-level clustering of participants. Results About 20% of adults had any root caries. Dentists’ assessment that the patient was at high risk for any caries was associated with greater prevalence of root caries experience in both middle-aged adults (PR=2.70, 95% CI: 1.63, 4.46) and older adults (PR=1.87, 95% CI: 1.19, 2.95). The following factors were significantly associated with increased root caries prevalence, but only for middle-aged adults: male sex (P=.02), self-reported dry mouth (P<.0001), exposed roots (P=.03), and increased frequency of eating or drinking between meals (P=.03). No other covariates were related to root caries experience for older adults. Conclusions Within a practice-based research network, the factors associated with root caries experience were different for middle-aged and older adults. Future work should identify relevant root caries correlates for adults ages 65+ years. Clinical Implications Interventions aimed at preventing root caries are likely to be different for middle-aged and older adults. Root caries prevention programs should address the appropriate age-based risk factors. PMID:23633699
Regression analysis of cytopathological data
Whittemore, A.S.; McLarty, J.W.; Fortson, N.; Anderson, K.
1982-12-01
Epithelial cells from the human body are frequently labelled according to one of several ordered levels of abnormality, ranging from normal to malignant. The label of the most abnormal cell in a specimen determines the score for the specimen. This paper presents a model for the regression of specimen scores against continuous and discrete variables, as in host exposure to carcinogens. Application to data and tests for adequacy of model fit are illustrated using sputum specimens obtained from a cohort of former asbestos workers.
Multiatlas segmentation as nonparametric regression.
Awate, Suyash P; Whitaker, Ross T
2014-09-01
This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528
On the validity of the Poisson assumption in sampling nanometer-sized aerosols
Damit, Brian E; Wu, Chang-Yu; Cheng, Mengdawn
2014-01-01
A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.
Early age at menarche and wheezing in adolescence. The 1993 Pelotas (Brazil) birth cohort study
Joseph, Gary; Baptista Menezes, Ana Maria; Wehrmeister, Fernando C.
2015-01-01
Objective To evaluate the effect of menarche before 11 years of age on the incidence of wheezing/asthma in girls 11 to 18 years of age. Methods The study sample comprised 1,350 girls from a birth cohort that started in 1993 in the urban area of the city of Pelotas, southern Brazil; this cohort was followed until 18 years of age. We assessed wheezing by the question, “Have you ever had wheezing in the chest at any time in the past?,” from the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire. Early menarche was defined as occurring before 11 years of age. We estimated the cumulative incidence of wheezing, excluding from the analysis all participants who reported wheezing before the age of 11 years. We performed the chi-square test to assess the association between ever wheezing and independent variables. Poisson regression models with robust variance were used to estimate cumulative incidence ratios. Results The average age at menarche in the cohort girls was 12 years (95% CI: 11.1–12.1). The prevalence of menarche before 11 years of age was 11% (95% CI: 9.7–12.3). The cumulative incidence of wheezing from 11 to 18 years of age was 33.5% (95% CI: 30.9–36.0). The crude association between ever wheezing in adolescence and menarche before age 11 was 1.19 (95% CI: 0.96–1.48). After adjusting for early childhood and contemporaneous variables, no significant association between menarche before 11 years of age and wheezing during adolescence was found (CIR: 1.18; 95% CI: 0.93–1.49). Conclusion Early menarche before 11 years of age is not associated with an increased risk of wheezing during adolescence. PMID:26870751
Kleinman, Lawrence C; Norton, Edward C
2009-01-01
Objective To develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common. Study Design Regression risk analysis estimates were compared with internal standards as well as with Mantel–Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR. Data Collection Data sets produced using Monte Carlo simulations. Principal Findings Regression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases. Conclusions Regression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case–control studies, particularly when outcomes are common or effect size is large. PMID:18793213
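The gap between odds ratios and risk ratios for common outcomes, and the back-conversion equation the paper criticizes (often attributed to Zhang and Yu), can be illustrated with a toy calculation. The numbers below are hypothetical, not the paper's simulated data:

```python
# Risks in exposed vs. unexposed groups; the outcome is common (>10%),
# which is exactly when the odds ratio overstates the risk ratio.
p1, p0 = 0.60, 0.40

risk_ratio = p1 / p0
odds = lambda p: p / (1.0 - p)
odds_ratio = odds(p1) / odds(p0)

# The widely used back-conversion OR -> RR. For crude (unadjusted)
# proportions it recovers the risk ratio exactly, but applied to an
# adjusted OR with confounding it can be badly biased, which is the
# paper's motivation for regression risk analysis.
rr_from_or = odds_ratio / (1.0 - p0 + p0 * odds_ratio)
```

Here the risk ratio is 1.5 while the odds ratio is 2.25, a 50% overstatement even before any adjustment enters.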
Birthweight Related Factors in Northwestern Iran: Using Quantile Regression Method
Fallah, Ramazan; Kazemnejad, Anoshirvan; Zayeri, Farid; Shoghli, Alireza
2016-01-01
Introduction: Birthweight is one of the most important predictors of health status in adulthood. Ensuring balanced birthweights is one of the priorities of the health system in most industrialized and developed countries, and this indicator is used to assess the growth and health status of infants. The aim of this study was to assess the birthweight of neonates in Zanjan province using quantile regression. Methods: This descriptive analytical study was carried out on pre-registered data (March 2010 - March 2012) of neonates in urban/rural health centers of Zanjan province, selected by multiple-stage cluster sampling. Data were analyzed using multiple linear regression and the quantile regression method in SAS 9.2 statistical software. Results: Of 8,456 newborns, 4,146 (49%) were female. The mean age of the mothers was 27.1±5.4 years. The mean birthweight of the neonates was 3,104 ± 431 grams. Five hundred and seventy-three (6.8%) of the neonates weighed less than 2,500 grams. In all quantiles, gestational age of the neonates (p<0.05) and weight and educational level of the mothers (p<0.05) showed a significant linear relationship with the birthweight of the neonates. However, sex and birth rank of the neonates, mothers' age, place of residence (urban/rural) and occupation were not significant in any quantile (p>0.05). Conclusion: This study revealed that the results of multiple linear regression and quantile regression were not identical. We strongly recommend the use of quantile regression when the response variable is asymmetric or the data contain outliers. PMID:26925889
Practical Session: Multiple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Three exercises are proposed to illustrate simple linear regression. The first investigates the influence of several factors on atmospheric pollution. It was proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 cities of the U.S. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
A flexible count data regression model for risk analysis.
Guikema, Seth D; Coffelt, Jeremy P
2008-02-01
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLM) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data, and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM fits overdispersed data sets as well as the commonly used existing models do, while outperforming those models on underdispersed data sets. PMID:18304118
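The COM distribution's dispersion behavior follows directly from its kernel lam^k / (k!)^nu. The snippet below is a minimal illustration (a truncated, numerically normalized pmf; not the article's GLM implementation): nu = 1 recovers the Poisson, nu > 1 gives underdispersion, and nu < 1 gives overdispersion.

```python
import math

def com_poisson_pmf(lam, nu, kmax=300):
    """Truncated, numerically normalized COM-Poisson pmf:
    P(k) proportional to lam**k / (k!)**nu, computed in log space."""
    logw = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(kmax + 1)]
    mx = max(logw)
    w = [math.exp(v - mx) for v in logw]
    z = sum(w)
    return [x / z for x in w]

def mean_var(pmf):
    m = sum(k * p for k, p in enumerate(pmf))
    return m, sum((k - m) ** 2 * p for k, p in enumerate(pmf))

m_eq, v_eq = mean_var(com_poisson_pmf(4.0, 1.0))        # nu = 1: ordinary Poisson
m_under, v_under = mean_var(com_poisson_pmf(4.0, 2.0))  # nu > 1: underdispersed
m_over, v_over = mean_var(com_poisson_pmf(4.0, 0.5))    # nu < 1: overdispersed
```

This single two-parameter family covering both dispersion regimes is what lets the proposed GLM handle data that the negative binomial (overdispersion only) cannot.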
Ultra-soft 100 nm thick zero Poisson's ratio film with 60% reversible compressibility
NASA Astrophysics Data System (ADS)
Nguyen, Chieu; Szalewski, Steve; Saraf, Ravi
2013-03-01
Squeezing films of most solids, liquids and granular materials causes dilation in the lateral dimension, which is characterized by a positive Poisson's ratio. Auxetic materials, such as special foams, crumpled graphite, zeolites, the spectrin/actin membrane, and carbon nanotube laminates, shrink instead; their Poisson's ratio is negative. As a result of the Poisson effect, the force required to squeeze an amorphous material, such as a viscous thin-film coating adhered to a rigid surface, increases more than a millionfold as the thickness decreases from 10 μm to 100 nm, owing to the constraint on lateral deformation and off-plane relaxation. We demonstrate ultra-soft, 100 nm films of a polymer/nanoparticle composite adhered to 1.25 cm diameter glass that can be reversibly squeezed to over 60% strain between rigid plates at (very) low stresses, below 100 kPa. Unlike in non-zero Poisson's ratio materials, the stiffness decreases with thickness, and the stress distribution is uniform over the film, as mapped electro-optically. The high deformability at very low stresses is explained by considering the reentrant cellular structure found in cork and in the wings of beetles, which have a Poisson's ratio near zero.
A Novel Method for the Accurate Evaluation of Poisson's Ratio of Soft Polymer Materials
Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S.; Kang, Dong-Joong; Park, Sungchan
2013-01-01
A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has the advantage of evaluating Poisson's ratio accurately despite misalignment between the specimen and the experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6–47.0% elongations, with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio, with its simple measurement system, can be incorporated into a real-time automated vision-tracking system to accurately evaluate the material properties of various soft materials. PMID:23737733
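The strain-ratio step of such a method can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it fits a homogeneous 2D deformation gradient to four tracked vertices by least squares, forms the small-strain tensor, and takes -eps_min/eps_max as Poisson's ratio under uniaxial tension.

```python
import math

def strain_from_vertices(X, x):
    """Least-squares fit of x ~= F @ X + c over matched vertex lists,
    then principal values of the small-strain tensor (max, min)."""
    n = len(X)
    cX = [sum(p[i] for p in X) / n for i in (0, 1)]
    cx = [sum(p[i] for p in x) / n for i in (0, 1)]
    A = [[0.0, 0.0], [0.0, 0.0]]  # sum dX dX^T
    B = [[0.0, 0.0], [0.0, 0.0]]  # sum dx dX^T
    for P, p in zip(X, x):
        dX = [P[0] - cX[0], P[1] - cX[1]]
        dx = [p[0] - cx[0], p[1] - cx[1]]
        for i in range(2):
            for j in range(2):
                A[i][j] += dX[i] * dX[j]
                B[i][j] += dx[i] * dX[j]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
    F = [[sum(B[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    # Small-strain tensor eps = (F + F^T)/2 - I
    e = [[(F[i][j] + F[j][i]) / 2 - (1.0 if i == j else 0.0) for j in range(2)] for i in range(2)]
    tr = e[0][0] + e[1][1]
    dt = e[0][0] * e[1][1] - e[0][1] * e[1][0]
    r = math.sqrt(max(tr * tr / 4 - dt, 0.0))
    return tr / 2 + r, tr / 2 - r

# Synthetic data: stretch 10% along x, contract 4.4% along y -> nu = 0.44
X0 = [(0, 0), (1, 0), (1, 1), (0, 1)]
x1 = [(1.10 * px, 0.956 * py) for px, py in X0]
emax, emin = strain_from_vertices(X0, x1)
nu_est = -emin / emax
```

Because the principal directions come from the fitted strain tensor itself, the estimate is insensitive to a rigid rotation of the specimen, mirroring the misalignment robustness the abstract claims.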
Species abundance in a forest community in South China: A case of poisson lognormal distribution
Yin, Z.-Y.; Ren, H.; Zhang, Q.-M.; Peng, S.-L.; Guo, Q.-F.; Zhou, G.-Y.
2005-01-01
Case studies on Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and two Poisson lognormal parameters (μ and σ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, the σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ should be an alternative measure of diversity.
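The Poisson lognormal has no closed-form pmf, so any fitting method of the kind proposed must evaluate P(k) = E[Poisson(k; λ)] with log λ normal by numerical integration. A minimal sketch (illustrative parameter values, not the paper's fitted ones):

```python
import math

def poisson_lognormal_pmf(k, mu, sigma, npts=2001):
    """P(k) = integral of Poisson(k; exp(mu + sigma*z)) * phi(z) dz,
    evaluated by trapezoidal quadrature over the standard normal z."""
    lo, hi = -8.0, 8.0
    h = (hi - lo) / (npts - 1)
    total = 0.0
    for i in range(npts):
        z = lo + i * h
        lam = math.exp(mu + sigma * z)
        log_pois = k * math.log(lam) - lam - math.lgamma(k + 1)
        phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
        w = 0.5 if i in (0, npts - 1) else 1.0
        total += w * phi * math.exp(log_pois)
    return total * h

# Illustrative parameters; the lognormal mixing gives a heavy right tail
# (few abundant species) plus a spike of rare species.
pmf = [poisson_lognormal_pmf(k, mu=1.0, sigma=1.0) for k in range(200)]
```

The mean of this pmf should match the lognormal mean exp(mu + sigma^2/2), which is a convenient sanity check on the quadrature.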
Residuals and regression diagnostics: focusing on logistic regression.
Zhang, Zhongheng
2016-05-01
Up to now I have introduced most steps in regression model building and validation. The last step is to check whether there are observations that have a significant impact on model coefficients and specification. The article first describes plotting Pearson residuals against predictors. Such plots are helpful in identifying non-linearity and provide hints on how to transform predictors. Next, I focus on outlier, leverage and influence observations that may have a significant impact on model building. An outlier is an observation whose response value is unusual conditional on its covariate pattern. A leverage point is an observation whose covariate pattern lies far from the rest of the regressor space. Influence is the product of outlierness and leverage: when an influential observation is dropped from the model, there is a significant shift in the coefficients. Summary statistics for outliers, leverage and influence are studentized residuals, hat values and Cook's distance, respectively. They can be easily visualized with graphs and formally tested using the car package. PMID:27294091
Semiparametric regression during 2003–2007*
Ruppert, David; Wand, M.P.; Carroll, Raymond J.
2010-01-01
Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Regression Analysis by Example. 5th Edition
ERIC Educational Resources Information Center
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
Bayesian Unimodal Density Regression for Causal Inference
ERIC Educational Resources Information Center
Karabatsos, George; Walker, Stephen G.
2011-01-01
Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
Developmental Regression in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Rogers, Sally J.
2004-01-01
The occurrence of developmental regression in autism is one of the more puzzling features of this disorder. Although several studies have documented the validity of parental reports of regression using home videos, accumulating data suggest that most children who demonstrate regression also demonstrated previous, subtle, developmental differences.…
NASA Astrophysics Data System (ADS)
Masaki, Satoshi; Ogawa, Takayoshi
2015-12-01
In this paper, we study a dispersive Euler-Poisson system in two-dimensional Euclidean space. Our aim is to show unique existence and the zero-dispersion limit of the time-local weak solution. Since the dispersive structure is not available in the zero-dispersion limit, the lack of the critical embedding H^1 ↪ L^∞ becomes a bottleneck when reducing the regularity. We hence employ an estimate on the best constant of the Gagliardo-Nirenberg inequality. By this argument, a reasonable convergence rate for the zero-dispersion limit is deduced with a slight loss. We also consider the semiclassical limit problem of the Schrödinger-Poisson system in two dimensions.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA. PMID:23741284
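Inhomogeneous Poisson data of the kind this model segments can be generated by Lewis-Shedler thinning. The sketch below (illustrative intensities, not the paper's crime data) simulates a piecewise-constant-intensity process of the sort the LSBP prior is designed to recover:

```python
import random

def thinning(rng, lam, lam_max, T):
    """Lewis-Shedler thinning: draw a homogeneous Poisson process at the
    dominating rate lam_max, keep each point t with probability
    lam(t) / lam_max."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > T:
            return events
        if rng.random() < lam(t) / lam_max:
            events.append(t)

# Piecewise-constant intensity: 2 events/unit time on [0, 50),
# 8 events/unit time on [50, 100) -- two "segments" to be inferred.
lam = lambda t: 2.0 if t < 50.0 else 8.0
rng = random.Random(1)
events = thinning(rng, lam, lam_max=8.0, T=100.0)
n_low = sum(1 for t in events if t < 50.0)
n_high = len(events) - n_low
```

A segmentation model applied to `events` should recover the changepoint at t = 50 and the roughly fourfold intensity jump.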
Stationary response of multi-degree-of-freedom vibro-impact systems to Poisson white noises
NASA Astrophysics Data System (ADS)
Wu, Y.; Zhu, W. Q.
2008-01-01
The stationary response of multi-degree-of-freedom (MDOF) vibro-impact (VI) systems to random pulse trains is studied. The system is formulated as a stochastically excited and dissipated Hamiltonian system. The constraints are modeled as non-linear springs according to the Hertz contact law. The random pulse trains are modeled as Poisson white noises. The approximate stationary probability density function (PDF) for the response of MDOF dissipated Hamiltonian systems to Poisson white noises is obtained by solving the fourth-order generalized Fokker-Planck-Kolmogorov (FPK) equation using perturbation approach. As examples, two-degree-of-freedom (2DOF) VI systems under external and parametric Poisson white noise excitations, respectively, are investigated. The validity of the proposed approach is confirmed by using the results obtained from Monte Carlo simulation. It is shown that the non-Gaussian behaviour depends on the product of the mean arrival rate of the impulses and the relaxation time of the oscillator.
NASA Astrophysics Data System (ADS)
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
Resonant ultrasound spectroscopy of cylinders over the full range of Poisson's ratio
NASA Astrophysics Data System (ADS)
Jaglinski, Tim; Lakes, Roderic S.
2011-03-01
Mode structure maps for freely vibrating cylinders over a range of Poisson's ratio, ν, are desirable for the design and interpretation of experiments using resonant ultrasound spectroscopy (RUS). The full range of isotropic ν (-1 to +0.5) is analyzed here using a finite element method to accommodate materials with a negative Poisson's ratio. The fundamental torsional mode has the lowest frequency provided ν is between about -0.24 and +0.5. For any ν, the torsional mode can be identified utilizing the polarization sensitivity of the shear transducers. RUS experimental results for materials with Poisson's ratio +0.3, +0.16, and -0.3 and a previous numerical study for ν = 0.33 are compared with the present analysis. Interpretation of results is easiest if the length/diameter ratio of the cylinder is close to 1. Slight material anisotropy leads to splitting of the higher modes but not of the fundamental torsion mode.
NASA Astrophysics Data System (ADS)
Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro
2016-05-01
We analyze the influence of reflective boundary conditions on the statistics of Poisson-Kac diffusion processes, and specifically how they modify the Poissonian switching-time statistics. After addressing simple cases such as diffusion in a channel and the switching statistics in the presence of a polarization potential, we thoroughly study Poisson-Kac diffusion in fractal domains. Diffusion in fractal spaces neatly highlights how the modification in the switching-time statistics associated with reflections against a complex, fractal boundary induces new emergent features of Poisson-Kac diffusion, leading to a transition from regular behavior at shorter timescales to anomalous diffusion properties controlled by the walk dimensionality of the fractal set.
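A Poisson-Kac process is a particle moving at constant speed whose velocity direction flips at Poissonian switching times; a reflecting wall additionally reverses the direction on impact, which is the boundary effect studied in this abstract. Below is a minimal Monte Carlo sketch of such a process in a one-dimensional channel; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
c, lam, L = 1.0, 5.0, 1.0          # speed, switching rate, channel width
dt, steps, npart = 1e-3, 5000, 2000

x = rng.uniform(0.0, L, npart)     # particle positions
s = rng.choice([-1.0, 1.0], npart) # velocity directions
for _ in range(steps):
    # Poissonian switching: direction flips with probability lam*dt
    flip = rng.random(npart) < lam * dt
    s[flip] *= -1.0
    x += c * s * dt
    # reflecting walls fold the position back and reverse the direction,
    # perturbing the free-space switching-time statistics
    hi, lo = x > L, x < 0.0
    x[hi] = 2.0 * L - x[hi]; s[hi] *= -1.0
    x[lo] = -x[lo];          s[lo] *= -1.0
print(round(x.mean(), 2))          # stays near L/2 by symmetry
```

Replacing the flat channel walls with a fractal boundary is what produces the emergent anomalous-diffusion behavior described in the abstract.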
A special relation between Young's modulus, Rayleigh-wave velocity, and Poisson's ratio.
Malischewsky, Peter G; Tuan, Tran Thanh
2009-12-01
Bayon et al. [(2005). J. Acoust. Soc. Am. 117, 3469-3477] described a method for the determination of Young's modulus by measuring the Rayleigh-wave velocity and the ellipticity of Rayleigh waves, and found a peculiar almost linear relation between a non-dimensional quantity connecting Young's modulus, Rayleigh-wave velocity and density, and Poisson's ratio. The analytical reason for this special behavior remained unclear. It is demonstrated here that this behavior is a simple consequence of the mathematical form of the Rayleigh-wave velocity as a function of Poisson's ratio. The consequences for auxetic materials (those materials for which Poisson's ratio is negative) are discussed, as well as the determination of the shear and bulk moduli. PMID:20000895
Heterogeneous PVA hydrogels with micro-cells of both positive and negative Poisson's ratios.
Ma, Yanxuan; Zheng, Yudong; Meng, Haoye; Song, Wenhui; Yao, Xuefeng; Lv, Hexiang
2013-07-01
Many models describing the deformation of general foam or auxetic materials are based on the assumption of homogeneity and order within the materials. However, non-uniform heterogeneity is often inherent in porous materials and composites, yet difficult to measure. In this work, inspired by the structures of auxetic materials, porous PVA hydrogels with internal inby-concave pores (IICP) or interconnected pores (ICP) were designed and processed. The deformation of the PVA hydrogels under compression was tested and their Poisson's ratio was characterized. The results indicated that the size, shape, and distribution of the pores in the hydrogel matrix had a strong influence on the local Poisson's ratio, which varied from positive to negative at the micro-scale. The size dependency of the local Poisson's ratio reflected and quantified the uniformity and heterogeneity of the micro-porous structures in the PVA hydrogels. PMID:23648366
Pointwise estimates of solutions for the multi-dimensional bipolar Euler-Poisson system
NASA Astrophysics Data System (ADS)
Wu, Zhigang; Li, Yeping
2016-06-01
In the paper, we consider a multi-dimensional bipolar hydrodynamic model from semiconductor devices and plasmas. This system takes the form of Euler-Poisson with an electric field and frictional damping added to the momentum equations. By making a new analysis of the Green's functions for the Euler system with damping and the Euler-Poisson system with damping, we obtain pointwise estimates of the solution for the multi-dimensional bipolar Euler-Poisson system. As a by-product, we extend the decay rates of the densities ρ_i (i = 1, 2) from the usual L²-norm to the L^p-norm with p ≥ 1, and the time-decay rates of the momenta m_i (i = 1, 2) from the L²-norm to the L^p-norm with p > 1; all of the decay rates here are optimal.
Wang, Haiying; Zheng, Huiru; Azuaje, Francisco
2007-01-01
Serial analysis of gene expression (SAGE) is a powerful technique for global gene expression profiling, allowing simultaneous analysis of thousands of transcripts without prior structural and functional knowledge. Pattern discovery and visualization have become fundamental approaches to analyzing such large-scale gene expression data. From the pattern discovery perspective, clustering techniques have received great attention. However, due to the statistical nature of SAGE data (i.e., underlying distribution), traditional clustering techniques may not be suitable for SAGE data analysis. Based on the adaptation and improvement of Self-Organizing Maps and hierarchical clustering techniques, this paper presents two new clustering algorithms, namely, PoissonS and PoissonHC, for SAGE data analysis. Tested on synthetic and experimental SAGE data, these algorithms demonstrate several advantages over traditional pattern discovery techniques. The results indicate that, by incorporating statistical properties of SAGE data, PoissonS and PoissonHC, as well as a hybrid approach (neuro-hierarchical approach) based on the combination of PoissonS and PoissonHC, offer significant improvements in pattern discovery and visualization for SAGE data. Moreover, a user-friendly platform, which may improve and accelerate SAGE data mining, was implemented. The system is freely available on request from the authors for nonprofit use. PMID:17473311
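The abstract's key point, that clustering count data should respect Poisson rather than Gaussian statistics, can be sketched with a likelihood-ratio dissimilarity between two tag-count profiles. This is a generic Poisson-based distance for illustration, not necessarily the exact measure used inside PoissonS or PoissonHC.

```python
import numpy as np

def poisson_lr_distance(a, b):
    """Likelihood-ratio dissimilarity between two count profiles.

    H0: both profiles share per-tag Poisson means (the average);
    H1: each profile keeps its own means. The log(x!) terms cancel
    in the difference, so they are omitted.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    m = (a + b) / 2.0
    def ll(x, mu):
        mu = np.where(mu > 0, mu, 1e-12)  # guard log(0)
        return np.sum(x * np.log(mu) - mu)
    return 2.0 * (ll(a, a) + ll(b, b) - ll(a, m) - ll(b, m))

print(round(poisson_lr_distance([10, 0, 5], [9, 1, 6]), 2))   # similar pair
print(round(poisson_lr_distance([10, 0, 5], [0, 10, 5]), 2))  # dissimilar pair
```

Plugging such a dissimilarity into a self-organizing map or hierarchical clustering, in place of Euclidean distance, is the kind of adaptation the abstract describes.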
Ni, Wei; Ding, Guoyong; Li, Yifei; Li, Hongkai; Jiang, Baofa
2014-01-01
Background Xinxiang, a city in Henan Province, suffered from frequent floods due to persistent and heavy precipitation from 2004 to 2010. In the same period, dysentery was a common public health problem in Xinxiang, with the proportion of reported cases being the third highest among all notified infectious diseases. Objectives We focused on the dysentery consequences of floods of different degrees and examined the association between floods and the morbidity of dysentery on the basis of longitudinal data during the study period. Design A time-series Poisson regression model was used to examine the relationship between 10 floods of different degrees and the monthly morbidity of dysentery from 2004 to 2010 in Xinxiang. Relative risks (RRs) of moderate and severe floods on the morbidity of dysentery were calculated, and the attributable contributions of moderate and severe floods to the morbidity of dysentery were estimated. Results A total of 7591 cases of dysentery were notified in Xinxiang during the study period. The effect of floods on dysentery was shown with a 0-month lag. Regression analysis showed that the risk of moderate and severe floods on the morbidity of dysentery was 1.55 (95% CI: 1.42–1.67) and 1.74 (95% CI: 1.56–1.94), respectively. The attributable risk proportions (ARPs) of moderate and severe floods to the morbidity of dysentery were 35.53% and 42.48%, respectively. Conclusions This study confirms that floods significantly increased the risk of dysentery in the study area. In addition, severe floods had a higher proportional contribution to the morbidity of dysentery than moderate floods. Public health action should be taken to avoid and control the potential risk of dysentery epidemics after floods. PMID:25098726
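The study's core tool, a log-linear Poisson regression of monthly case counts on flood exposure, can be sketched with a small iteratively reweighted least squares (IRLS) fit. The counts below are simulated stand-ins rather than the Xinxiang data, and the single flood indicator is a simplification of the paper's lagged, multi-level exposure.

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Log-linear Poisson regression via iteratively reweighted least
    squares; returns coefficients with the intercept first."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())            # stable starting point
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu      # working response
        W = mu                            # Poisson variance = mean
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated monthly counts: baseline rate exp(3.0) ~ 20 cases/month;
# flood months multiply the rate by exp(0.5) ~ 1.65 (the "RR")
rng = np.random.default_rng(7)
flood = rng.integers(0, 2, 84).astype(float)   # 7 years of months
y = rng.poisson(np.exp(3.0 + 0.5 * flood)).astype(float)
b0, b1 = poisson_irls(flood, y)
print(round(np.exp(b1), 2))   # estimated rate ratio, close to 1.65
```

Exponentiating the flood coefficient recovers the relative risk, which is exactly how the RRs in the abstract are read off a Poisson regression.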
Estimating equivalence with quantile regression.
Cade, Brian S
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. PMID:21516905
Streamflow forecasting using functional regression
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.
2016-07-01
Streamflow, as a natural phenomenon, is continuous in time, and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of single points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They allow the whole process to be considered instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach, and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.
Harmonic regression and scale stability.
Lee, Yi-Hsuan; Haberman, Shelby J
2013-10-01
Monitoring a very frequently administered educational test with a relatively short history of stable operation imposes a number of challenges. Test scores usually vary by season, and the frequency of administration of such educational tests is also seasonal. Although it is important to react to unreasonable changes in the distributions of test scores in a timely fashion, it is not a simple matter to ascertain what sort of distribution is really unusual. Many commonly used approaches for seasonal adjustment are designed for time series with evenly spaced observations that span many years and, therefore, are inappropriate for data from such educational tests. Harmonic regression, a seasonal-adjustment method, can be useful in monitoring scale stability when the number of years available is limited and when the observations are unevenly spaced. Additional forms of adjustments can be included to account for variability in test scores due to different sources of population variations. To illustrate, real data are considered from an international language assessment. PMID:24092490
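Harmonic regression handles seasonality by regressing scores on sine and cosine terms of the annual cycle, which works even when observations are unevenly spaced and the history is short, the two obstacles the abstract names. The series below is hypothetical (the assessment data are not public); it sketches only the design-matrix idea.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical scale scores at unevenly spaced administration dates
# spanning only three years, with an annual seasonal cycle
t = np.sort(rng.uniform(0.0, 3.0, 120))            # time in years
y = 500 + 8*np.sin(2*np.pi*t) + 3*np.cos(2*np.pi*t) + rng.normal(0, 1, 120)

# Harmonic regression: sine/cosine regressors capture the seasonal
# cycle without requiring evenly spaced observations
X = np.column_stack([np.ones_like(t), np.sin(2*np.pi*t), np.cos(2*np.pi*t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = y - X[:, 1:] @ coef[1:]                 # seasonally adjusted scores
print(np.round(coef, 1))                           # near [500, 8, 3]
```

Shifts in the adjusted series, rather than the raw seasonal swings, are then the signal to monitor for scale instability.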
Measurement of Poisson's ratio of nonmetallic materials by laser holographic interferometry
NASA Astrophysics Data System (ADS)
Zhu, Jian T.
1991-12-01
By means of an off-axis collimated plane-wave coherent light arrangement and a pure-bending loading device, Poisson's ratio values of CFRP (carbon fiber-reinforced plastic plates, lay-up 0°, 90°), GFRP (glass fiber-reinforced plastic plates, radial direction), and PMMA (polymethyl methacrylate, x, y directions) have been measured. On the basis of this study, the ministry standard of the Ministry of Aeronautical Industry (Testing method for the measurement of Poisson's ratio of non-metallic materials by laser holographic interferometry) has been published. The measurement process is fast and simple, and the results are reliable and accurate.
On Poisson's ratio for metal matrix composite laminates. [aluminum boron composites
NASA Technical Reports Server (NTRS)
Herakovich, C. T.; Shuart, M. J.
1978-01-01
The definition of Poisson's ratio for nonlinear behavior of metal matrix composite laminates is discussed and experimental results for tensile and compressive loading of five different boron-aluminum laminates are presented. It is shown that there may be considerable difference in the value of Poisson's ratio as defined by a total strain or an incremental strain definition. It is argued that the incremental definition is more appropriate for nonlinear material behavior. Results from a (0) laminate indicate that the incremental definition provides a precursor to failure which is not evident if the total strain definition is used.
iAPBS: a programming interface to Adaptive Poisson-Boltzmann Solver (APBS).
Konecny, Robert; Baker, Nathan A; McCammon, J Andrew
2012-07-26
The Adaptive Poisson-Boltzmann Solver (APBS) is a state-of-the-art suite for performing Poisson-Boltzmann electrostatic calculations on biomolecules. The iAPBS package provides a modular programmatic interface to the APBS library of electrostatic calculation routines. The iAPBS interface library can be linked with a FORTRAN or C/C++ program, thus making all of the APBS functionality available from within the application. Several application modules for popular molecular dynamics simulation packages (Amber, NAMD, and CHARMM) are distributed with iAPBS, allowing users of these packages to perform implicit solvent electrostatic calculations with APBS. PMID:22905037
NASA Astrophysics Data System (ADS)
Watanabe, Hirofumi; Okiyama, Yoshio; Nakano, Tatsuya; Tanaka, Shigenori
2010-11-01
We developed the FMO-PB method, which incorporates solvation effects into the fragment molecular orbital (FMO) calculation via the Poisson-Boltzmann equation. This method retains good accuracy in energy calculations with reduced computational time. We calculated the solvation free energies for polyalanines, the Alpha-1 peptide, the tryptophan cage, and the complex of the estrogen receptor with 17β-estradiol to show the applicability of this method to practical systems. From the calculated results, it has been confirmed that the FMO-PB method is useful for large biomolecules in solution. We also discussed the electric charges which are used in solving the Poisson-Boltzmann equation.
Linear stability of stationary solutions of the Vlasov-Poisson system in three dimensions
Batt, J.; Rein, G.; Morrison, P. J.
1993-03-01
Rigorous results on the stability of stationary solutions of the Vlasov-Poisson system are obtained in both the plasma physics and stellar dynamics contexts. It is proven that stationary solutions in the plasma physics (stellar dynamics) case are linearly stable if they are decreasing (increasing) functions of the local, i.e. particle, energy. The main tool in the analysis is the free energy of the system, a conserved quantity. In addition, an appropriate global existence result is proven for the linearized Vlasov-Poisson system and the existence of stationary solutions that satisfy the above stability condition is established.
Dielectric Boundary Forces in Numerical Poisson-Boltzmann Methods: Theory and Numerical Strategies.
Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray
2011-10-01
Continuum modeling of electrostatic interactions based upon the numerical solutions of the Poisson-Boltzmann equation has been widely adopted in biomolecular applications. To extend their applications to molecular dynamics and energy minimization, robust and efficient methodologies to compute solvation forces must be developed. In this study, we have first reviewed the theory for the computation of dielectric boundary forces based on the definition of the Maxwell stress tensor. This is followed by a new formulation of the dielectric boundary force suitable for the finite-difference Poisson-Boltzmann methods. We have validated the new formulation with idealized analytical systems and realistic molecular systems. PMID:22125339
NASA Technical Reports Server (NTRS)
Sohn, J. L.; Heinrich, J. C.
1990-01-01
The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.
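As generic background to the pressure Poisson equation discussed above, here is a minimal finite-difference Jacobi sketch for a Poisson problem on the unit square with a manufactured solution. It illustrates only the elliptic solve itself, not the paper's penalty-function or least-squares machinery.

```python
import numpy as np

n = 41
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Manufactured solution p = sin(pi x) sin(pi y), so that the source
# term is f = laplacian(p) = -2 pi^2 p
p_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi**2 * p_exact

p = np.zeros((n, n))               # zero Dirichlet boundary values
for _ in range(5000):              # Jacobi sweeps on the 5-point stencil
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                            + p[1:-1, 2:] + p[1:-1, :-2]
                            - h * h * f[1:-1, 1:-1])
err = np.abs(p - p_exact).max()
print(round(err, 4))               # small O(h^2) discretization error
```

In a finite-element pressure recovery like the paper's, the right-hand side would instead come from velocity derivatives, but the elliptic solve plays the same role.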
Arnold, J.; Kosson, D.S.; Garrabrants, A.; Meeussen, J.C.L.; Sloot, H.A. van der
2013-02-15
A robust numerical solution of the nonlinear Poisson-Boltzmann equation for asymmetric polyelectrolyte solutions in discrete pore geometries is presented. Comparisons to the linearized approximation of the Poisson-Boltzmann equation reveal that the assumptions leading to linearization may not be appropriate for the electrochemical regime in many cementitious materials. Implications of the electric double layer on both partitioning of species and on diffusive release are discussed. The influence of the electric double layer on anion diffusion relative to cation diffusion is examined.
Improved blowup theorems for the Euler-Poisson equations with attractive forces
NASA Astrophysics Data System (ADS)
Li, Rui; Lin, Xing; Ma, Zongwei
2016-07-01
Our discussion here mainly focuses on the formation of singularities for solutions to the N-dimensional Euler-Poisson equations with attractive forces, in radial symmetry. Motivated by the integration method of Yuen, we prove two blow-up results under the conditions that the solutions have compact radius R(t) or have no compact support restriction, which generalize the ones Yuen obtained in 2011 [M. W. Yuen, "Blowup for the C1 solution of the Euler-Poisson equations of gaseous stars in RN," J. Math. Anal. Appl. 383, 627-633 (2011)].
Noise parameter estimation for poisson corrupted images using variance stabilization transforms.
Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo
2014-03-01
Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods. PMID:24723530
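The classical variance-stabilizing transform for Poisson data is the Anscombe transform, which maps counts to a signal whose variance is approximately 1 regardless of the underlying intensity. The demo below shows that stabilization property on synthetic counts; it is background for the abstract's approach, not a reproduction of the proposed estimator.

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson data:
    the output variance is ~1 whatever the (not too small) mean."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(3)
for lam in (5.0, 20.0, 100.0):
    z = rng.poisson(lam, 200000)
    # raw variance tracks the mean; stabilized variance is ~1
    print(round(z.var(), 1), round(anscombe(z).var(), 2))
```

Deviations of the stabilized variance from 1 are exactly the kind of signal a noise-parameter estimator can exploit.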
Reentrant Origami-Based Metamaterials with Negative Poisson's Ratio and Bistability.
Yasuda, H; Yang, J
2015-05-01
We investigate the unique mechanical properties of reentrant 3D origami structures based on the Tachi-Miura polyhedron (TMP). We explore the potential usage as mechanical metamaterials that exhibit tunable negative Poisson's ratio and structural bistability simultaneously. We show analytically and experimentally that the Poisson's ratio changes from positive to negative and vice versa during its folding motion. In addition, we verify the bistable mechanism of the reentrant 3D TMP under rigid origami configurations without relying on the buckling motions of planar origami surfaces. This study forms a foundation in designing and constructing TMP-based metamaterials in the form of bellowslike structures for engineering applications. PMID:26001009
The noncommutative Poisson bracket and the deformation of the family algebras
Wei, Zhaoting
2015-07-15
Family algebras were introduced by Kirillov in 2000. In this paper, we study the noncommutative Poisson bracket P on the classical family algebra C_τ(g). We show that P controls the first-order 1-parameter formal deformation from C_τ(g) to Q_τ(g), where the latter is the quantum family algebra. Moreover, we prove that the noncommutative Poisson bracket is in fact a Hochschild 2-coboundary, and therefore the deformation is infinitesimally trivial. In the last part of the paper, we discuss the relation between Mackey's analogue and the quantization problem of the family algebras.
Developmental regression in autism spectrum disorder
Al Backer, Nouf Backer
2015-01-01
The occurrence of developmental regression in autism spectrum disorder (ASD) is one of the most puzzling phenomena of this disorder. Little is known about the nature and mechanism of developmental regression in ASD. About one-third of young children with ASD lose some skills during the preschool period, usually speech, but sometimes nonverbal communication, social, or play skills as well. There is considerable evidence suggesting that most children who demonstrate regression also had previous, subtle, developmental differences. It is difficult to predict the prognosis of autistic children with developmental regression. It seems that the earlier development of social, language, and attachment behaviors followed by regression does not predict the later recovery of skills or better developmental outcomes. The underlying mechanisms that lead to regression in autism are unknown. The role of subclinical epilepsy in the developmental regression of children with autism remains unclear. PMID:27493417
A Survey of UML Based Regression Testing
NASA Astrophysics Data System (ADS)
Fahad, Muhammad; Nadeem, Aamer
Regression testing is the process of ensuring software quality by analyzing whether changed parts behave as intended and unchanged parts are not affected by the modifications. Since it is a costly process, many techniques have been proposed in the research literature to help testers build a regression test suite from an existing test suite at minimum cost. In this paper, we discuss the advantages and drawbacks of using UML diagrams for regression testing and show that UML models help in identifying changes for regression test selection effectively. We survey the existing UML-based regression testing techniques and provide an analysis matrix to give a quick insight into the prominent features of the literature. We also discuss open research issues, such as managing and reducing the size of the regression test suite and prioritizing test cases under a strict schedule and limited resources, that remain to be addressed for UML-based regression testing.
Wing, S; Richardson, D
2005-01-01
Background: Studies of workers at the plutonium production factory in Hanford, WA have led to conflicting conclusions about the role of age at exposure as a modifier of associations between ionising radiation and cancer. Aims: To evaluate the influence of age at exposure on radiation risk estimates in an updated follow up of Hanford workers. Methods: A cohort of 26 389 workers hired between 1944 and 1978 was followed through 1994 to ascertain vital status and causes of death. External radiation dose estimates were derived from personal dosimeters. Poisson regression was used to estimate associations between mortality and cumulative external radiation dose at all ages, and in specific age ranges. Results: A total of 8153 deaths were identified, 2265 of which included cancer as an underlying or contributory cause. Estimates of the excess relative risk per Sievert (ERR/Sv) for cumulative radiation doses at all ages combined were negative for all cause and leukaemia and positive for all cancer and lung cancer. Cumulative doses accrued at ages below 35, 35–44, and 45–54 showed little association with mortality. For cumulative dose accrued at ages 55 and above (10 year lag), the estimated ERR/Sv for all cancers was 3.24 (90% CI: 0.80 to 6.17), primarily due to an association with lung cancer (ERR/Sv: 9.05, 90% CI: 2.96 to 17.92). Conclusions: Associations between radiation and cancer mortality in this cohort are primarily a function of doses at older ages and deaths from lung cancer. The association of older age radiation exposures and cancer mortality is similar to observations from several other occupational studies. PMID:15961623
Age-Based Methods to Explore Time-Related Variables in Occupational Epidemiology Studies
Janice P. Watkins, Edward L. Frome, Donna L. Cragle
2005-08-31
Although age is recognized as the strongest predictor of mortality in chronic disease epidemiology, a calendar-based approach is often employed when evaluating time-related variables. An age-based analysis file, created by determining the value of each time-dependent variable for each age that a cohort member is followed, provides a clear definition of age at exposure and allows development of diverse analytic models. To demonstrate methods, the relationship between cancer mortality and external radiation was analyzed with Poisson regression for 14,095 Oak Ridge National Laboratory workers. Based on previous analysis of this cohort, a model with ten-year lagged cumulative radiation doses partitioned by receipt before (dose-young) or after (dose-old) age 45 was examined. Dose-response estimates were similar to calendar-year-based results with elevated risk for dose-old, but not when film badge readings were weekly before 1957. Complementary results showed increasing risk with older hire ages and earlier birth cohorts, since workers hired after age 45 were born before 1915, and dose-young and dose-old were distributed differently by birth cohorts. Risks were generally higher for smoking-related than non-smoking-related cancers. It was difficult to single out specific variables associated with elevated cancer mortality because of: (1) birth cohort differences in hire age and mortality experience completeness, and (2) time-period differences in working conditions, dose potential, and exposure assessment. This research demonstrated the utility and versatility of the age-based approach.
A Negative Binomial Regression Model for Accuracy Tests
ERIC Educational Resources Information Center
Hung, Lai-Fa
2012-01-01
Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
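The equidispersion property the abstract turns on, and the overdispersion that breaks it, can be checked directly on simulated counts: a gamma-mixed Poisson (i.e., negative binomial) keeps the same mean but inflates the variance. The parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)
# Equidispersed counts: a Poisson variable has variance equal to its mean
pois = rng.poisson(10.0, 50000)
# Overdispersed counts: a gamma-Poisson mixture (negative binomial)
# with the same mean 10 but variance 10 + 10**2/5 = 30
lam = rng.gamma(shape=5.0, scale=2.0, size=50000)
nb = rng.poisson(lam)

for name, v in (("poisson", pois), ("neg. binomial", nb)):
    # dispersion index var/mean: ~1 for Poisson, ~3 for this NB
    print(name, round(v.mean(), 1), round(v.var() / v.mean(), 1))
```

A dispersion index well above 1 is the empirical signal that a negative binomial model, as in the study, fits accuracy-test counts better than a Poisson.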
A Multiple Regression Approach to Normalization of Spatiotemporal Gait Features.
Wahid, Ferdous; Begg, Rezaul; Lythgo, Noel; Hass, Chris J; Halgamuge, Saman; Ackland, David C
2016-04-01
Normalization of gait data is performed to reduce the effects of intersubject variations due to physical characteristics. This study reports a multiple regression normalization approach for spatiotemporal gait data that takes into account intersubject variations in self-selected walking speed and physical properties including age, height, body mass, and sex. Spatiotemporal gait data including stride length, cadence, stance time, double support time, and stride time were obtained from healthy subjects including 782 children, 71 adults, 29 elderly subjects, and 28 elderly Parkinson's disease (PD) patients. Data were normalized using standard dimensionless equations, a detrending method, and a multiple regression approach. After normalization using dimensionless equations and the detrending method, weak to moderate correlations between walking speed, physical properties, and spatiotemporal gait features were observed (0.01 < |r| < 0.88), whereas normalization using the multiple regression method reduced these correlations to weak values (|r| < 0.29). Data normalization using dimensionless equations and detrending resulted in significant differences in stride length and double support time of PD patients; however, the multiple regression approach revealed significant differences in these features as well as in cadence, stance time, and stride time. The proposed multiple regression normalization may be useful in machine learning, gait classification, and clinical evaluation of pathological gait patterns. PMID:26426798
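The idea of regression-based normalization, dividing each gait feature by the value predicted from the subject's own covariates so that residual correlations with those covariates shrink, can be sketched on synthetic data. The covariates, coefficients, and noise level below are hypothetical, and only two covariates stand in for the paper's full set (speed, age, height, mass, sex).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
height = rng.normal(1.70, 0.10, n)       # m
speed = rng.normal(1.30, 0.15, n)        # self-selected speed, m/s
# Hypothetical stride length driven by height and speed plus noise
stride = 0.8 * height + 0.3 * speed + rng.normal(0.0, 0.05, n)

# Multiple regression normalization: divide each measurement by the
# value predicted from that subject's own covariates
X = np.column_stack([np.ones(n), height, speed])
coef, *_ = np.linalg.lstsq(X, stride, rcond=None)
stride_norm = stride / (X @ coef)

r_before = np.corrcoef(height, stride)[0, 1]
r_after = np.corrcoef(height, stride_norm)[0, 1]
print(round(r_before, 2), round(abs(r_after), 2))   # strong -> weak
```

The drop in |r| after normalization mirrors the paper's reduction from moderate correlations to |r| < 0.29.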
Regression analysis for solving diagnosis problem of children's health
NASA Astrophysics Data System (ADS)
Cherkashina, Yu A.; Gerget, O. M.
2016-04-01
The paper presents results of research devoted to the application of statistical techniques, namely regression analysis, to assess the health status of children in the neonatal period based on medical data (hemostatic parameters, blood test parameters, gestational age, vascular endothelial growth factor) measured at 3-5 days of life. A detailed description of the studied medical data is given, and a binary logistic regression procedure is discussed. Basic results of the research are presented: a classification table of predicted versus observed values is shown, and the overall percentage of correct recognition is determined. Regression equation coefficients are calculated, and the general regression equation is written based on them. Based on the results of the logistic regression, ROC analysis was performed; the sensitivity and specificity of the model are calculated and ROC curves are constructed. These mathematical techniques allow the diagnosis of children's health with a high quality of recognition. The results contribute to the development of evidence-based medicine and have high practical importance in the professional activity of the author.
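The pipeline the abstract describes, binary logistic regression followed by ROC analysis, can be sketched end to end on synthetic data. The "biomarker" below is hypothetical (it stands in for the clinical predictors), the fit uses plain gradient descent, and AUC is computed by its rank interpretation rather than from a library.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical neonatal screening data: one biomarker whose mean is
# shifted in the at-risk class (labels y = 1)
n = 400
y = rng.integers(0, 2, n)
marker = rng.normal(1.5 * y, 1.0)

# Binary logistic regression fitted by plain gradient descent
X = np.column_stack([np.ones(n), marker])
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n         # negative log-likelihood gradient

# ROC AUC: the probability that a positive case outscores a negative one
scores = X @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(round(auc, 2))                     # well above chance (0.5)
```

Thresholding the fitted probabilities at 0.5 would yield the classification table and the sensitivity/specificity pairs traced out by the ROC curve.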
ERIC Educational Resources Information Center
Parr, Jeremy R.; Le Couteur, Ann; Baird, Gillian; Rutter, Michael; Pickles, Andrew; Fombonne, Eric; Bailey, Anthony J.
2011-01-01
The characteristics of early developmental regression (EDR) were investigated in individuals with ASD from affected relative pairs recruited to the International Molecular Genetic Study of Autism Consortium (IMGSAC). Four hundred and fifty-eight individuals with ASD were recruited from 226 IMGSAC families. Regression before age 36 months occurred…
About solvability of some boundary value problems for Poisson equation in a ball
NASA Astrophysics Data System (ADS)
Koshanova, Maira D.; Usmanov, Kairat I.; Turmetov, Batirkhan Kh.
2016-08-01
In the present paper, we study properties of some integro-differential operators of fractional order. As an application of the properties of these operators for Poisson equation we examine questions on solvability of a fractional analogue of the Neumann problem and analogues of periodic boundary value problems for circular domains. The exact conditions for solvability of these problems are found.
Poisson ratio and excess low-frequency vibrational states in glasses
NASA Astrophysics Data System (ADS)
Duval, Eugène; Deschamps, Thierry; Saviot, Lucien
2013-08-01
In glass, starting from a dependence of the Angell's fragility on the Poisson ratio [V. N. Novikov and A. P. Sokolov, Nature 431, 961 (2004)], 10.1038/nature02947, and a dependence of the Poisson ratio on the atomic packing density [G. N. Greaves, A. L. Greer, R. S. Lakes, and T. Rouxel, Nature Mater. 10, 823 (2011)], 10.1038/nmat3134, we propose that the heterogeneities are predominantly density fluctuations in strong glasses (lower Poisson ratio) and shear elasticity fluctuations in fragile glasses (higher Poisson ratio). Because the excess of low-frequency vibration modes in comparison with the Debye regime (boson peak) is strongly connected to these fluctuations, we propose that they are breathing-like (with change of volume) in strong glasses and shear-like (without change of volume) in fragile glasses. As a verification, it is confirmed that the excess modes in the strong silica glass are predominantly breathing-like. Moreover, it is shown that the excess breathing-like modes in a strong polymeric glass are replaced by shear-like modes under hydrostatic pressure as the glass becomes more compact.
Measurement of Young's modulus and Poisson's ratio of human hair using optical techniques
NASA Astrophysics Data System (ADS)
Hu, Zhenxing; Li, Gaosheng; Xie, Huimin; Hua, Tao; Chen, Pengwan; Huang, Fenglei
2009-12-01
Human hair is a complex nanocomposite fiber whose physical appearance and mechanical strength are governed by a variety of factors like ethnicity, cleaning, grooming, chemical treatments and environment. Characterization of the mechanical properties of hair is essential to develop better cosmetic products and advance biological and cosmetic science. Hence the behavior of hair under tension is of interest to beauty care science. Human hair fibers experience tensile forces as they are groomed and styled. Previous research on the tensile testing of human hair has focused mainly on longitudinal properties such as elastic modulus, yield strength, breaking strength, and strain at break after different treatments. In this research, an experiment for evaluating the mechanical properties of human hair, namely Young's modulus and Poisson's ratio, was designed and conducted. The principle of the experimental instrument is presented and the testing system used to evaluate Young's modulus and Poisson's ratio is introduced. The range of Poisson's ratio for hair from the same person was evaluated. Experiments were conducted to test the mechanical properties after treating the hair with acid, aqueous alkali, and neutral solutions. Young's modulus and Poisson's ratio are interpreted based on these experimental results. These results can be useful for hair treatment and cosmetic product development.
Dual Poisson-Disk Tiling: an efficient method for distributing features on arbitrary surfaces.
Li, Hongwei; Lo, Kui-Yip; Leung, Man-Kang; Fu, Chi-Wing
2008-01-01
This paper introduces a novel surface-modeling method to stochastically distribute features on arbitrary topological surfaces. The generated distribution of features follows the Poisson disk distribution, so we can have a minimum separation guarantee between features and avoid feature overlap. With the proposed method, we not only can interactively adjust and edit features with the help of the proposed Poisson disk map, but can also efficiently re-distribute features on object surfaces. The underlying mechanism is our dual tiling scheme, known as the Dual Poisson-Disk Tiling. First, we compute the dual of a given surface parameterization, and tile the dual surface by our specially-designed dual tiles; during the pre-processing, the Poisson disk distribution has been pre-generated on these tiles. By dual tiling, we can nicely avoid the problem of corner heterogeneity when tiling arbitrary parameterized surfaces, and can also reduce the tile set complexity. Furthermore, the dual tiling scheme is non-periodic, and we can also maintain a manageable tile set. To demonstrate the applicability of this technique, we explore a number of surface-modeling applications: pattern and shape distribution, bump-mapping, illustrative rendering, mold simulation, the modeling of separable features in texture and BTF, and the distribution of geometric textures in shell space. PMID:18599912
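The minimum-separation property of a Poisson-disk distribution can be seen with naive dart throwing in the plane. This is not the paper's dual-tiling method, which targets arbitrary surfaces and interactive rates; it is only a sketch of the kind of distribution that method pre-generates on its tiles.

```python
import numpy as np

def dart_throw(radius, n_target, rng, max_tries=20000):
    """Naive dart-throwing Poisson-disk sampling in the unit square.
    Accept a candidate only if it lies at least `radius` from every
    accepted point, enforcing the minimum-separation guarantee."""
    pts = []
    tries = 0
    while len(pts) < n_target and tries < max_tries:
        tries += 1
        p = rng.random(2)
        if all(np.hypot(*(p - q)) >= radius for q in pts):
            pts.append(p)
    return np.array(pts)

rng = np.random.default_rng(1)
pts = dart_throw(radius=0.1, n_target=30, rng=rng)

# Every pair of samples respects the minimum distance.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
min_pair = d[np.triu_indices(len(pts), k=1)].min()
```

Dart throwing is quadratic in the number of samples and slows as the domain fills, which is exactly the cost that tile-based pre-generation schemes like the one in the paper avoid at run time.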
Continuum description of the Poisson's ratio of ligament and tendon under finite deformation.
Swedberg, Aaron M; Reese, Shawn P; Maas, Steve A; Ellis, Benjamin J; Weiss, Jeffrey A
2014-09-22
Ligaments and tendons undergo volume loss when stretched along the primary fiber axis, which is evident by the large, strain-dependent Poisson's ratios measured during quasi-static tensile tests. Continuum constitutive models that have been used to describe ligament material behavior generally assume incompressibility, which does not reflect the volumetric material behavior seen experimentally. We developed a strain energy equation that describes large, strain dependent Poisson's ratios and nonlinear, transversely isotropic behavior using a novel method to numerically enforce the desired volumetric behavior. The Cauchy stress and spatial elasticity tensors for this strain energy equation were derived and implemented in the FEBio finite element software (www.febio.org). As part of this objective, we derived the Cauchy stress and spatial elasticity tensors for a compressible transversely isotropic material, which to our knowledge have not appeared previously in the literature. Elastic simulations demonstrated that the model predicted the nonlinear, upwardly concave uniaxial stress-strain behavior while also predicting a strain-dependent Poisson's ratio. Biphasic simulations of stress relaxation predicted a large outward fluid flux and substantial relaxation of the peak stress. Thus, the results of this study demonstrate that the viscoelastic behavior of ligaments and tendons can be predicted by modeling fluid movement when combined with a large Poisson's ratio. Further, the constitutive framework provides the means for accurate simulations of ligament volumetric material behavior without the need to resort to micromechanical or homogenization methods, thus facilitating its use in large scale, whole joint models. PMID:25134434
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
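For the decomposition problem described, one classical maximum-likelihood approach is the multiplicative Richardson-Lucy update, which keeps the estimated component rates non-negative. The sketch below uses assumed synthetic data and is not necessarily the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-component linear model: expected counts = A @ rates.
A = np.abs(rng.random((200, 2))) + 0.1     # known response matrix
true_rates = np.array([5.0, 12.0])
counts = rng.poisson(A @ true_rates)

# Multiplicative maximum-likelihood updates (Richardson-Lucy style):
# each iteration increases the Poisson likelihood and keeps the rate
# estimates non-negative.
x = np.ones(2)
for _ in range(500):
    x *= (A.T @ (counts / (A @ x))) / A.sum(axis=0)

est = x
```

A useful property of the fixed point is that the total predicted count matches the total observed count, which follows directly from the update equations.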
Lie-Poisson integrators for rigid body dynamics in the solar system
NASA Astrophysics Data System (ADS)
Touma, J.; Wisdom, J.
1994-03-01
The n-body mapping method of Wisdom & Holman (1991) is generalized to encompass rotational dynamics. The Lie-Poisson structure of rigid body dynamics is discussed. Integrators which preserve that structure are derived for the motion of a free rigid body and for the motion of rigid bodies interacting gravitationally with mass points.
Recovering doping profiles in semiconductor devices with the Boltzmann-Poisson model
NASA Astrophysics Data System (ADS)
Cheng, Yingda; Gamba, Irene M.; Ren, Kui
2011-05-01
We investigate numerically an inverse problem related to the Boltzmann-Poisson system of equations for transport of electrons in semiconductor devices. The objective of the (ill-posed) inverse problem is to recover the doping profile of a device, represented as a source function in the mathematical model, from its current-voltage characteristics. To reduce the degree of ill-posedness of the inverse problem, we parameterize the unknown doping profile function to limit the number of unknowns. We show by numerical examples that the reconstruction of a few low moments of the doping profile is possible when relatively accurate time-dependent or time-independent measurements are available, even though the latter reconstruction is less accurate than the former. We also compare reconstructions from the Boltzmann-Poisson (BP) model to those from the classical drift-diffusion-Poisson (DDP) model, assuming that measurements are generated with the BP model. We show that the two types of reconstructions can be significantly different in regimes where the drift-diffusion-Poisson equation fails to model the physics accurately. However, when the noise present in the measured data is high, no difference in the reconstructions can be observed.
A Portrait of Poisson: A Fish out of Water Who Found His Calling.
ERIC Educational Resources Information Center
Geller, B.; Bruk, Y.
1991-01-01
Presents a brief historical sketch of the life and work of one of the founders of modern mathematical physics. Discusses three problem-solving applications of the Poisson distribution with examples from elementary probability theory. Provides background on two of his noteworthy results from the physics of oscillations and the deformation of rigid…
Updating a Classic: "The Poisson Distribution and the Supreme Court" Revisited
ERIC Educational Resources Information Center
Cole, Julio H.
2010-01-01
W. A. Wallis studied vacancies in the US Supreme Court over a 96-year period (1837-1932) and found that the distribution of the number of vacancies per year could be characterized by a Poisson model. This note updates this classic study.
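The kind of check Wallis performed can be sketched in a few lines: estimate the Poisson rate as the sample mean and compare observed with expected year counts. The tallies below are hypothetical numbers chosen for illustration, not Wallis's actual vacancy data.

```python
import math

# Hypothetical tally (not Wallis's data): number of years in which
# k vacancies occurred, over a 96-year span.
years_with_k = {0: 59, 1: 27, 2: 9, 3: 1}
n_years = sum(years_with_k.values())
total_vacancies = sum(k * v for k, v in years_with_k.items())
lam = total_vacancies / n_years          # maximum-likelihood rate estimate

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Expected number of years with k vacancies under the Poisson model,
# to be compared against the observed tallies.
expected = {k: n_years * poisson_pmf(k, lam) for k in years_with_k}
```

With these illustrative tallies the estimated rate is 0.5 vacancies per year, and the expected counts track the observed ones closely, which is the qualitative pattern the original study reported.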
The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments
ERIC Educational Resources Information Center
Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.
2008-01-01
Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…
ERIC Educational Resources Information Center
Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David
2012-01-01
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
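The five-point scheme the article builds on can be sketched with a plain Jacobi iteration on the unit square. This shows only the discretization, not the article's three-grid error-control algorithm; the manufactured solution is an assumption for the sketch.

```python
import numpy as np

# Solve the Poisson equation  del^2 u = f  on the unit square with
# u = 0 on the boundary, using the five-point stencil and Jacobi iteration.
n = 33                      # grid points per side
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Choose f so the exact solution is known:
# u_exact = sin(pi x) sin(pi y)  =>  f = -2 pi^2 u_exact.
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi**2 * u_exact

u = np.zeros((n, n))
for _ in range(3000):
    # Jacobi sweep: each interior point becomes the average of its four
    # neighbours minus h^2/4 times the source term.
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2]
                            - h**2 * f[1:-1, 1:-1])

err = np.abs(u - u_exact).max()
```

The remaining error here is dominated by the O(h^2) discretization error of the five-point stencil; estimating and controlling that error via grids of different resolution is precisely what the article's algorithm addresses.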
NASA Astrophysics Data System (ADS)
Ngai, K. L.; Wang, Li-Min; Liu, Riping; Wang, W. H.
2014-01-01
In metallic glasses a clear correlation had been established between plasticity or ductility and the Poisson's ratio νPoisson, or alternatively the ratio of the elastic bulk modulus to the shear modulus, K/G. Such a correlation between these two macroscopic mechanical properties is intriguing and is challenging to explain from the dynamics on a microscopic level. A recent experimental study has found a connection of ductility to the secondary β-relaxation in metallic glasses. The strain rate and temperature dependencies of the ductile-brittle transition are similar to those of the reciprocal of the secondary β-relaxation time, τβ. Moreover, a metallic glass is more ductile if the relaxation strength of the β-relaxation is larger and τβ is shorter. The findings indicate the β-relaxation is related to and instrumental for ductility. On the other hand, K/G or νPoisson is related to the effective Debye-Waller factor (i.e., the non-ergodicity parameter), f0, characterizing the dynamics of a structural unit inside a cage formed by other units, and manifested as the nearly constant loss shown in the frequency-dependent susceptibility. We make the connection of f0 to the non-exponentiality parameter n in the Kohlrausch stretched exponential correlation function of the structural α-relaxation, φ(t) = exp[-(t/τ_α)^(1-n)]. This connection follows from the fact that both f0 and n are determined by the inter-particle potential, and 1/f0 (or 1 - f0) and n both increase with the anharmonicity of the potential. A well-tested result from the Coupling Model is used to show that τβ is completely determined by τα and n. From the string of relations, (i) K/G or νPoisson with 1/f0 or (1 - f0), (ii) 1/f0 or (1 - f0) with n, and (iii) τα and n with τβ, we arrive at the desired relation between K/G or νPoisson and τβ. On combining this relation with that between ductility and τβ, we finally have an explanation of the empirical correlation between ductility and K/G or νPoisson.
Davie, Gabrielle; McElduff, Patrick; Connor, Jennie; Langley, John
2014-01-01
Objectives. We estimated the effects on assault rates of lowering the minimum alcohol purchasing age in New Zealand from 20 to 18 years. We hypothesized that the law change would increase assaults among young people aged 18 to 19 years (the target group) and those aged 15 to 17 years via illegal sales or alcohol supplied by older friends or family members. Methods. Using Poisson regression, we examined weekend assaults resulting in hospitalization from 1995 to 2011. Outcomes were assessed separately by gender among young people aged 15 to 17 years and those aged 18 to 19 years, with those aged 20 and 21 years included as a control group. Results. Relative to young men aged 20 to 21 years, assaults increased significantly among young men aged 18 to 19 years between 1995 and 1999 (the period before the law change), as well as the postchange periods 2003 to 2007 (incidence rate ratio [IRR] = 1.21; 95% confidence interval [CI] = 1.05, 1.39) and 2008 to 2011 (IRR = 1.20; 95% CI = 1.05, 1.37). Among boys aged 15 to 17 years, assaults increased during the postchange periods 1999 to 2003 (IRR = 1.28; 95% CI = 1.10, 1.49) and 2004 to 2007 (IRR = 1.25; 95% CI = 1.08, 1.45). There were no statistically significant effects among girls and young women. Conclusions. Lowering the minimum alcohol purchasing age increased weekend assaults resulting in hospitalization among young males 15 to 19 years of age. PMID:24922142
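The model class used in this study, log-linear Poisson regression summarized by incidence rate ratios, can be sketched with iteratively reweighted least squares on synthetic data. The binary exposure and the true IRR of 1.2 below are assumptions for the sketch, not the New Zealand hospitalization data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic counts: a binary exposure with a true incidence rate ratio of 1.2.
n = 5000
exposed = rng.integers(0, 2, n).astype(float)
mu = np.exp(2.0 + np.log(1.2) * exposed)       # log-linear mean model
y = rng.poisson(mu)

# Fit log-linear Poisson regression by iteratively reweighted least
# squares (Fisher scoring): weights equal the fitted means, since the
# Poisson variance equals the mean.
X = np.column_stack([np.ones(n), exposed])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    w = np.exp(eta)                            # fitted means = weights
    z = eta + (y - w) / w                      # working response
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))

irr = np.exp(beta[1])                          # incidence rate ratio
```

Exponentiating a coefficient turns it into an incidence rate ratio, which is how the IRRs with confidence intervals quoted in the abstract are obtained.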
Regression in schizophrenia and its therapeutic value.
Yazaki, N
1992-03-01
Using the regression evaluation scale, 25 schizophrenic patients were classified into three groups: Dissolution/autism (DAUG), Dissolution-attachment (DATG) and Non-regression (NRG). The regression of DAUG was of the type in which autism occurred when destructiveness emerged, while the regression of DATG was of the type in which attachment occurred when destructiveness emerged. This suggests that the regressive phenomena are an actualized form of the approach complex. In order to determine the factors distinguishing these two groups, I investigated psychiatric symptoms, mother-child relationships, premorbid personalities and therapeutic interventions. I believe that these factors form a continuity in which they interrelatedly determine the regressive state. Foremost among them, I stressed the importance of the mother-child relationship. PMID:1353128
Data Mining within a Regression Framework
NASA Astrophysics Data System (ADS)
Berk, Richard A.
Regression analysis can imply a far wider range of statistical procedures than often appreciated. In this chapter, a number of common Data Mining procedures are discussed within a regression framework. These include non-parametric smoothers, classification and regression trees, bagging, and random forests. In each case, the goal is to characterize one or more of the distributional features of a response conditional on a set of predictors.
LRGS: Linear Regression by Gibbs Sampling
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2016-02-01
LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
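The mechanics of Gibbs sampling for linear regression can be sketched in a far simpler setting than LRGS handles: one response, no measurement errors on the covariate, and flat priors, alternating the two conjugate conditional draws. This is an illustrative sketch of the sampling scheme, not the LRGS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: y = 1 + 2x + N(0, 0.5^2).
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), x])

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Gibbs sampler with flat priors: alternate the two conditional draws.
sigma2 = 1.0
draws = []
for it in range(3000):
    # beta | sigma2, y  ~  Normal(beta_hat, sigma2 * (X'X)^-1)
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # sigma2 | beta, y  ~  scaled inverse chi-square with n dof
    resid = y - X @ beta
    sigma2 = resid @ resid / rng.chisquare(n)
    if it >= 500:                              # discard burn-in
        draws.append([beta[0], beta[1], sigma2])

draws = np.array(draws)
post_mean = draws.mean(axis=0)                 # [intercept, slope, variance]
```

Each conditional is a standard distribution, so no tuning is needed; LRGS extends exactly this alternation to multiple responses, measurement errors in all quantities, and a Dirichlet-process covariate prior.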
Geodesic least squares regression on information manifolds
Verdoolaege, Geert
2014-12-05
We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.
Quantile regression applied to spectral distance decay
Rocchini, D.; Cade, B.S.
2008-01-01
Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
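The contrast between a mean trend and an upper-quantile trend can be sketched with heteroscedastic synthetic data, where the 0.9 quantile has a steeper slope than the median. The iteratively reweighted least-squares scheme below is a standard numerical approximation to the check-loss fit, an assumption for this sketch rather than the method used in the letter.

```python
import numpy as np

rng = np.random.default_rng(5)

# Heteroscedastic data: the spread grows with x, so the upper quantile
# rises faster than the mean trend.
n = 1000
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + x * rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])

def quantile_fit(X, y, tau, iters=200, eps=1e-6):
    """Linear quantile regression via iteratively reweighted least
    squares on the check (pinball) loss rho_tau(u) = u * (tau - 1[u<0]):
    each |u| term is approximated by a weighted quadratic."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        u = y - X @ beta
        w = np.where(u >= 0, tau, 1.0 - tau) / np.maximum(np.abs(u), eps)
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    return beta

beta_med = quantile_fit(X, y, 0.5)
beta_hi = quantile_fit(X, y, 0.9)

# The fitted 0.9-quantile line should leave roughly 90% of points below it.
frac_below = float(np.mean(y <= X @ beta_hi))
```

The fitted upper-quantile slope exceeds the median slope here, which mirrors the letter's finding that the OLS mean decay rate can be only half the rate seen in the upper quantiles.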
Image segmentation via piecewise constant regression
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Bovik, Alan C.
1994-09-01
We introduce a novel unsupervised image segmentation technique that is based on piecewise constant (PICO) regression. Given an input image, a PICO output image for a specified feature size (scale) is computed via nonlinear regression. The regression effectively provides the constant region segmentation of the input image that has a minimum deviation from the input image. PICO regression-based segmentation avoids the problems of region merging, poor localization, region boundary ambiguity, and region fragmentation. Additionally, our segmentation method is particularly well-suited for corrupted (noisy) input data. An application to segmentation and classification of remotely sensed imagery is provided.
Hybrid fuzzy regression with trapezoidal fuzzy data
NASA Astrophysics Data System (ADS)
Razzaghnia, T.; Danesh, S.; Maleki, A.
2011-12-01
This research deals with a method for hybrid fuzzy least-squares regression. The extension of symmetric triangular fuzzy coefficients to asymmetric trapezoidal fuzzy coefficients is considered an effective measure for removing unnecessary fuzziness from the linear fuzzy model. First, a trapezoidal fuzzy variable is applied to derive a bivariate regression model. Next, normal equations are formulated to solve for the four parts of the hybrid regression coefficients. The model is then extended to multiple regression analysis. Finally, the method is compared with Y.-H. O. Chang's model.
Age Disparity in Palliative Radiation Therapy Among Patients With Advanced Cancer
Wong, Jonathan; Xu, Beibei; Yeung, Heidi N.; Roeland, Eric J.; Martinez, Maria Elena; Le, Quynh-Thu; Mell, Loren K.; Murphy, James D.
2014-09-01
Purpose/Objective: Palliative radiation therapy represents an important treatment option among patients with advanced cancer, although research shows decreased use among older patients. This study evaluated age-related patterns of palliative radiation use among an elderly Medicare population. Methods and Materials: We identified 63,221 patients with metastatic lung, breast, prostate, or colorectal cancer diagnosed between 2000 and 2007 from the Surveillance, Epidemiology, and End Results (SEER)-Medicare linked database. Receipt of palliative radiation therapy was extracted from Medicare claims. Multivariate Poisson regression analysis determined residual age-related disparity in the receipt of palliative radiation therapy after controlling for confounding covariates including age-related differences in patient and demographic covariates, length of life, and patient preferences for aggressive cancer therapy. Results: The use of radiation decreased steadily with increasing patient age. Forty-two percent of patients aged 66 to 69 received palliative radiation therapy. Rates of palliative radiation decreased to 38%, 32%, 24%, and 14% among patients aged 70 to 74, 75 to 79, 80 to 84, and over 85, respectively. Multivariate analysis found that confounding covariates attenuated these findings, although the decreased relative rate of palliative radiation therapy among the elderly remained clinically and statistically significant. On multivariate analysis, compared to patients 66 to 69 years old, those aged 70 to 74, 75 to 79, 80 to 84, and over 85 had a 7%, 15%, 25%, and 44% decreased rate of receiving palliative radiation, respectively (all P<.0001). Conclusions: Age disparity with palliative radiation therapy exists among older cancer patients. Further research should strive to identify barriers to palliative radiation among the elderly, and extra effort should be made to give older patients the opportunity to receive this quality of life-enhancing treatment at the end of life.
NASA Astrophysics Data System (ADS)
Wei, Zigen; Chen, Ling; Li, Zhiwei; Ling, Yuan; Li, Jing
2016-01-01
Eastern China comprises a complex amalgamation of geotectonic blocks of different ages and has undergone significant lithospheric modification during Meso-Cenozoic time. To better characterize its deep structure, we conducted H-κ stacking of receiver functions using teleseismic data collected from 1143 broadband stations and produced a unified and detailed map of Moho depth and average Poisson's ratio (σ) of eastern China. A coexistence of modified and preserved crust, generally in Airy-type isostatic equilibrium, was revealed in eastern China, which correlates well with regional geological and tectonic features. Crust is obviously thicker to the west of the North-South Gravity Lineament but exhibits complex variations in σ with an overall felsic to intermediate bulk crustal composition. Moho depth and σ values show striking differences compared to the surrounding areas in the rifts and tectonic boundary zones, where earthquakes usually occur. Systematic comparison of Moho depth and σ values demonstrated that there are both similarities and differences in the crustal structure among Northeast China, the North China Craton, South China, and the Qinling-Dabie Orogen, as well as among different areas within these blocks, which may result from their different evolutionary histories and strong tectonic-magmatic events since the Mesozoic. Using new data from dense temporary arrays, we observed a change of Moho depth by ∼3 km and of σ by ∼0.04 beneath the Tanlu Fault Zone and an alteration of Moho depth by ∼5 km and of σ by ∼0.05 beneath the Xuefeng Mountains. In addition, striking E-W differences in crustal structure occur across the Xuefeng Mountains: to the east, the Moho depth is overall <35 km and σ has values of <0.26; to the west, the Moho depth is generally >40 km and σ shows complex and large-range variation with values between 0.22 and 0.32. These, together with waveform inversion of receiver functions and SKS shear-wave splitting measurements
NASA Astrophysics Data System (ADS)
Michael, A. J.
2012-12-01
Detecting trends in the rate of sporadic events is a problem for earthquakes and other natural hazards such as storms, floods, or landslides. I use synthetic events to judge the tests used to address this problem in seismology and consider their application to other hazards. Recent papers have analyzed the record of magnitude ≥7 earthquakes since 1900 and concluded that the events are consistent with a constant rate Poisson process plus localized aftershocks (Michael, GRL, 2011; Shearer and Stark, PNAS, 2012; Daub et al., GRL, 2012; Parsons and Geist, BSSA, 2012). Each paper removed localized aftershocks and then used a different suite of statistical tests to test the null hypothesis that the remaining data could be drawn from a constant rate Poisson process. The methods include KS tests between event times or inter-event times and predictions from a Poisson process, the autocorrelation function on inter-event times, and two tests on the number of events in time bins: the Poisson dispersion test and the multinomial chi-square test. The range of statistical tests gives us confidence in the conclusions; which are robust with respect to the choice of tests and parameters. But which tests are optimal and how sensitive are they to deviations from the null hypothesis? The latter point was raised by Dimer (arXiv, 2012), who suggested that the lack of consideration of Type 2 errors prevents these papers from being able to place limits on the degree of clustering and rate changes that could be present in the global seismogenic process. I produce synthetic sets of events that deviate from a constant rate Poisson process using a variety of statistical simulation methods including Gamma distributed inter-event times and random walks. The sets of synthetic events are examined with the statistical tests described above. Preliminary results suggest that with 100 to 1000 events, a data set that does not reject the Poisson null hypothesis could have a variability that is 30% to
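One of the tests named above, the Poisson dispersion test on binned event counts, can be sketched on a synthetic constant-rate catalog. The rate and duration below are illustrative assumptions, not the global M ≥ 7 earthquake data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic catalog: constant-rate Poisson process, 8 events/yr for 100 yr.
rate, years = 8.0, 100
n_events = rng.poisson(rate * years)
event_times = np.sort(rng.uniform(0, years, n_events))

# Bin into years and compute the Poisson dispersion statistic: under the
# null hypothesis, (n - 1) * var/mean is approximately chi-square with
# n - 1 degrees of freedom.
counts = np.histogram(event_times, bins=years, range=(0, years))[0]
n = len(counts)
dispersion = (n - 1) * counts.var(ddof=1) / counts.mean()

# Normal approximation to the chi-square null: mean n-1, variance 2(n-1).
z = (dispersion - (n - 1)) / np.sqrt(2.0 * (n - 1))
```

Clustered synthetic catalogs (e.g., Gamma-distributed inter-event times) inflate the variance-to-mean ratio and push z away from zero, which is how the test's sensitivity to deviations from a constant-rate Poisson process can be probed.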
Deriving the Regression Equation without Using Calculus
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
2004-01-01
Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…
Regression Analysis and the Sociological Imagination
ERIC Educational Resources Information Center
De Maio, Fernando
2014-01-01
Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.
Illustration of Regression towards the Means
ERIC Educational Resources Information Center
Govindaraju, K.; Haslett, S. J.
2008-01-01
This article presents a procedure for generating a sequence of data sets which will yield exactly the same fitted simple linear regression equation y = a + bx. Unless rescaled, the generated data sets will have progressively smaller variability for the two variables, and the associated response and covariate will "regress" towards their…
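One simple way to generate such a sequence (a sketch under an assumed construction, not necessarily the authors' exact procedure) is to replace each response by the average of itself and its fitted value: the fitted line (a, b) is unchanged at every step, while the residual variability shrinks.

```python
import numpy as np

def fit(x, y):
    """Least-squares line y = a + b*x."""
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return y.mean() - b * x.mean(), b

rng = np.random.default_rng(1)
x = np.arange(10.0)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=10)

a0, b0 = fit(x, y)
for _ in range(3):                     # successive generated data sets
    y = 0.5 * (y + (a0 + b0 * x))      # average each y_i with its fitted value
    a, b = fit(x, y)                   # the fitted line never changes
```

Each step halves the residuals, so the points visibly "regress" toward the common line unless the data are rescaled.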
Stepwise versus Hierarchical Regression: Pros and Cons
ERIC Educational Resources Information Center
Lewis, Mitzi
2007-01-01
Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…
Cross-Validation, Shrinkage, and Multiple Regression.
ERIC Educational Resources Information Center
Hynes, Kevin
One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…
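The shrinkage phenomenon the paper reviews can be simulated directly: the squared multiple correlation computed on the sample that produced the weights exceeds its value on fresh data. The sample sizes and toy data below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 30, 15                        # few cases, many predictors: heavy shrinkage
X, X_new = rng.normal(size=(n, p)), rng.normal(size=(n, p))
beta = np.zeros(p)
beta[0] = 1.0                        # only one predictor truly matters
y = X @ beta + rng.normal(size=n)
y_new = X_new @ beta + rng.normal(size=n)

b_hat = np.linalg.lstsq(X, y, rcond=None)[0]

def r_squared(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

r2_fit = r_squared(y, X @ b_hat)          # computed on the fitting sample
r2_cv = r_squared(y_new, X_new @ b_hat)   # computed on fresh data: shrunken
```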
Principles of Quantile Regression and an Application
ERIC Educational Resources Information Center
Chen, Fang; Chalhoub-Deville, Micheline
2014-01-01
Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…
Regression Analysis: Legal Applications in Institutional Research
ERIC Educational Resources Information Center
Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.
2008-01-01
This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…
Dealing with Outliers: Robust, Resistant Regression
ERIC Educational Resources Information Center
Glasser, Leslie
2007-01-01
Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…
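One widely used median-based estimator of the kind the passage describes is Theil-Sen, whose slope is the median of all pairwise slopes and is therefore resistant to a few outliers; a sketch with one gross outlier (data made up here):

```python
import numpy as np
from scipy import stats

x = np.arange(10.0)
y = 3.0 + 2.0 * x
y[9] = 100.0                           # one gross outlier

ols_slope = np.polyfit(x, y, 1)[0]     # least squares: dragged by the outlier
ts_slope = stats.theilslopes(y, x)[0]  # median of pairwise slopes: resistant
```

With nine points exactly on y = 3 + 2x, the Theil-Sen slope stays at 2 while the least-squares slope is pulled well above it.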
A Practical Guide to Regression Discontinuity
ERIC Educational Resources Information Center
Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard
2012-01-01
Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…
Sulphasalazine and regression of rheumatoid nodules.
Englert, H J; Hughes, G R; Walport, M J
1987-03-01
The regression of small rheumatoid nodules was noted in four patients after starting sulphasalazine therapy. This coincided with an improvement in synovitis and also falls in erythrocyte sedimentation rate (ESR) and C reactive protein (CRP). The relation between the nodule regression and the sulphasalazine therapy is discussed. PMID:2883940
A Simulation Investigation of Principal Component Regression.
ERIC Educational Resources Information Center
Allen, David E.
Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
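A minimal principal-component-regression sketch on assumed toy data with two nearly collinear predictors: regressing on the leading component gives stable coefficients where ordinary regression entangles the two predictors' influences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=n)
# two nearly collinear predictors measuring almost the same quantity
X = np.column_stack([z + 0.01 * rng.normal(size=n),
                     z + 0.01 * rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

# principal components via SVD of the centered predictors
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                    # keep only the dominant component
scores = Xc @ Vt[:k].T
gamma = np.linalg.lstsq(scores, y - y.mean(), rcond=None)[0]
beta_pcr = Vt[:k].T @ gamma              # coefficients mapped back to X space
```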
Three-Dimensional Modeling in Linear Regression.
ERIC Educational Resources Information Center
Herman, James D.
Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…
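The weights the passage refers to are the solution of the normal equations, (XᵀX)β = Xᵀy; a minimal sketch with assumed toy data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + predictor
y = X @ np.array([1.0, 3.0]) + rng.normal(size=n)

# normal equations: weights minimizing the squared error term
beta = np.linalg.solve(X.T @ X, X.T @ y)
```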
Exercise in youth: High bone mass, large bone size, and low fracture risk in old age.
Tveit, M; Rosengren, B E; Nilsson, J Å; Karlsson, M K
2015-08-01
Physical activity is favorable for peak bone mass, but whether the skeletal benefits remain and influence fracture risk in old age is debated. In a cross-sectional controlled mixed model design, we compared dual X-ray absorptiometry-derived bone mineral density (BMD) and bone size in 193 active and retired male elite soccer players and 280 controls, with duplicate measurements of the same individual done a mean 5 years apart. To evaluate lifetime fractures, we used a retrospective controlled study design in 397 retired male elite soccer players and 1368 controls. Differences in bone traits were evaluated by Student's t-test and fracture risk assessments by Poisson regression and Cox regression. More than 30 years after retirement from sports, the soccer players had a Z-score for total body BMD of 0.4 (0.1 to 0.6), leg BMD of 0.5 (0.2 to 0.8), and femoral neck area of 0.3 (0.0 to 0.5). The rate ratio for fracture after career end was 0.6 (0.4 to 0.9) and for any fragility fracture 0.4 (0.2 to 0.9). Exercise-associated bone trait benefits are found long term after retirement from sports together with a lower fracture risk. This indicates that physical activity in youth could reduce the burden of fragility fractures. PMID:25109568
Diarrhea Prevalence, Care, and Risk Factors Among Poor Children Under 5 Years of Age in Mesoamerica.
Colombara, Danny V; Hernández, Bernardo; McNellan, Claire R; Desai, Sima S; Gagnier, Marielle C; Haakenstad, Annie; Johanns, Casey; Palmisano, Erin B; Ríos-Zertuche, Diego; Schaefer, Alexandra; Zúñiga-Brenes, Paola; Zyznieuski, Nicholas; Iriarte, Emma; Mokdad, Ali H
2016-03-01
Care practices and risk factors for diarrhea among impoverished communities across Mesoamerica are unknown. Using Salud Mesoamérica Initiative baseline data, collected 2011-2013, we assessed the prevalence of diarrhea, adherence to evidence-based treatment guidelines, and potential diarrhea correlates in poor and indigenous communities across Mesoamerica. This study surveyed 14,500 children under 5 years of age in poor areas of El Salvador, Guatemala, Mexico (Chiapas State), Nicaragua, and Panama. We compared diarrhea prevalence and treatment modalities using χ² tests and used multivariable Poisson regression models to calculate adjusted risk ratios (aRRs) and 95% confidence intervals (CIs) for potential correlates of diarrhea. The 2-week point prevalence of diarrhea was 13% overall, with significant differences between countries (P < 0.05). Approximately one-third of diarrheal children were given oral rehydration solution and less than 3% were given zinc. Approximately 18% were given much less to drink than usual or nothing to drink at all. Antimotility medication was given to 17% of diarrheal children, while antibiotics were inappropriately given to 36%. In a multivariable regression model, compared with children 0-5 months, those 6-23 months had a 49% increased risk for diarrhea (aRR = 1.49, 95% CI = 1.15, 1.95). Our results call for programs to examine and remedy low adherence to evidence-based treatment guidelines. PMID:26787152
A Fast Poisson Solver with Periodic Boundary Conditions for GPU Clusters in Various Configurations
NASA Astrophysics Data System (ADS)
Rattermann, Dale Nicholas
Fast Poisson solvers using the Fast Fourier Transform on uniform grids are especially suited for parallel implementation, making them appropriate for portability on graphical processing unit (GPU) devices. The goal of the following work was to implement, test, and evaluate a fast Poisson solver for periodic boundary conditions for use on a variety of GPU configurations. The solver used in this research was FLASH, an immersed-boundary-based method, which is well suited for complex, time-dependent geometries, has robust adaptive mesh refinement/de-refinement capabilities to capture evolving flow structures, and has been successfully implemented on conventional, parallel supercomputers. However, these solvers are still computationally costly to employ, and the total solver time is dominated by the solution of the pressure Poisson equation using state-of-the-art multigrid methods. FLASH improves the performance of its multigrid solvers by integrating a parallel FFT solver on a uniform grid during a coarse level. This hybrid solver could then be theoretically improved by replacing the highly-parallelizable FFT solver with one that utilizes GPUs, and, thus, was the motivation for my research. In the present work, the CPU-utilizing parallel FFT solver (PFFT) used in the base version of FLASH for solving the Poisson equation on uniform grids has been modified to enable parallel execution on CUDA-enabled GPU devices. New algorithms have been implemented to replace the Poisson solver that decompose the computational domain and send each new block to a GPU for parallel computation. One-dimensional (1-D) decomposition of the computational domain minimizes the amount of network traffic involved in this bandwidth-intensive computation by limiting the amount of all-to-all communication required between processes. Advanced techniques have been incorporated and implemented in a GPU-centric code design, while allowing end users the flexibility of parameter control at runtime in
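The uniform-grid FFT Poisson solve at the heart of this approach can be sketched in a few lines (a single-process NumPy illustration of the periodic-boundary case, not the FLASH/CUDA implementation): in Fourier space the equation becomes -|k|² φ̂ = ρ̂.

```python
import numpy as np

def poisson_fft_periodic(rho, L=2 * np.pi):
    """Solve laplacian(phi) = rho on a periodic [0, L)^2 grid; the source
    must have zero mean, and the zero mode of phi is set to zero."""
    n = rho.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid divide-by-zero at the zero mode
    phi_hat = -np.fft.fft2(rho) / k2
    phi_hat[0, 0] = 0.0                  # fix the arbitrary additive constant
    return np.real(np.fft.ifft2(phi_hat))

# check against the exact solution phi = sin(x)cos(y), whose Laplacian
# is rho = -2 sin(x)cos(y)
n = 64
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = poisson_fft_periodic(-2.0 * np.sin(X) * np.cos(Y))
```

Because the transform diagonalizes the Laplacian, the solve is a single forward/backward FFT pair, which is why this kernel parallelizes so well on GPUs.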
Regression of post-orthodontic lesions by a remineralizing cream.
Bailey, D L; Adams, G G; Tsao, C E; Hyslop, A; Escobar, K; Manton, D J; Reynolds, E C; Morgan, M V
2009-12-01
Orthodontic patients have an increased risk of white-spot lesion formation. A clinical trial was conducted to test whether, in a post-orthodontic population using fluoride toothpastes and receiving supervised fluoride mouthrinses, more lesions would regress in participants using a remineralizing cream containing casein phosphopeptide- amorphous calcium phosphate compared with a placebo. Forty-five participants (aged 12-18 yrs) with 408 white-spot lesions were recruited, with 23 participants randomized to the remineralizing cream and 22 to the placebo. Product was applied twice daily after fluoride toothpaste use for 12 weeks. Clinical assessments were performed according to ICDAS II criteria. Transitions between examinations were coded as progressing, regressing, or stable. Ninety-two percent of lesions were assessed as code 2 or 3. For these lesions, 31% more had regressed with the remineralizing cream than with the placebo (OR = 2.3, P = 0.04) at 12 weeks. Significantly more post-orthodontic white-spot lesions regressed with the remineralizing cream compared with a placebo over 12 weeks. PMID:19887683
Age and Sex Differences in Rates of Influenza-Associated Hospitalizations in Hong Kong.
Wang, Xi-Ling; Yang, Lin; Chan, Kwok-Hung; Chan, King-Pan; Cao, Pei-Hua; Lau, Eric Ho-Yin; Peiris, J S Malik; Wong, Chit-Ming
2015-08-15
Few studies have explored age and sex differences in the disease burden of influenza, although men and women probably differ in their susceptibility to influenza infections. In this study, quasi-Poisson regression models were applied to weekly age- and sex-specific hospitalization numbers of pneumonia and influenza cases in the Hong Kong SAR, People's Republic of China, from 2004 to 2010. Age and sex differences were assessed by age- and sex-specific rates of excess hospitalization for influenza A subtypes A(H1N1), A(H3N2), and A(H1N1)pdm09 and influenza B, respectively. We found that, in children younger than 18 years, boys had a higher excess hospitalization rate than girls, with the male-to-female ratio of excess rate (MFR) ranging from 1.1 to 2.4. MFRs of hospitalization associated with different types/subtypes were less than 1.0 for adults younger than 40 years except for A(H3N2) (MFR = 1.6), while all the MFRs were equal to or higher than 1.0 in adults aged 40 years or more except for A(H1N1)pdm09 in elderly persons aged 65 years or more (MFR = 0.9). No MFR was found to be statistically significant (P < 0.05) for hospitalizations associated with influenza type/subtype. There is some limited evidence on age and sex differences in hospitalization associated with influenza in the subtropical city of Hong Kong. PMID:26219977
The Association of Smoking and Surgery in Inflammatory Bowel Disease is Modified by Age at Diagnosis
Frolkis, Alexandra D; de Bruyn, Jennifer; Jette, Nathalie; Lowerison, Mark; Engbers, Jordan; Ghali, William; Lewis, James D; Vallerand, Isabelle; Patten, Scott; Eksteen, Bertus; Barnabe, Cheryl; Panaccione, Remo; Ghosh, Subrata; Wiebe, Samuel; Kaplan, Gilaad G
2016-01-01
Objectives: We assessed the association of smoking at diagnosis of inflammatory bowel disease (IBD) on the need for an intestinal resection. Methods: The Health Improvement Network was used to identify an inception cohort of Crohn's disease (n=1519) and ulcerative colitis (n=3600) patients from 1999–2009. Poisson regression explored temporal trends for the proportion of newly diagnosed IBD patients who never smoked before their diagnosis and the risk of surgery within 3 years of diagnosis. Cox proportional hazard models assessed the association between smoking and surgery, and effect modification was explored for age at diagnosis. Results: The rate of never smokers increased by 3% per year for newly diagnosed Crohn's disease patients (incidence rate ratio (IRR) 1.03; 95% confidence interval (CI): 1.02–1.05), but not for ulcerative colitis. The rate of surgery decreased among Crohn's disease patients aged 17–40 years (IRR 0.96; 95% CI: 0.93–0.98), but not for ulcerative colitis. Smoking at diagnosis increased the risk of surgery for Crohn's disease patients diagnosed after the age of 40 (hazard ratio (HR) 2.99; 95% CI: 1.52–5.92), but not for those diagnosed before age 40. Ulcerative colitis patients diagnosed between the ages of 17 and 40 years and who quit smoking before their diagnosis were more likely to undergo a colectomy (ex-smoker vs. never smoker: HR 1.66; 95% CI: 1.04–2.66). The age-specific findings were consistent across sensitivity analyses for Crohn's disease, but not ulcerative colitis. Conclusions: In this study, the association of smoking and surgical resection was dependent on the age at diagnosis of IBD. PMID:27101004
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
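The estimator itself is easy to state: minimize the Poisson negative log-likelihood Σᵢ(μᵢ - cᵢ ln μᵢ) over the model parameters. The sketch below uses a generic simplex minimizer on a synthetic decay histogram (assumed model and data), not the authors' modified Levenberg-Marquardt algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.arange(50, dtype=float)
counts = rng.poisson(200.0 * np.exp(-t / 10.0))   # Poisson-distributed bins

def neg_log_like(params):
    """Poisson negative log-likelihood (up to a constant) for an
    exponential-decay model mu(t) = amp * exp(-t / tau)."""
    amp, tau = params
    if amp <= 0.0 or tau <= 0.0:
        return np.inf                  # keep the search in the valid region
    mu = amp * np.exp(-t / tau)
    return np.sum(mu - counts * np.log(mu))

res = minimize(neg_log_like, x0=[100.0, 5.0], method="Nelder-Mead")
amp_hat, tau_hat = res.x
```

Unlike a least-squares fit, this objective weights low-count bins correctly, which is exactly the bias issue the abstract raises.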
A regression tree approach to identifying subgroups with differential treatment effects.
Loh, Wei-Yin; He, Xu; Man, Michael
2015-05-20
In the fight against hard-to-treat diseases such as cancer, it is often difficult to discover new treatments that benefit all subjects. For regulatory agency approval, it is more practical to identify subgroups of subjects for whom the treatment has an enhanced effect. Regression trees are natural for this task because they partition the data space. We briefly review existing regression tree algorithms. Then, we introduce three new ones that are practically free of selection bias and are applicable to data from randomized trials with two or more treatments, censored response variables, and missing values in the predictor variables. The algorithms extend the generalized unbiased interaction detection and estimation (GUIDE) approach by using three key ideas: (i) treatment as a linear predictor, (ii) chi-squared tests to detect residual patterns and lack of fit, and (iii) proportional hazards modeling via Poisson regression. Importance scores with thresholds for identifying influential variables are obtained as by-products. A bootstrap technique is used to construct confidence intervals for the treatment effects in each node. The methods are compared using real and simulated data. PMID:25656439
Developmental regression in ring chromosome 20 syndrome: A prion disease?
Aughton, D.J.
1994-09-01
Since 1972, the occurrence of r(20) has been described in at least 22 patients. In contrast to the relatively early-onset and nonprogressive developmental delay typical of chromosomal syndromes generally, the development of patients with r(20) is often normal for many months or even years, and developmental regression has been observed in at least 3 cases. Herein I present a further instance of developmental regression associated with r(20), and suggest that such regression may be due to disruption of function of the prion protein gene [PRNP], which has been mapped to 20pter-p12. The proposita was born at 33 weeks of gestation but had a relatively uncomplicated neonatal course; her early development was normal. By age 8-2/12 years, she appeared to have some cognitive deficits; by age 9-7/12 years, she was considered to have educable mental retardation, with a behavior disorder. On physical examination at age 9-8/12 years, her weight was between p10 and p25, and her head circumference was ca. p50. She had very mild coarseness and hirsutism, but was not dysmorphic. Extensive investigation was largely unremarkable; however, fragile X chromosome analysis at age 11-6/12 years showed a 46,XX,r(20) karyotype [fra(X) negative] in each of 50 cells examined. The maternal karyotype was mos46,XX/46,XX,r(20). Molecular analysis of PRNP is in progress. Rivers et al. reported a progressive neurological disorder associated with a telomeric fusion 15p;20p, and suggested that the disorder might be secondary to the presence of a pathogenic isoform of the prion protein. I suggest that a similar mechanism may be responsible for the neurodegeneration sometimes associated with r(20) syndrome. Molecular analysis of PRNP in patients with r(20) syndrome and, when possible, pathologic examination of central nervous system tissue of these patients will be helpful in further assessing this hypothesis.
Technology Transfer Automated Retrieval System (TEKTRAN)
In precision agriculture, regression has been widely used to quantify the relationship between soil attributes and other environmental variables. However, spatial correlation existing in soil samples usually makes the regression model suboptimal. In this study, a regression-kriging method was attemp...
Anderson, Craig L.
2009-01-01
Objectives. We estimated the effectiveness of child restraints in preventing death during motor vehicle collisions among children 3 years or younger. Methods. We conducted a matched cohort study using Fatality Analysis Reporting System data from 1996 to 2005. We estimated death risk ratios using conditional Poisson regression, bootstrapping, multiple imputation, and a sensitivity analysis of misclassification bias. We examined possible effect modification by selected factors. Results. The estimated death risk ratios comparing child safety seats with no restraint were 0.27 (95% confidence interval [CI] = 0.21, 0.34) for infants, 0.24 (95% CI = 0.19, 0.30) for children aged 1 year, 0.40 (95% CI = 0.32, 0.51) for those aged 2 years, and 0.41 (95% CI = 0.33, 0.52) for those aged 3 years. Estimated safety seat effectiveness was greater during rollover collisions, in rural environments, and in light trucks. We estimated seat belts to be as effective as safety seats in preventing death for children aged 2 and 3 years. Conclusions. Child safety seats are highly effective in reducing the risk of death during severe traffic collisions and generally outperform seat belts. Parents should be encouraged to use child safety seats in favor of seat belts. PMID:19059860
Regression modeling of ground-water flow
Cooley, R.L.; Naff, R.L.
1985-01-01
Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
Investigating bias in squared regression structure coefficients
Nimon, Kim F.; Zientek, Linda R.; Thompson, Bruce
2015-01-01
The importance of structure coefficients and analogs of regression weights for analysis within the general linear model (GLM) has been well-documented. The purpose of this study was to investigate bias in squared structure coefficients in the context of multiple regression and to determine if a formula that had been shown to correct for bias in squared Pearson correlation coefficients and coefficients of determination could be used to correct for bias in squared regression structure coefficients. Using data from a Monte Carlo simulation, this study found that squared regression structure coefficients corrected with Pratt's formula produced less biased estimates and might be more accurate and stable estimates of population squared regression structure coefficients than estimates with no such corrections. While our findings are in line with prior literature that identified multicollinearity as a predictor of bias in squared regression structure coefficients but not coefficients of determination, the findings from this study are unique in that the level of predictive power, number of predictors, and sample size were also observed to contribute bias in squared regression structure coefficients. PMID:26217273
Efficient Lie-Poisson Integrator for Secular Spin Dynamics of Rigid Bodies
NASA Astrophysics Data System (ADS)
Breiter, Sławomir; Nesvorný, David; Vokrouhlický, David
2005-09-01
A fast and efficient numerical integration algorithm is presented for the problem of the secular evolution of the spin axis. Under the assumption that a celestial body rotates around its maximum moment of inertia, the equations of motion are reduced to the Hamiltonian form with a Lie-Poisson bracket. The integration method is based on the splitting of the Hamiltonian function, and so it conserves the Lie-Poisson structure. Two alternative partitions of the Hamiltonian are investigated, and second-order leapfrog integrators are provided for both cases. Non-Hamiltonian torques can be incorporated into the integrators with a combination of Euler and Lie-Euler approximations. Numerical tests of the methods confirm their useful properties of short computation time and reliability on long integration intervals.
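The splitting idea can be illustrated on the free rigid body, whose Hamiltonian H = Σᵢ sᵢ²/(2Iᵢ) splits into three terms, each generating an exact rotation of the spin vector about a body axis; composing them gives a second-order leapfrog that conserves |s| (the Casimir of the Lie-Poisson bracket) exactly. The moments of inertia and initial spin below are assumed for illustration; this is not the secular spin model of the paper.

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])          # principal moments of inertia (assumed)

def rotate(s, axis, angle):
    """Exact rotation of s in the plane cyclically following `axis`."""
    i, j = [(1, 2), (2, 0), (0, 1)][axis]
    c, sn = np.cos(angle), np.sin(angle)
    out = s.copy()
    out[i] = c * s[i] - sn * s[j]
    out[j] = sn * s[i] + c * s[j]
    return out

def lie_poisson_step(s, dt):
    """Strang splitting of H = sum_i s_i**2 / (2 I_i): each sub-flow of the
    Euler equations ds/dt = s x (s/I) is an exact rotation about a body axis."""
    s = rotate(s, 0, -0.5 * dt * s[0] / I[0])
    s = rotate(s, 1, -0.5 * dt * s[1] / I[1])
    s = rotate(s, 2, -dt * s[2] / I[2])
    s = rotate(s, 1, -0.5 * dt * s[1] / I[1])
    s = rotate(s, 0, -0.5 * dt * s[0] / I[0])
    return s

s = np.array([0.5, 0.3, 0.8])
norm0, H0 = np.linalg.norm(s), 0.5 * np.sum(s**2 / I)
for _ in range(1000):
    s = lie_poisson_step(s, 0.01)
```

The spin norm is preserved to rounding error over arbitrarily long integrations, while the energy error stays bounded, the properties the abstract highlights.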
Beyond Poisson-Boltzmann: fluctuations and fluid structure in a self-consistent theory.
Buyukdagli, S; Blossey, R
2016-09-01
Poisson-Boltzmann (PB) theory is the classic approach to soft matter electrostatics and has been applied to numerous physical chemistry and biophysics problems. Its essential limitations are in its neglect of correlation effects and fluid structure. Recently, several theoretical insights have allowed the formulation of approaches that go beyond PB theory in a systematic way. In this topical review, we provide an update on the developments achieved in the self-consistent formulations of correlation-corrected Poisson-Boltzmann theory. We introduce a corresponding system of coupled non-linear equations for both continuum electrostatics with a uniform dielectric constant, and a structured solvent (a dipolar Coulomb fluid), including non-local effects. While the approach is only approximate and also limited to corrections in the so-called weak fluctuation regime, it allows us to include physically relevant effects, as we show for a range of applications of these equations. PMID:27357125
Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks
NASA Astrophysics Data System (ADS)
Frahm, Klaus M.; Shepelyansky, Dima L.
2014-04-01
We use the methods of quantum chaos and Random Matrix Theory for analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by a logarithm of PageRank probability at a given node. After the standard energy level unfolding procedure we establish that the nearest spacing distribution of PageRank probabilities is described by the Poisson law typical for integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that the nearby PageRank probabilities fluctuate as random independent variables.
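The Poisson law for unfolded spacings, p(s) = exp(-s), can be checked on synthetic independent "levels" standing in for the log-PageRank values; the Gaussian level density below is an assumption for illustration, not the networks' actual spectra.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
levels = np.sort(rng.normal(size=5000))      # independent levels: no repulsion

# unfolding: map each level through the smooth integrated density so the
# transformed sequence has unit mean spacing
unfolded = levels.size * stats.norm.cdf(levels)
spacings = np.diff(unfolded)

# under the Poisson law the nearest-spacing distribution is exp(-s); the
# KS distance to the unit exponential should be small
ks_stat = stats.kstest(spacings, "expon").statistic
```

A Random Matrix Theory spectrum run through the same pipeline would instead show level repulsion, with p(s) vanishing at s = 0.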
Two-dimensional Green's function Poisson solution appropriate for cylindrical-symmetry simulations
Riley, M.E.
1998-04-01
This report describes the numerical procedure used to implement the Green's function method for solving the Poisson equation in two-dimensional (r,z) cylindrical coordinates. The procedure can determine the solution to a problem with any or all of the applied voltage boundary conditions, dielectric media, floating (insulated) conducting media, dielectric surface charging, and volumetric space charge. The numerical solution is reasonably fast, and the dimension of the linear problem to be solved is that of the number of elements needed to represent the surfaces, not the whole computational volume. The method of solution is useful in the simulation of plasma particle motion in the vicinity of complex surface structures as found in microelectronics plasma processing applications. This report is a stand-alone supplement to the previous Sandia Technical Report SAND98-0537 presenting the two-dimensional Cartesian Poisson solver.
On the canonical forms of the multi-dimensional averaged Poisson brackets
NASA Astrophysics Data System (ADS)
Maltsev, A. Ya.
2016-05-01
We consider here special Poisson brackets given by the "averaging" of local multi-dimensional Poisson brackets in the Whitham method. For brackets of this kind it is natural to ask about their canonical forms, which can be obtained after transformations preserving the "physical meaning" of the field variables. We show here that the averaged bracket can always be written in canonical form after a transformation of "Hydrodynamic Type" in the absence of annihilators of the initial bracket. In the general case, however, the situation is more complicated: as we show, the averaged bracket can then be transformed to a "pseudo-canonical" form only under some special ("physical") requirements on the initial bracket.
Statistical shape analysis using 3D Poisson equation-A quantitatively validated approach.
Gao, Yi; Bouix, Sylvain
2016-05-01
Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on the volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures. PMID:26874288
An empirical Bayesian and Buhlmann approach with non-homogenous Poisson process
NASA Astrophysics Data System (ADS)
Noviyanti, Lienda
2015-12-01
All general insurance companies in Indonesia have to adjust their current premium rates according to the maximum and minimum limit rates in the new regulation established by the Financial Services Authority (Otoritas Jasa Keuangan / OJK). In this research, we estimated premium rates by means of the Bayesian and the Buhlmann approach using historical claim frequency and claim severity for five risk groups. We assumed a Poisson-distributed claim frequency and a Normal-distributed claim severity. In particular, we used a non-homogeneous Poisson process for estimating the parameters of claim frequency. We found that the estimated premium rates are higher than the actual current rates. With regard to the OJK upper and lower limit rates, the estimates among the five risk groups vary; some fall inside the interval and some fall outside it.
Dynamics of a prey-predator system under Poisson white noise excitation
NASA Astrophysics Data System (ADS)
Pan, Shan-Shan; Zhu, Wei-Qiu
2014-10-01
The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment is modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution of the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε²s on ecosystem behavior is evaluated. The analytical result is confirmed by a corresponding Monte Carlo (MC) simulation.
Deformations of Poisson brackets and extensions of Lie algebras of contact vector fields
NASA Astrophysics Data System (ADS)
Ovsienko, V.; Roger, C.
1992-12-01
CONTENTS
Introduction
§1. Main theorems
Chapter I. Algebra
§2. Moyal deformations of the Poisson bracket and *-product on \mathbb R^{2n}
§3. Algebraic construction
§4. Central extensions
§5. Examples
Chapter II. Deformations of the Poisson bracket and *-product on an arbitrary symplectic manifold
§6. Formal deformations: definitions
§7. Graded Lie algebras as a means of describing deformations
§8. Cohomology computations and their consequences
§9. Existence of a *-product
Chapter III. Extensions of the Lie algebra of contact vector fields on an arbitrary contact manifold
§10. Lagrange bracket
§11. Extensions and modules of tensor fields
Appendix 1. Extensions of the Lie algebra of differential operators
Appendix 2. Examples of equations of Korteweg-de Vries type
References
Hidden Markov Models for Zero-Inflated Poisson Counts with an Application to Substance Use
DeSantis, Stacia M.; Bandyopadhyay, Dipankar
2011-01-01
Paradigms for substance abuse cue-reactivity research involve short term pharmacological or stressful stimulation designed to elicit stress and craving responses in cocaine-dependent subjects. It is unclear as to whether stress induced from participation in such studies increases drug-seeking behavior. We propose a 2-state Hidden Markov model to model the number of cocaine abuses per week before and after participation in a stress- and cue-reactivity study. The hypothesized latent state corresponds to ‘high’ or ‘low’ use. To account for a preponderance of zeros, we assume a zero-inflated Poisson model for the count data. Transition probabilities depend on the prior week’s state, fixed demographic variables, and time-varying covariates. We adopt a Bayesian approach to model fitting, and use the conditional predictive ordinate statistic to demonstrate that the zero-inflated Poisson hidden Markov model outperforms other models for longitudinal count data. PMID:21538455
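The zero-inflated Poisson emission distribution at the core of this model mixes a point mass at zero with an ordinary Poisson count. A minimal sketch in Python (the function name and parameterization are mine, not the authors' Bayesian fitting code):

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: with probability pi the count is a structural
    zero; otherwise it is drawn from a Poisson(lam) distribution."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson
```

Zero inflation raises P(0) above the plain Poisson value while rescaling the positive counts, which is what accommodates the preponderance of zero-use weeks.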
Dependent Neyman type A processes based on common shock Poisson approach
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Kadilar, Cem
2016-04-01
The Neyman type A process is used for describing clustered data, since the Poisson process is insufficient for clustered events. In a multivariate setting, there may be dependencies between multivariate Neyman type A processes. In this study, a dependent form of the Neyman type A process is considered under the common shock approach, and the joint probability function is derived for the dependent Neyman type A Poisson processes. An application to forest fires in Turkey is then given. The results show that the joint probability function of the dependent Neyman type A processes obtained in this study can be a good tool for assessing the probabilistic fit of the total number of burned trees in Turkey.
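A Neyman type A variate is a Poisson-stopped sum of Poisson counts, which is why it captures clustering that a plain Poisson process cannot: its variance exceeds its mean. A minimal simulation sketch, assuming the standard parameterization with cluster rate `lam` and within-cluster mean `mu` (names mine, not from the paper):

```python
import math
import random

def neyman_type_a(lam, mu, rng):
    """One Neyman type A variate: a Poisson(lam) number of clusters,
    each contributing an independent Poisson(mu) number of events."""
    def poisson(mean):
        # Knuth's multiplicative method for a Poisson draw
        limit = math.exp(-mean)
        count, prod = 0, 1.0
        while True:
            count += 1
            prod *= rng.random()
            if prod <= limit:
                return count - 1
    return sum(poisson(mu) for _ in range(poisson(lam)))
```

The theoretical mean is lam*mu and the variance lam*mu*(1+mu), so overdispersion is built in whenever mu > 0.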
On splitting methods for Schroedinger-Poisson and cubic nonlinear Schroedinger equations
NASA Astrophysics Data System (ADS)
Lubich, Christian
2008-12-01
We give an error analysis of Strang-type splitting integrators for nonlinear Schroedinger equations. For Schroedinger-Poisson equations with an H^4 -regular solution, a first-order error bound in the H^1 norm is shown and used to derive a second-order error bound in the L_2 norm. For the cubic Schroedinger equation with an H^4 -regular solution, first-order convergence in the H^2 norm is used to obtain second-order convergence in the L_2 norm. Basic tools in the error analysis are Lie-commutator bounds for estimating the local error and H^m -conditional stability for error propagation, where m=1 for the Schroedinger-Poisson system and m=2 for the cubic Schroedinger equation.
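For the cubic Schroedinger equation, a Strang splitting integrator alternates the exactly solvable nonlinear phase flow with the linear flow solved in Fourier space. A minimal periodic split-step sketch, assuming the sign convention i u_t = -u_xx + |u|^2 u (a common choice, not necessarily the one analyzed in the paper):

```python
import numpy as np

N, L = 128, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # Fourier wavenumbers

def strang_step(u, dt):
    """One Strang splitting step: half nonlinear flow, exact linear flow
    in Fourier space, then another half nonlinear flow."""
    u = u * np.exp(-0.5j * dt * np.abs(u) ** 2)            # nonlinear half step
    u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))  # linear full step
    return u * np.exp(-0.5j * dt * np.abs(u) ** 2)         # nonlinear half step
```

Both substeps are unitary, so the scheme conserves the discrete L_2 norm to machine precision, a useful sanity check on any implementation.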
User's guide for the POISSON/SUPERFISH Group of Codes
Menzel, M.T.; Stokes, H.K.
1987-01-01
The POISSON/SUPERFISH Group Codes are a set of programs written by Ronald Holsinger, with theoretical assistance from Klaus Halbach, to solve two distinct problems--the calculation of magnetostatic and electrostatic fields, and the computation of the resonant frequencies and fields in radio-frequency cavities--in a two-dimensional Cartesian or three-dimensional cylindrical geometry. These codes are widely used for the design of magnets and radio frequency cavities.
Existence of Rotating Planet Solutions to the Euler-Poisson Equations with an Inner Hard Core
NASA Astrophysics Data System (ADS)
Wu, Yilun
2016-01-01
The Euler-Poisson equations model rotating gaseous stars. Numerous efforts have been made to establish the existence and properties of the rotating star solutions. Recent interests in extrasolar planet structures require extension of the model to include an inner rocky core together with its own gravitational potential. In this paper, we discuss various extensions of the classical rotating star results to incorporate a solid core.
Instability conditions for some periodic BGK waves in the Vlasov-Poisson system
NASA Astrophysics Data System (ADS)
Pankavich, Stephen; Allen, Robert
2014-12-01
A one-dimensional, collisionless plasma given by the Vlasov-Poisson system is considered and the stability properties of periodic steady state solutions known as Bernstein-Greene-Kruskal (BGK) waves are investigated. Sufficient conditions are determined under which BGK waves are linearly unstable under perturbations that share the same period as the equilibria. It is also shown that such solutions cannot support a monotonically decreasing particle distribution function.
On rotating star solutions to the non-isentropic Euler-Poisson equations
NASA Astrophysics Data System (ADS)
Wu, Yilun
2015-12-01
This paper investigates rotating star solutions to the Euler-Poisson equations with a non-isentropic equation of state. As a first step, the equation for gas density with a prescribed entropy and angular velocity distribution is studied. The resulting elliptic equation is solved either by the method of sub and supersolutions or by a variational method, depending on the value of the adiabatic index. The reverse problem of determining angular velocity from gas density is also considered.
The accurate solution of Poisson's equation by expansion in Chebyshev polynomials
NASA Technical Reports Server (NTRS)
Haidvogel, D. B.; Zang, T.
1979-01-01
A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.
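The one-dimensional analogue of the matrix approach is easy to sketch: build a Chebyshev differentiation matrix on Gauss-Lobatto nodes, square it, and impose homogeneous Dirichlet conditions by deleting the boundary rows and columns. The sketch below assumes Trefethen-style node ordering and is an illustration, not the authors' two-dimensional alternating-direction code:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))  # negative sum trick for the diagonal
    return D, x

# Solve u'' = f with u(-1) = u(1) = 0, the 1D Dirichlet Poisson problem
n = 16
D, x = cheb(n)
D2 = (D @ D)[1:n, 1:n]        # drop boundary rows/cols to enforce u = 0 there
f = -2.0 * np.ones(n - 1)     # exact solution is u(x) = 1 - x^2
u = np.zeros(n + 1)
u[1:n] = np.linalg.solve(D2, f)
```

Because the exact solution is a low-degree polynomial, the spectral solve reproduces it to machine precision, in line with the accuracy advantage the abstract reports over finite differences.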
Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits
Vassiliev, Oleg N.
2012-07-15
Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and for variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
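Under the test assumption that both distributions are Poisson, the zero-hit (survival) probability has a closed form: a particle produces no hits with probability e^{-mu}, so the Poisson particle stream is effectively thinned by the factor 1 - e^{-mu}. A sketch comparing this with classical Poisson-hit survival (function names and example values are mine, not from the paper):

```python
import math

def survival_nonpoisson(lam, mu):
    """P(zero total hits) when the particle count is Poisson(lam) and each
    particle produces a Poisson(mu) number of hits: thin the particle
    stream by the per-particle hit probability 1 - exp(-mu)."""
    return math.exp(-lam * (1.0 - math.exp(-mu)))

def survival_poisson(mean_hits):
    """Classical single-hit survival with a Poisson(mean_hits) hit count."""
    return math.exp(-mean_hits)
```

With the same mean number of hits (lam*mu), the compound model predicts a larger zero-hit probability, i.e. higher survival, because the hit distribution is overdispersed.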
Renormalized perturbation theory - Vlasov-Poisson system, weak turbulence limit, and gyrokinetics
NASA Astrophysics Data System (ADS)
Zhang, Y. Z.; Mahajan, S. M.
1988-10-01
The self-consistency of the renormalized perturbation theory of Zhang and Mahajan (1985) is demonstrated by applying it to the Vlasov-Poisson system and showing that the theory has the correct weak turbulence limit. Energy conservation is proved to arbitrary high order for the electrostatic drift waves. The theory is applied to derive renormalized equations for a low-beta gyrokinetic system. Comparison of this theory with other current theories is presented.
Regression of altitude-produced cardiac hypertrophy.
NASA Technical Reports Server (NTRS)
Sizemore, D. A.; Mcintyre, T. W.; Van Liere, E. J.; Wilson, M. F.
1973-01-01
The rate of regression of cardiac hypertrophy with time has been determined in adult male albino rats. The hypertrophy was induced by intermittent exposure to simulated high altitude. The percentage hypertrophy was much greater (46%) in the right ventricle than in the left (16%). The regression could be adequately fitted to a single exponential function with a half-time of 6.73 plus or minus 0.71 days (90% CI). There was no significant difference in the rates of regression for the two ventricles.
L-moments under nuisance regression
NASA Astrophysics Data System (ADS)
Picek, Jan; Schindler, Martin
2016-06-01
The L-moments are analogues of the conventional moments and have similar interpretations. They are calculated using linear combinations of the expectations of ordered data. In practice, L-moments must usually be estimated from a random sample drawn from an unknown distribution, as a linear combination of order statistics. Jureckova and Picek (2014) showed that the averaged regression quantile is asymptotically equivalent to the location quantile. We therefore propose a generalization of L-moments in a model with nuisance regression, using the averaged regression quantiles.
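The first two sample L-moments can be computed from the ordered sample via probability-weighted moments. The sketch below uses the standard unbiased estimators (the location L-moment l1 is the sample mean; l2 measures spread), not the authors' nuisance-regression generalization:

```python
def sample_l_moments(data):
    """Unbiased estimates of the first two L-moments from a sample,
    via the probability-weighted moments b0 and b1."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    return b0, 2.0 * b1 - b0  # l1 = b0, l2 = 2*b1 - b0
```

For a sample from a symmetric distribution, l1 estimates the center and l2 equals half the expected absolute difference of two independent draws.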
Sparse Multivariate Regression With Covariance Estimation
Rothman, Adam J.; Levina, Elizaveta; Zhu, Ji
2014-01-01
We propose a procedure for constructing a sparse estimator of a multivariate regression coefficient matrix that accounts for correlation of the response variables. This method, which we call multivariate regression with covariance estimation (MRCE), involves penalized likelihood with simultaneous estimation of the regression coefficients and the covariance structure. An efficient optimization algorithm and a fast approximation are developed for computing MRCE. Using simulation studies, we show that the proposed method outperforms relevant competitors when the responses are highly correlated. We also apply the new method to a finance example on predicting asset returns. An R-package containing this dataset and code for computing MRCE and its approximation are available online. PMID:24963268
Spontaneous Regression of Primitive Merkel Cell Carcinoma
2015-01-01
Merkel cell carcinoma (MCC) is a rare, aggressive skin tumor that mainly occurs in the elderly and carries a generally poor prognosis. Like all skin cancers, its incidence is rising. Despite the poor prognosis, a few reports of spontaneous regression have been published. We describe the case of an 89-year-old male patient who presented with two MCC lesions of the scalp. Following biopsy, the lesions underwent complete regression, with no clinical evidence of residual tumor up to 24 months. The current knowledge of MCC and the other cases of spontaneous regression described in the literature are reviewed. PMID:26788270
A multiple regression equation for prediction of posthepatectomy liver failure.
Yamanaka, N; Okamoto, E; Kuwata, K; Tanaka, N
1984-01-01
This article reports a multiple regression equation for prediction of posthepatectomy liver failure. In phase I, using the correlations between 17 preoperative parameters (Xi) and the scored postoperative course (Y) of 36 previously hepatectomized patients, we proposed the following multiple regression equation: Y = -110 + 0.942 × resection rate (%) + 1.36 × ICG retention rate (%) + 1.17 × patient's age + 5.94 × ICG maximal removal rate (mg/kg/min). With this equation, the calculated Y values (prediction scores) revealed that the scores of the eight nonsurvivors with liver failure exceeded 50 points, while those of the 28 survivors were 50 points or less. In phase II, the relationship between early prognosis and the precalculated prediction score was prospectively found to be the same as that seen in phase I. These findings indicate that our formula is a useful prognostic index for prediction of posthepatectomy liver failure. PMID:6486915
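The reported equation is straightforward to evaluate. The sketch below transcribes the published coefficients; the example inputs are hypothetical illustration values, not patient data from the study:

```python
def prediction_score(resection_rate, icg_retention, age, icg_rmax):
    """Prediction score Y from the reported multiple regression equation.
    Inputs: resection rate (%), ICG retention rate (%), patient age (years),
    ICG maximal removal rate (mg/kg/min). Scores above 50 flagged the
    liver-failure nonsurvivors in the study."""
    return (-110.0 + 0.942 * resection_rate + 1.36 * icg_retention
            + 1.17 * age + 5.94 * icg_rmax)
```

For instance, a hypothetical 65-year-old with a 60% resection rate, 25% ICG retention and a removal rate of 0.5 mg/kg/min scores about 59.5, above the 50-point threshold.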
Optimal dispersion with minimized Poisson equations for non-hydrostatic free surface flows
NASA Astrophysics Data System (ADS)
Cui, Haiyang; Pietrzak, J. D.; Stelling, G. S.
2014-09-01
A non-hydrostatic shallow-water model is proposed to simulate wave propagation in situations where the ratio of the wave length to the water depth is small. It exploits a reduced-size stencil in the Poisson pressure solver to make the model less expensive in terms of memory and CPU time. We refer to this new technique as the minimized Poisson equations formulation. In the simplest case, when the method is applied to a two-layer model, the new model requires the same computational effort as depth-integrated non-hydrostatic models but provides a much better description of dispersive waves. To allow an easy implementation of the new method in depth-integrated models, the governing equations are transformed into a depth-integrated system in which the velocity difference serves as an extra variable. The non-hydrostatic shallow-water model with the minimized Poisson equations formulation produces good results in a series of numerical experiments, including a standing wave in a basin, a non-linear wave test, solitary wave propagation in a channel and wave propagation over a submerged bar.